Berman 4th Edition Test Bank Answers

Test Questions

About These Test Questions

This file contains draft questions for the chapters in Essential Statistics for Public Managers and Policy Analysts. They are in true/false format and can be given to students as part of their studying or exam preparation. Instructors are apt to prefer their own testing formats and hence may develop these items in ways that they believe are best suited for their own courses. I hope these questions help. They were developed based on the text rather than the Q&A in the workbook, Exercising Essential Statistics. We would like to hear from you regarding your testing methods.

—Evan Berman and XiaoHu Wang

Chapter 1: Why Statistics for Public Managers and Policy Analysts?

  1. Data are used for describing and analyzing problems.
  2. Data are used for describing policies and programs.
  3. Data are used for monitoring fraud and preventing progress.
  4. Data are used to improve program operations.
  5. Data are used for evaluating outcomes.
  6. Accreditation bodies such as the Network of Schools of Public Policy, Affairs, and Administration recognize the role of quantitative skills in ensuring that students acquire required competencies.
  7. The textbook discusses three competencies for data analysis.
  8. Managers do not need to know how to collect their own data.
  9. Managers and analysts will have to be familiar with data sources in their lines of business.
  10. Managers and analysts need to be able to analyze data and present the findings of their analysis.
  11. Technical skills are sufficient and essential to ensuring soundness of analysis.
  12. Statistics is the careful, systematic process of inquiry that leads to the discovery or interpretation of facts, behaviors, and theories.
  13. The textbook discusses four stages of proficiency.
  14. People at the “know nothing” stage of proficiency need to find useful examples of analysis that can help them.
  15. Sophisticated experts have found the right balance between the development of policies and programs and the use of objective data and analysis to further decision-making.
  16. There are three areas of ethical concern: (1) the integrity of purpose, (2) the integrity of the process of analysis and communication, and (3) the integrity of dealing with human subjects.
  17. Dual purposes means that analysts have hidden facts or changed data.
  18. Each year cases of scientific misconduct and fraud make headlines.
  19. Key ethical principles in research involving people are that their participation should be voluntary and based on informed consent.
  20. Institutional Review Boards oversee and regulate the quality of research.

Chapter 2: Research Design

SI = section introduction

  1. Research methodology is the science of methods for investigating phenomena (SI).
  2. The purpose of applied research is to develop new knowledge about phenomena such as problems, events, programs, or policies, and their relationships (SI).
  3. Research begins by asking questions (SI).
  4. Quantitative research methods involve the collection of data that can be analyzed using statistical methods (SI).
  5. Both quantitative and qualitative methods are indispensable in addressing questions of basic and applied research (SI).
  6. Research is fundamentally about establishing the nature of things.
  7. Variables are defined as empirically observable phenomena that vary.
  8. Attributes are defined as observable phenomena that do not vary.
  9. Descriptive analysis provides information about the nature of variables.
  10. Relationships involve specifying which variables are related to each other, and the ways in which they are related to each other.
  11. Relationships in social science are usually deterministic in nature.
  12. A single exception will normally disprove claims about relations in social science.
  13. Relationships also are distinguished as being either causal or associational.
  14. Distinguishing between independent and dependent variables is a cornerstone of research.
  15. Causation requires both (1) empirical (i.e., statistical) correlation and (2) a plausible cause-and-effect argument.
  16. A theory exists for just about every relationship in social science.
  17. Program evaluation involves three steps.
  18. Control variables are always dependent variables.
  19. Rival hypotheses are plausible counter explanations for relationships that are found.
  20. Classic experimental designs are widely used in public management and policy for determining the effect of new policies and programs.
  21. Statistics is the only way for dealing with rival hypotheses.
  22. If X causes Y (or in notation, X → Y), then X is called the dependent variable because it affects Y.
  23. Threats to external validity are defined as those that jeopardize the generalizability of study conclusions about program outcomes to other situations.
  24. Threats to internal validity are those that jeopardize the study conclusions about whether an intervention in fact caused a difference in the study population.

Chapter 3: Conceptualization and Measurement

  1. There are five types of measurement scales.
  2. Ordinal-level variables are usually considered to be continuous variables.
  3. Ill-defined categories are problematic for constructing scales.
  4. Variables are empirical, whereas concepts are abstract.
  5. Conceptualization is the practice of identifying different dimensions of a concept.
  6. Likert-type scales are a common type of continuous scale.
  7. We should develop scales that are incomplete, ambiguous, or overlapping in some way.
  8. Measurement validity simply means that something measures or reflects what it is intended to.
  9. Variables are abstract ideas that are observed indirectly, through concepts.
  10. The first step of concept measurement is operationalization.
  11. Analysts must justify their choices about the conceptualization and operationalization of study concepts.
  12. In conceptualization, no correct number of dimensions or variables exists, only bad or lacking ones.
  13. The theorem of the interchangeability of indicators states that most concepts have three to five dimensions.
  14. An index variable is a variable that combines the values of other variables into a single indicator or score.
  15. Index variables are also commonly used to empirically measure abstract concepts and multifaceted, encompassing phenomena.
  16. The term measurement variable refers to the (observed) variables that make up the index.
  17. Face validity means that index variables encompass a broad range of aspects.
  18. Comparison with external sources is sometimes called criterion (or external) validity.
  19. Cronbach α (also called α or measure α) is a statistical measure of internal validity.
  20. When one measurement variable has a missing value for an observation, a value of 0 is assigned to the index variable for that observation.
  21. Cronbach α values of .70 or higher are acceptable.
  22. Analysts are expected to use some strategy to assess measurement validity.
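
To accompany items 19–21 on Cronbach's α, the following is a minimal Python sketch of the usual computation of α from a set of measurement variables. The item scores are hypothetical, and Python is used here purely for illustration; it is not software assumed by the text.

  import numpy as np

  # Hypothetical scores on three measurement variables (rows = respondents); illustrative only.
  items = np.array([
      [4, 5, 4],
      [3, 3, 2],
      [5, 5, 4],
      [2, 3, 3],
      [4, 4, 5],
  ], dtype=float)

  k = items.shape[1]                              # number of measurement variables in the index
  item_variances = items.var(axis=0, ddof=1)      # variance of each measurement variable
  total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed index variable
  alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
  print(f"Cronbach's alpha = {alpha:.2f}")        # values of .70 or higher are generally acceptable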

Chapter 4: Measuring and Managing Performance: Present and Future

  1. Performance measurement provides a real-time assessment of what a program or policy is doing.
  2. Performance measurement is used sparingly.
  3. Performance measurement developed from program evaluation.
  4. Performance measurement provides a system of key indicators of program activities and performance.
  5. The logic model defines a way to describe relationships among resources, activities, and results.
  6. Professional associations provide performance measures that agencies should use.
  7. Inputs are sometimes measured as personnel costs.
  8. Activities are defined as the processes, events, technologies, and actions that a program undertakes with its resources, to produce results.
  9. Outcomes are defined as the immediate, direct results of program activities.
  10. Goals are measures of program activities.
  11. Short-term outcomes are the same as long-term outcomes.
  12. Effectiveness is defined as the unit cost to produce a good or service.
  13. Benchmarks are standards against which performance is measured.
  14. Peer organizations are those that are one step above the class of one’s organization.
  15. Some managers mistake workload ratios for efficiency measures.
  16. Equity measures are used to compare performance across different groups.
  17. A forecast is defined as a prediction about the future.
  18. Whereas planning discusses what the future will look like, forecasting provides a normative model of what the future should look like.
  19. Forecasting methods are generally distinguished by whether they are based on statistical analysis or on expert judgments.
  20. Forecasting should use multiple methods.
  21. The longer the forecasting period, the more certain it is.

Chapter 5: Data Collection

  1. There are many sources of administrative data.
  2. Traditionally, administrative data have had three purposes: (1) to ensure that resources are not misused, (2) to determine the status of the organization’s activities, and (3) to provide a record of what has been completed and accomplished.
  3. Performance measurement usually requires that additional administrative data be collected.
  4. Administrative data are sometimes (1) missing or incomplete, (2) inaccurately reported, or (3) subject to definitions that have changed over time.
  5. Administrative data are seldom used in program evaluation and forecasting.
  6. Secondary data are data that are collected for some other purpose.
  7. Secondary data are often used to describe communities and populations in statistical terms.
  8. Common surveys are those of citizens, clients, businesses, and employees.
  9. Phone surveys can ask many questions.
  10. Administrative data have few problems of validity.
  11. The results of focus groups are generalizable.
  12. Phone surveys are most common because they have lower response rates than mail surveys.
  13. A sample is a selection, such as of citizens, from an entire population.
  14. A representative sample is one that has characteristics similar to those of the population as a whole.
  15. The purpose of purposive samples is to generalize.
  16. The best way to draw a sample is to ensure that the demographic characteristics of the sample and the population are exactly the same.
  17. Generalizability means that we can generalize from the population to the sample.
  18. Random sampling is a method for drawing representative samples.
  19. The list from which a sample is drawn is called a sampling frame.
  20. Sampling error is the percentage by which survey results vary in 95 of 100 samples.
  21. Larger samples have larger sampling errors than smaller ones.
  22. Nonresponse bias is the extent to which views of nonrespondents differ from those of respondents.
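
Items 20 and 21 concern sampling error. A minimal Python sketch of the familiar 95% margin-of-error calculation for a sample proportion appears below; the sample sizes are hypothetical and Python is used only for illustration.

  import math

  def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
      """Sampling error (95% margin of error) for a sample proportion p with sample size n."""
      return z * math.sqrt(p * (1 - p) / n)

  # Illustrative only: p = 0.5 is the most conservative assumption for a proportion.
  for n in (100, 400, 1000):
      print(f"n = {n:4d}: +/- {margin_of_error(0.5, n) * 100:.1f} percentage points")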

Chapter 6: Central Tendency

SI = section introduction

  1. Data cleaning follows data input, which follows data coding (SI).
  2. Univariate analysis is the study of two or more variables (SI).
  3. Univariate, descriptive statistics are sometimes also called summary statistics (SI).
  4. Data coding is the process of preparing data for input into statistical software programs (SI).
  5. Data input is the process of identifying and removing reporting and recording errors (SI).
  6. It is common practice to assume that unexamined data usually contain various errors that must be identified and removed (SI).
  7. Summary statistics are used to highlight important features about unusual observations.
  8. The mean is often used when analyzing continuous variables.
  9. The mean and median should never be reported at the same time.
  10. The mean is a commonly used statistic involving nominal variables.
  11. The mean is defined as the sum of a series of observations, divided by the number of observations of an array.
  12. When data are missing, a common practice is to estimate their values and include them in data analysis.
  13. Problems pertaining to the data can complicate calculations of the mean.
  14. The median should always be used when a few very large or very small values affect estimates of the mean.
  15. The location of the median is half the distance from the observation with the average value.
  16. Examples of the median are common in demographic studies of income.
  17. The mode is defined as the most frequent (typical) value(s) of a variable.
  18. The mode is used with nominal-level data.
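
To illustrate items 11 and 14–18 on the mean, median, and mode, here is a minimal Python sketch with hypothetical income data; the single very large value shows why the median is often preferred in such cases. Python is used only for illustration.

  import statistics

  # Hypothetical household incomes (in $1,000s); illustrative only.
  incomes = [28, 31, 35, 35, 40, 42, 47, 55, 250]

  print("mean:  ", statistics.mean(incomes))    # pulled upward by the one very large value
  print("median:", statistics.median(incomes))  # middle observation; resistant to extreme values
  print("mode:  ", statistics.mode(incomes))    # the most frequent (typical) value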

Chapter 7: Measures of Dispersion

  1. The mean is an example of a measure of dispersion.
  2. Frequency distributions describe the range and frequency of a variable’s values.
  3. Frequency distributions are readily calculated by statistical software programs.
  4. A histogram is similar to a stem-and-leaf plot but differs in that it shows the number of observations in each category.
  5. When recoding continuous variables into ordinal-level variables, the width of categories should be equal, unless compelling reasons exist to do otherwise.
  6. Pie charts are graphs that show the frequency of occurrences through stacks.
  7. Visual representation is important for analysis and in the communication of study findings to a broader audience.
  8. Boxplots are used for preliminary data analysis.
  9. The boxplot is used with ordinal-level data.
  10. The interquartile range is defined as the distance between the first and third quartiles.
  11. Values above the upper fence are defined as outliers in boxplots.
  12. The inner fence is an observed value.
  13. Observations flagged as outliers generally should be retained when they are not coding errors, when they are plausible values of the variable in question, and when they do not greatly affect the value of the mean.
  14. The normal distribution refers to the distribution of a variable that resembles a cigar-shaped curve.
  15. The standard deviation is a desirable statistic because metric variables are normally distributed.
  16. When variables are normally distributed, about 95% of observations lie ±5 standard deviations from the mean.
  17. Standardized variables have a mean of zero and a standard deviation of 1.
  18. Skewness is a measure of whether the peak is centered in the middle of the distribution.
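
Items 10, 11, and 17 lend themselves to a short worked example. The following Python sketch (with hypothetical response times) computes the interquartile range, the boxplot fence used for flagging outliers, and standardized (z) scores; Python is used only for illustration.

  import numpy as np

  # Hypothetical response times in days; illustrative only.
  x = np.array([4, 5, 5, 6, 7, 7, 8, 9, 10, 24], dtype=float)

  q1, q3 = np.percentile(x, [25, 75])
  iqr = q3 - q1                       # interquartile range: distance between first and third quartiles
  upper_fence = q3 + 1.5 * iqr        # common boxplot convention; values above it are flagged as outliers
  z = (x - x.mean()) / x.std(ddof=1)  # standardized values: mean of zero, standard deviation of 1

  print(f"IQR = {iqr:.2f}, upper fence = {upper_fence:.2f}")
  print("flagged as outliers:", x[x > upper_fence])
  print("z-scores:", np.round(z, 2))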

Chapter 8: Contingency Tables

  1. Bivariate statistics is the study of two or more variables.
  2. A contingency table expresses the relationship between two categorical variables.
  3. Row and column totals in contingency tables are called the grand totals.
  4. The term data cell is commonly used to refer to table cells that show the counts or percentages based on the values of the two variables.
  5. The placement of the variables in contingency tables depends on the nature of the relationship.
  6. Typically, the dependent variable is placed in the column in contingency tables.
  7. Analysts should first determine what information they want to know, and then try to organize their contingency table to use column percentages to obtain that information.
  8. A statistical relationship means that as one variable changes, so too does another.
  9. In a positive relationship, small values of the column variable are associated with small values of the row variable, and large values of the column variable are often associated with large values of the row variable.
  10. A negative relationship means that large values of one variable are associated with large values of the other variable and that small values of one variable are associated with small values of the other variable.
  11. The direction of a relationship is easier to determine in large tables than in small ones.
  12. Searching for practical relevance and presenting results may involve some decisions regarding ethics.
  13. Pivot tables show statistics of one or more continuous variables for one or more categorical variables in the data cells.
  14. The term pivot is derived from the handy property that row and column variables can be readily transposed.
  15. A layer variable is one that defines the subset of data used for subsequent data tables.
  16. Creating more data cells decreases the possibility that some cells have very few observations.
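
Items 4 and 7 describe how contingency tables are organized and percentaged. A minimal Python/pandas sketch with hypothetical survey data follows; the layout shown (dependent variable in the rows, column percentages for comparison across groups) is one common choice, and Python is used only for illustration.

  import pandas as pd

  # Hypothetical survey data; illustrative only.
  df = pd.DataFrame({
      "region":    ["North", "North", "South", "South", "North", "South", "North", "South"],
      "satisfied": ["yes",   "yes",   "no",    "yes",   "no",    "no",    "yes",   "no"],
  })

  # Cross-tabulation with column percentages, so that groups (columns) can be compared.
  table = pd.crosstab(df["satisfied"], df["region"], normalize="columns") * 100
  print(table.round(1))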

Chapter 9: Getting Results

  1. Outputs can be used to measure the performance of a program.
  2. Outcomes are related to the program goals.
  3. An efficiency analysis assesses a level of outcome by output.
  4. An effectiveness measure concerns a level of output by input.
  5. A larger standard deviation indicates a consistent and reliable performance.
  6. Only measures of efficiency and effectiveness, not equity, should be included in a performance measurement system.
  7. Equity can be defined as the provision of equal access to resources for all social groups.
  8. Quality-of-life measurement consists of multiple measures.
  9. Performance in an organization is often unidimensional, so it can be measured well by one measure.
  10. When managers are asked to make forecasts on the basis of few previous observations, they should consider non-regression-based forecasting methods.
  11. Forecasting based on prior moving averages is more conservative than that based on prior moving changes.
  12. When forecasting, it is best to first deflate current expenditures.
  13. (Optional, Appendix) Forecasting can include periodic effects.
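
Item 11 contrasts forecasts based on prior moving averages with forecasts based on prior moving changes. The short Python sketch below shows one simple reading of the two methods with hypothetical expenditure data; it is illustrative only and is not the text's own worked example.

  # Hypothetical annual expenditures (in $ millions); illustrative only.
  history = [10.0, 10.8, 11.5, 12.4, 13.1]

  # Prior moving average: forecast the next period as the average of the last three periods.
  pma_forecast = sum(history[-3:]) / 3

  # Prior moving changes: forecast the next period as the last value plus the average recent change.
  changes = [later - earlier for earlier, later in zip(history, history[1:])]
  pmc_forecast = history[-1] + sum(changes[-3:]) / 3

  print(f"prior moving average forecast: {pma_forecast:.2f}")
  print(f"prior moving changes forecast: {pmc_forecast:.2f}")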

Chapter 10: Introducing Inference: Estimation from Samples

SI = section introduction

  1. The purpose of inferential statistics is to make inferences about characteristics of the population from which the data were drawn (SI).
  2. When both variables are categorical, the t-test should be used for hypothesis testing (SI).
  3. When both variables are continuous, the χ2 test should be used for hypothesis testing (SI).
  4. When both variables are continuous, simple regression can be used (SI).
  5. Multivariate statistics involve statistics for three or more variables (SI).
  6. It is practical to survey the entire population of a city.
  7. The population is the entire set of subjects of interest in any study, and a sample is a portion or subset of the population.
  8. Statistical inference is about drawing conclusions about a population from sample data.
  9. The best way to draw a sample is randomly—each subject should have an equal chance of being selected as part of your sample.
  10. We use population parameters to estimate unknown sample statistics.
  11. The estimation error is the difference between a sample statistic and a population parameter.
  12. A confidence interval is defined as the range within which the unknown but true population parameter is estimated to lie.
  13. A 95% confidence interval is sometimes expressed as stating that when drawing many samples, 5% of the times the mean will lie within a confidence interval.
  14. The Central Limit Theorem is a fundamental statistical insight that allows us to make inferences from a single sample to the population.
  15. A probability distribution is a statistical function which describes all possible values and likelihoods that a variable can take.
  16. A normal distribution (Appendix A) is an example of a probability distribution.
  17. The standard deviation of a probability distribution is also called a standard error.
  18. The formula for calculating a 95% confidence interval for samples larger than 30 is the sample mean ± 1.96 × (s/√n).
  19. When the sample size is smaller than 30, the normal distribution is replaced by the t distribution for calculating confidence intervals.
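
Items 18 and 19 refer to the confidence interval formula. A minimal Python sketch of the z-based (large sample) version follows, with hypothetical data; for samples smaller than 30 the t distribution would replace the 1.96 multiplier. Python is used only for illustration.

  import math
  from statistics import NormalDist

  # Hypothetical sample of response times; illustrative only (a real z-based interval assumes n > 30).
  sample = [12.1, 9.8, 11.4, 10.6, 13.0, 9.5, 12.7, 11.1, 10.2, 11.9]

  n = len(sample)
  mean = sum(sample) / n
  sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
  se = sd / math.sqrt(n)                    # standard error of the mean

  z = NormalDist().inv_cdf(0.975)           # about 1.96 for 95% confidence
  print(f"95% CI: {mean - z * se:.2f} to {mean + z * se:.2f}")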

Chapter 11: Hypothesis Testing with Chi-Square

  1. When using the χ2 test statistic, “expected” frequencies are those that researchers observe in the sample.
  2. The χ2 test statistic is useful for determining the direction of relationships.
  3. The concept of statistical significance relates to the matter of confidence.
  4. The null hypothesis is stated in Greek letters because the null hypothesis refers to relations in the population.
  5. The first step in hypothesis testing is stating the null hypothesis.
  6. The last step in hypothesis testing is looking up the critical value of test statistics.
  7. When two variables are unrelated to each other, the χ2 test statistic is large.
  8. A 5% level of significance suggests a higher level of confidence than a 1% level of significance.
  9. The critical value is compared against the test statistic for the purpose of hypothesis testing.
  10. A relationship that is reported as being significant at “p = 0.945” is statistically significant.
  11. Degree of freedom is a concept associated with determining the critical value.
  12. Generally, a critical value will be larger at the 1% level of significance than at the 5% level of significance.
  13. It is easier to reject the null hypothesis with a large sample than with a small sample.
  14. We first determine practical significance and then determine statistical significance.
  15. Practical significance examines the strength of relationships.
  16. Practical significance examines by how much one variable changes as a result of changing another.
  17. A goodness-of-fit test can use χ2 to determine whether a program or policy exceeds a standard or norm.
  18. Kendall’s τ-c is a nonparametric statistic which should never be used because it is less useful than χ2.
  19. Kendall’s τ-c can vary from +1.00 to –1.00. Scores of less than |0.25| indicate weak relationships; scores between |0.25| and |0.50| indicate moderate relationships; and scores of greater than |0.50| indicate strong relationships.
  20. (Optional, Appendix): Other nonparametric examples include Kendall’s τ-b, Goodman–Kruskal’s τ, and the McNemar test.
  21. (Optional, Appendix): When adding a control variable causes a previously significant relationship to become insignificant, the result is called explanation.
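
To accompany items 1 and 5–9 on the χ2 test, here is a minimal Python sketch using scipy with a hypothetical 2 × 2 contingency table; the observed counts are made up and Python/scipy is used only for illustration.

  import numpy as np
  from scipy import stats

  # Hypothetical contingency table (rows = outcome, columns = group); counts are illustrative only.
  observed = np.array([
      [30, 20],
      [10, 40],
  ])

  chi2, p, dof, expected = stats.chi2_contingency(observed)
  print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
  print("expected frequencies:")
  print(np.round(expected, 1))
  # The null hypothesis of no relationship is rejected when p falls below the chosen significance level.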

Chapter 12: The T-Test

  1. When both variables are categorical, the t-test should be used for hypothesis testing.
  2. The term parametric statistics refers to tests that make assumptions about the distribution of data.
  3. Recoding continuous variables as categorical variables is discouraged because it results in a loss of information.
  4. The t-test has four test assumptions.
  5. The critical values of the t-test are provided by Student’s t-test distribution.
  6. One-tailed tests are used most often, unless compelling a priori knowledge exists or it is known that one group cannot have a larger mean than the other.
  7. A test for the equality of variances is the Levene’s test.
  8. The term robust is used, generally, to describe the extent to which test conclusions are unaffected by departures from test assumptions.
  9. A combination of visual inspection and statistical testing should always be used to determine normality.
  10. The Kolmogorov–Smirnov test is a test of equal variance.
  11. Nonnormality is sometimes overcome through variable transformation.
  12. When problems of nonnormality cannot be resolved adequately, analysts should consider nonparametric alternatives to the t-test.
  13. Analysts should always examine the robustness of their findings.
  14. All t-tests first test for equality of means and then test for equality of variances.
  15. The paired samples t-test tests the null hypothesis that the mean difference between the before and after test scores is zero.
  16. A paired t-test violates the assumption of homogeneity.
  17. The one-sample t-test tests whether the mean of a single variable is different from a prespecified value (norm).
  18. The Mann–Whitney and Wilcoxon tests are equivalent.
  19. The Mann–Whitney and Wilcoxon tests assign ranks to the testing variable and test whether the sums of ranks differ between the two categories.
  20. The signed rank test is an independent samples test that examines differences of mean ranks to evaluate whether two samples come from the same population.
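
Items 7, 14, and 19 describe several of the tests in this chapter. The following Python sketch runs Levene's test, an independent-samples t-test, and the Mann–Whitney alternative on hypothetical scores; the data are made up and Python/scipy is used only for illustration.

  import numpy as np
  from scipy import stats

  # Hypothetical test scores for two groups; illustrative only.
  group_a = np.array([72, 75, 78, 80, 69, 74, 77])
  group_b = np.array([65, 70, 68, 72, 66, 71, 69])

  # Levene's test for equality of variances (one of the t-test assumptions).
  lev_stat, lev_p = stats.levene(group_a, group_b)

  # Independent-samples t-test; equal variances assumed only if Levene's test is not significant.
  t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=(lev_p > 0.05))
  print(f"Levene p = {lev_p:.3f}; t = {t_stat:.2f}, p = {t_p:.4f}")

  # Nonparametric alternative when normality is in doubt.
  u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
  print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.4f}")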

Chapter 13: Analysis of Variance (ANOVA)

  1. Analysis of variance is used to test the equality of means across three or more categories.
  2. The global F-test tests whether all groups have significantly different means.
  3. A post hoc test tests all possible pairs of mean differences.
  4. Tukey, Bonferroni, and Scheffe are the names of three popular t-tests.
  5. Analysis of variance is not appropriate for testing the statistical significance of continuous ordinal variable relationships.
  6. Analysis of variance appears to be less robust than the t-test for deviations from normality.
  7. Our main concern with homogeneity is that there are no substantial differences in the amount of variance across the groups.
  8. The Levene’s test is a test for testing whether variances across groups are equal.
  9. The term “homogeneous subsets” means groups that have statistically identical means.
  10. The Levene’s test is a test for testing whether changes in means are linear.
  11. Multiple analysis of variance involves the analysis of more than one independent variable on a single dependent variable.
  12. An interaction effect between two variables describes the way that variables “work together” to have an effect on the dependent variable.
  13. Two-way analysis of variance, for example, allows for testing of the effect of two different independent variables on the dependent variable.
  14. A nonparametric alternative to one-way analysis of variance (ANOVA) is Kruskal–Wallis’ H test of one-way ANOVA.
  15. Kruskal–Wallis’ H test reports mean values of the dependent variable and identifies heterogeneous subsets.
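
Items 1, 2, and 14 can be illustrated with a short Python sketch: a one-way ANOVA (global F-test) across three hypothetical groups, followed by the Kruskal–Wallis alternative. Post hoc tests such as Tukey or Bonferroni would follow a significant F-test but are omitted here. The data are made up and Python/scipy is used only for illustration.

  from scipy import stats

  # Hypothetical satisfaction scores in three service districts; illustrative only.
  district_1 = [7.1, 6.8, 7.4, 7.0, 6.9]
  district_2 = [6.2, 6.5, 6.0, 6.4, 6.1]
  district_3 = [7.8, 8.0, 7.6, 7.9, 8.2]

  # Global F-test: is at least one group mean different from the others?
  f_stat, p_value = stats.f_oneway(district_1, district_2, district_3)
  print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

  # Nonparametric alternative (Kruskal-Wallis H test).
  h_stat, h_p = stats.kruskal(district_1, district_2, district_3)
  print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {h_p:.4f}")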

Chapter 14: Simple Regression

  1. Simple regression is appropriate for examining the bivariate relationships between two continuous variables.
  2. A scatterplot is a plot of the data points of two continuous variables.
  3. The slope of the regression line is also called the regression coefficient.
  4. The null hypothesis in regression is that the intercept is zero.
  5. The slope indicates the steepness of the regression line.
  6. A negative slope indicates an upward sloping line.
  7. The statistical significance of regression slopes is indeterminable.
  8. R2 is called the coefficient of determination.
  9. R2 is the percent variation of the dependent variable explained by the independent variable(s).
  10. R2 varies from 1 to 2.
  11. A perfect fit is indicated when the coefficient of determination is zero.
  12. The standard error of the estimate is a measure of the spread of the y values around the regression line as calculated for the mean value of the independent variable, only, and assuming a large sample.
  13. A regression line assumes a linear relationship that is constant over the range of observations.
  14. The dependent variable is also called the error term.
  15. Pearson’s correlation coefficient, r, measures the association (significance, direction, and strength) between two continuous variables.
  16. R = r and |R| = |r|.
  17. The Pearson’s correlation coefficient, r, always has the same sign as b.
  18. The Spearman rank order correlation coefficient tests whether the rank orders of responses of two variables are statistically associated.
  19. The Spearman rank order correlation coefficient is appropriate for nominal-, ordinal-, and continuous-level variables.
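
Items 3, 5, 8, 9, and 15 cover the basic outputs of simple regression. A minimal Python sketch with hypothetical data is shown below; it reports the slope, intercept, Pearson's r, R2, and the p-value for the slope. Python/scipy is used only for illustration.

  from scipy import stats

  # Hypothetical data: staff hours (independent) and cases closed (dependent); illustrative only.
  hours = [10, 12, 15, 18, 20, 22, 25, 30]
  cases = [24, 30, 34, 41, 44, 50, 55, 66]

  result = stats.linregress(hours, cases)
  print(f"slope (regression coefficient) b = {result.slope:.2f}")
  print(f"intercept a = {result.intercept:.2f}")
  print(f"Pearson's r = {result.rvalue:.3f}, R-squared = {result.rvalue ** 2:.3f}")
  print(f"p-value for the slope = {result.pvalue:.4f}")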

Chapter 15: Multiple Regression

  1. Multiple regression is one of the most widely used multivariate statistical techniques for analyzing three or more variables.
  2. Full model specification means that all variables are measured that affect the dependent variable.
  3. A nomothetic mode of explanation isolates the most important factors.
  4. The search for parsimonious explanations often leads analysts to first identify different categories of factors that most affect their dependent variable.
  5. The error term accounts for all variables not specified in the model.
  6. The assumption of full model specification is that variables not specified in the model are justifiably omitted only when their cumulative effect on the dependent variable is zero.
  7. Each of the regression coefficients is interpreted as its effect on the dependent variable, controlled for the effect of all of the other independent variables included in the regression.
  8. It is okay for independent variables not to be correlated with the dependent variables, as long as they are highly correlated with each other.
  9. The error term plot shows the relationship between the predicted dependent variable and the error term.
  10. The lack of a pattern in the error term plot that is distributed around (0,0) indicates that the net effect of all variables excluded from the model on the dependent variable is zero.
  11. In multiple regression, the adjusted R2 controls for the number of dependent variables.
  12. Values of R2 adjusted below .20 are considered to suggest weak model fit, those between .20 and .40 indicate moderate fit, those above .40 indicate strong fit, and those above .65 indicate very strong model fit.
  13. Standardized coefficients enable analysts to draw inferences about the relative impact of different independent variables on the dependent variable.
  14. It is common to compare β coefficients across different models.
  15. The global F-test examines the overall effect of all independent variables jointly on the dependent variable.
  16. A dummy variable can have only two values.
  17. If a nominal variable has five categories, an analyst would include up to four dummy variables in a regression model.
  18. The regression coefficient of a dummy variable is interpreted as the effect of that variable on the dependent variable, controlled for all other variables in the model.
  19. Outliers can affect the slope of regression coefficients.
  20. Outliers are observations whose multiple regression residuals exceed three standard deviations.
  21. When two variables are multicollinear, they are strongly correlated with each other.
  22. When two variables are strongly correlated with each other, they are also multicollinear.
  23. Curvilinearity is indicated by residuals that are linearly related to each other.
  24. Curvilinearity is addressed by transforming one of the independent variables.
  25. Heteroscedasticity occurs when one of the dependent variables is linearly related to the independent variable.
  26. Heteroscedasticity is addressed by transforming both the dependent and the independent variables.
  27. It is okay to include irrelevant variables as long as they are significant.
  28. The effect of omitting a relevant variable is to inflate the value of variables that are included.
  29. Autocorrelation is common with time series data.
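
Items 7, 11, and 16–18 can be made concrete with a small multiple regression that includes a dummy variable. The Python sketch below uses simulated data and the statsmodels package; every value is hypothetical and the code is illustrative only.

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(1)

  # Simulated, purely illustrative data: a continuous predictor and a 0/1 dummy variable.
  n = 100
  workload = rng.normal(50, 10, n)
  participant = rng.integers(0, 2, n)          # dummy variable: only two values
  y = 5 + 0.4 * workload + 3.0 * participant + rng.normal(0, 2, n)

  X = sm.add_constant(np.column_stack([workload, participant]))
  model = sm.OLS(y, X).fit()

  print(model.params)                          # each coefficient is controlled for the other variables
  print("adjusted R-squared:", round(model.rsquared_adj, 3))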

Chapter 16: Logistic and Time Series Regression

  1. Logistic regression deals with situations in which the dependent variable is dichotomous.
  2. Logistic regression is often used in political science.
  3. The dichotomous nature of the dependent variable violates an assumption of multiple regression.
  4. Logistic regression fits a U-shaped curve to the data.
  5. Logistic regression is the same as multiple regression with dummy variables.
  6. The Nagelkerke R2 is analogous to R2 in multiple regression.
  7. Classification tables should have 50% or fewer correctly predicted values.
  8. The Hosmer and Lemeshow test compares the observed and predicted values and should be statistically significant.
  9. Wald χ2 is the test statistic used in logistic regression for testing the statistical significance of logistic regression coefficients.
  10. Logistic regression can be used to predict the likelihood of an event occurring.
  11. The odds ratio compares the probability of something occurring, as compared to it not occurring.
  12. The same principles that apply to multiple regression, such as full model specification, also apply to time series regression.
  13. With time series data, the assumption of random distribution of error terms is usually violated.
  14. Autocorrelation is detected by examining the variance inflation factor.
  15. Values of the Durbin–Watson statistic around 2 indicate autocorrelation.
  16. The Durbin–Watson statistic has a range of values for which test statistics are inconclusive.
  17. Autocorrelation is a problem, but serial correlation is not.
  18. It is better to correct for autocorrelation by taking first differences than by examining relationships in levels form.
  19. Policies and program are sometimes evaluated by including dummy variables in the model.
  20. A step impact variable is similar to an increasing impact variable.
  21. Lagged variables are variables whose effect becomes manifest at some future time.
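
Items 1, 10, and 11 concern logistic regression and odds ratios. The Python sketch below fits a logistic regression to simulated data with statsmodels and converts the coefficients to odds ratios; all values are hypothetical and the code is illustrative only.

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(7)

  # Simulated, purely illustrative data: a dichotomous outcome and one continuous predictor.
  n = 200
  score = rng.normal(60, 10, n)
  prob = 1 / (1 + np.exp(-(-12 + 0.2 * score)))   # the S-shaped logistic relationship
  outcome = rng.binomial(1, prob)                 # dichotomous dependent variable (0/1)

  X = sm.add_constant(score)
  model = sm.Logit(outcome, X).fit(disp=False)

  print(model.params)                           # logistic regression coefficients (log-odds)
  print("odds ratios:", np.exp(model.params))   # effect on the odds per one-unit increase in score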

Chapter 17: Survey of Other Techniques

  1. Path analysis uses ordinary least squares for estimating parameters.
  2. There is no real difference between an endogenous variable and an independent variable.
  3. A causal model with feedback loops is called a nonrecursive model.
  4. Exogenous variables are variables that are unaffected by other variables in the model.
  5. Indirect effects in path analysis are calculated as the product of β coefficients of each pathway.
  6. Path analysis cannot be used when a causal model has one or more feedback loops.
  7. The total effect in path analysis is the difference between the direct and indirect effects.
  8. Structural equation models are nonrecursive models.
  9. The simplest forecasting method is to forecast the immediate future as a replica of the past and present.
  10. The technique of trend extrapolation analyzes the pattern of past trends.
  11. Trend forecasts are best when used to estimate many periods into the future.
  12. Forecasting is more useful when it also considers a variety of what-if scenarios.
  13. Validation is critical in any forecasting.
  14. Curve estimation cannot take periodicity into account.
  15. Exponential smoothing cannot take periodicity into account.
  16. Autoregressive integrated moving average can take both periodicity and independent variables into account.
  17. Censored data involve data in which a dichotomous event has not yet occurred.
  18. Survival analysis is a technique used for analyzing censored data.
  19. Probability density is an estimate of the probability of the terminal event occurring during a time interval.
  20. When the time variable is continuous, Kaplan–Meier survival analysis is used.
  21. Factor analysis is an exploratory technique.
  22. Factor analysis aids in creating index variables.
  23. Factor analysis uses correlations among variables to identify subgroups.
  24. Rotation causes variables to load higher on one factor, and less on others, bringing the pattern of groups better into focus for interpretation.
  25. For purposes of interpretation, factor loadings are considered only if their values are at least 5.0.
  26. A task of analysts is to name the factors that arise from factor analysis.
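
Items 5 and 7 on direct, indirect, and total effects in path analysis reduce to simple arithmetic, sketched below with hypothetical standardized path coefficients; the numbers are made up and illustrative only.

  # Hypothetical standardized path coefficients: X -> M (0.50), M -> Y (0.40), and a direct path X -> Y (0.20).
  beta_x_m = 0.50
  beta_m_y = 0.40
  beta_x_y_direct = 0.20

  indirect_effect = beta_x_m * beta_m_y               # product of the betas along the pathway
  total_effect = beta_x_y_direct + indirect_effect    # direct plus indirect effects
  print(f"indirect effect = {indirect_effect:.2f}, total effect = {total_effect:.2f}")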
