
Ch15 Exam Prep Time-Series Forecasting And Index Numbers

File: Ch15, Chapter 15: Time-Series Forecasting and Index Numbers

True/False

1. Time-series data are data gathered on a desired characteristic at a particular point in time.

Response: See section 15.1 Introduction to Forecasting

Difficulty: Easy

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

2. The long-term general direction of data is referred to as series.

Response: See section 15.1 Introduction to Forecasting

Difficulty: Easy

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

3. Stationary time-series data have only trend, but no cyclical or seasonal effects.

Response: See section 15.1 Introduction to Forecasting

Difficulty: Medium

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

4. Forecast error is the difference between the value of the response variable and those of the explanatory variables.

Response: See section 15.1 Introduction to Forecasting

Difficulty: Easy

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

5. For large datasets, the mean error (ME) and mean absolute deviation (MAD) always have the same numerical value.

Response: See section 15.1 Introduction to Forecasting

Difficulty: Easy

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

6. Naïve forecasting models have no useful applications because they do not take into account data trend, cyclical effects or seasonality.

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

7. When a trucking firm uses the number of shipments for January of the previous year as the forecast for January next year, it is using a naïve forecasting model.

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

8. Two popular general categories of smoothing techniques are averaging models and exponential models.

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

9. Two popular general categories of smoothing techniques are exponential models and logarithmic models.

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

10. An exponential smoothing technique in which the smoothing constant alpha is equal to one is equivalent to a regression forecasting model.

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

11. Linear regression models cannot be used to analyze quadratic trends in time-series data.

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

12. Although seasonal effects can confound a trend analysis, a regression model is robust to these effects and the researcher does not need to adjust for seasonality prior to using a regression model to analyze trends.

Response: See section 15.3 Trend Analysis

Difficulty: Medium

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

13. If the trend equation is quadratic in time t = 1, …, T, the forecast value for the next time period, T + 1, depends on time T.

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

14. If the trend equation is linear in time, the slope indicates the increase (or decrease, when negative) in the forecasted value of the response variable Y for the next time period.

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

15. One of the main techniques for isolating the effects of seasonality is reconstitution.

Response: See section 15.4 Seasonal Effects

Difficulty: Easy

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

16. One of the main techniques for isolating the effects of seasonality is decomposition.

Response: See section 15.4 Seasonal Effects

Difficulty: Easy

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

17. The first step in isolating seasonal effects is to remove the trend and cyclical effects.

Ans: True

Response: See section 15.4 Seasonal Effects

Difficulty: Easy

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

18. Once the seasonal effects have been isolated, these effects can be removed from the original data through desensitizing.

Ans: False

Response: See section 15.4 Seasonal Effects

Difficulty: Easy

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

19. When the error terms of a regression forecasting model are correlated, the problem of autocorrelation occurs.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

20. If autocorrelation occurs in regression analysis, then the confidence intervals and tests using the t and F distributions are no longer strictly applicable.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

21. One of the ways to overcome the autocorrelation problem in a regression forecasting model is to increase the level of significance for the F test.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

22. One of the ways to overcome the autocorrelation problem in a regression forecasting model is to transform the variables by taking the first-differences.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

23. Autoregression is a multiple regression technique in which the independent variables are time-lagged versions of the dependent variable.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

24. Autocorrelation in a regression forecasting model can be detected by the F test.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

25. In statistics, the Winters’ Three Parameter statistic is a test statistic used to detect the presence of autocorrelation in the residuals from a regression analysis.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

26. A small value of the Durbin–Watson statistic indicates that successive error terms are positively correlated.

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

27. Unweighted price indexes can only compare across the entire successive time period for which there is data.

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

28. Index numbers are used to compare various time frame measures to a base time period measure.

Ans: True

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

29. A simple index number is the ratio of the base period divided by the period of interest, multiplied by 100.

Ans: False

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

Multiple Choice

30. A time series with forecast values and error terms is presented in the following table. The mean error (ME) for this forecast is ___________.

Month   Actual   Forecast   Error
July    5
Aug     11       5          6.00
Sept    13       6.8        6.20
Oct     6        8.66       -2.66
Nov     5        7.862      -2.86

a) 1.67

b) 1.34

c) 6.68

d) 3.67

e) 2.87

Response: See section 15.1 Introduction to Forecasting

Difficulty: Easy

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

31. A time series with forecast values and error terms is presented in the following table. The mean absolute deviation (MAD) for this forecast is ___________.

Month   Actual   Forecast   Error
July    5
Aug     11       5          6.00
Sept    13       6.8        6.20
Oct     6        8.66       -2.66
Nov     5        7.862      -2.86

a) 3.54

b) 7.41

c) 4.43

d) 17.72

e) 4.34

Response: See section 15.1 Introduction to Forecasting

Difficulty: Easy

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

32. A time series with forecast values and error terms is presented in the following table. The mean squared error (MSE) for this forecast is ___________.

Month   Actual   Forecast   Error
July    5
Aug     11       5          6.00
Sept    13       6.8        6.20
Oct     6        8.66       -2.66
Nov     5        7.862      -2.86

a) 13.33

b) 17.94

c) 89.71

d) 22.43

e) 32.34

Response: See section 15.1 Introduction to Forecasting

Difficulty: Medium

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.
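
For reference, the three error measures in questions 30-32 can be computed directly from the error column of the table; a minimal Python sketch (variable names are illustrative):

# Forecast errors (actual - forecast) for Aug through Nov from the table above
errors = [6.00, 6.20, -2.66, -2.86]

n = len(errors)
me  = sum(errors) / n                    # mean error: positive and negative errors can cancel
mad = sum(abs(e) for e in errors) / n    # mean absolute deviation: average size of the misses
mse = sum(e**2 for e in errors) / n      # mean squared error: penalizes large misses more heavily

print(round(me, 2), round(mad, 2), round(mse, 2))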

33. A time series analysis was performed to determine the number of new online customers that joined the ‘Jelly of the Month Club’. The actual number of new customers, the forecast values and the error terms are presented in the following table. The mean error (ME) for this forecast is ___________.

Month   Actual   Forecast   Error
July    4
Aug     6        5          -1
Sept    3        6          3
Oct     9        8          -1
Nov     8        9          1

a) -0.50

b) 0.50

c) 1.50

d) 7.00

e) 3.00

Response: See section 15.1 Introduction to Forecasting

Difficulty: Easy

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

34. A time series analysis was performed to determine the number of new online customers that joined the ‘Jelly of the Month Club’. The actual number of new customers, the forecast values and the error terms are presented in the following table. The mean absolute deviation (MAD) for this forecast is ___________.

Month   Actual   Forecast   Error
July    4
Aug     6        5          -1
Sept    3        6          3
Oct     9        8          -1
Nov     8        9          1

a) -0.50

b) 0.50

c) 1.50

d) 7.00

e) 3.00

Response: See section 15.1 Introduction to Forecasting

Difficulty: Medium

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

35. A time series analysis was performed to determine the number of new online customers that joined the ‘Jelly of the Month Club’. The actual number of new customers, the forecast values and the error terms are presented in the following table. The mean squared error (MSE) for this forecast is ___________.

Month   Actual   Forecast   Error
July    4
Aug     6        5          -1
Sept    3        6          3
Oct     9        8          -1
Nov     8        9          1

a) -0.50

b) 0.50

c) 1.50

d) 7.00

e) 3.00

Response: See section 15.1 Introduction to Forecasting

Difficulty: Medium

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

36. In exponential smoothing models, the value of the smoothing constant may be any number between ___________.

a) -1 and 1

b) -5 and 5

c) 0 and 1

d) 0 and 10

e) 0 and 100

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

37. Use of a smoothing constant value greater than 0.5 in an exponential smoothing model gives more weight to ___________.

a) the actual value for the current period

b) the actual value for the previous period

c) the forecast for the current period

d) the forecast for the previous period

e) the forecast for the next period

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

38. Use of a smoothing constant value less than 0.5 in an exponential smoothing model gives more weight to ___________.

a) the actual value for the current period

b) the actual value for the previous period

c) the forecast for the current period

d) the forecast for the previous period

e) the forecast for the next period

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

39. When forecasting with exponential smoothing, data from previous periods is _________.

a) given equal importance

b) given exponentially increasing importance

c) ignored

d) given exponentially decreasing importance

e) linearly decreasing importance

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

40. Using a three-month moving average, the forecast value for October made at the end of September in the following time series would be ____________.

July    5
Aug     11
Sept    13
Oct     6

a) 11.60

b) 10.00

c) 9.07

d) 8.06

e) 9.67

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

41. Using a three-month moving average, the forecast value for November in the following time series is ____________.

July    5
Aug     11
Sept    13
Oct     6

a) 11.60

b) 10.00

c) 9.67

d) 8.60

e) 6.00

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

42. Using a three-month moving average (with weights of 6, 3, and 1 for the most current value, next most current value and oldest value, respectively), the forecast value for October made at the end of September in the following time series would be__________.

July    5
Aug     11
Sept    13
Oct     6

a) 11.60

b) 10.00

c) 9.67

d) 8.60

e) 6.11

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

43. Using a three-month moving average (with weights of 6, 3, and 1 for the most current value, next most current value and oldest value, respectively), the forecast value for November in the following time series is ____________.

July    5
Aug     11
Sept    13
Oct     6

a) 11.60

b) 10.00

c) 9.67

d) 8.06

e) 8.60

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.
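
The simple and weighted three-month moving averages in questions 40-43 follow directly from the last three observed values; a minimal Python sketch, with illustrative names and the weights applied oldest to newest:

values = [5, 11, 13]          # July, Aug, Sept (oldest to newest)

# Simple three-month moving average: forecast for October
simple = sum(values) / len(values)

# Weighted three-month moving average with weights 1, 3, 6
# (oldest value gets 1, next gets 3, most recent gets 6)
weights = [1, 3, 6]
weighted = sum(w * v for w, v in zip(weights, values)) / sum(weights)

print(round(simple, 2), round(weighted, 2))   # 9.67 and 11.6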

44. The city golf course is interested in starting a junior golf program. The golf pro has collected data on the number of youths under 13 that have played golf during the last 4 months. Using a three-month moving average, the forecast value for October made at the end of September in the following time series would be ____________.

July    28
Aug     27
Sept    17
Oct     19

a) 24

b) 21

c) 21.56

d) 19.22

e) 22

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

45. The city golf course is interested in starting a junior golf program. The golf pro has collected data on the number of youths under 13 that have played golf during the last 4 months. Using a three-month moving average, the forecast value for November in the following time series would be ____________.

July    28
Aug     27
Sept    17
Oct     19

a) 24

b) 21

c) 21.56

d) 19.22

e) 22

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

46. The city golf course is interested in starting a junior golf program. The golf pro has collected data on the number of youths under 13 that have played golf during the last 4 months. Using a three-month moving average (with weights of 5, 3, and 1 for the most current value, next most current value and oldest value, respectively), the forecast value for October made at the end of September in the following time series would be __________.

July    28
Aug     27
Sept    17
Oct     19

a) 24

b) 21

c) 21.56

d) 19.22

e) 22

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

47. The city golf course is interested in starting a junior golf program. The golf pro has collected data on the number of youths under 13 that have played golf during the last 4 months. Using a three-month moving average (with weights of 5, 3, and 1 for the most current value, next most current value and oldest value, respectively), the forecast value for November in the following time series would be ____________.

July    28
Aug     27
Sept    17
Oct     19

a) 24

b) 21

c) 21.56

d) 19.22

e) 22

Response: See section 15.2 Smoothing Techniques

Difficulty: Easy

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

48. What is the forecast for Period 7 using a 3-period moving average technique, given the following time-series data for six past periods?

Period   1     2     3     4     5     6
Value    136   126   146   148   156   164

a) 164.67

b) 156.00

c) 148.00

d) 126.57

e) 158.67

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

49. The forecast value for August was 22 and the actual value turned out to be 19. Using exponential smoothing with α = 0.30, the forecast value for September would be ______.

a) 21.1

b) 19.9

c) 18.1

d) 22.9

e) 21.0

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

50. The forecast value for September was 21.1 and the actual value turned out to be 18. Using exponential smoothing with α = 0.30, the forecast value for October would be ______.

a) 18.09

b) 18.93

c) 20.17

d) 21.00

e) 17.07

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.
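
Questions 49 and 50 both apply the exponential smoothing update F(t+1) = α·X(t) + (1 − α)·F(t); a minimal Python sketch:

def exp_smooth(actual, forecast, alpha):
    """One exponential-smoothing step: next forecast from this period's actual and forecast."""
    return alpha * actual + (1 - alpha) * forecast

f_sept = exp_smooth(actual=19, forecast=22, alpha=0.30)      # 21.1
f_oct  = exp_smooth(actual=18, forecast=f_sept, alpha=0.30)  # 20.17
print(round(f_sept, 2), round(f_oct, 2))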

51. The following graph of time-series data suggests a _______________ trend.

a) linear

b) quadratic

c) cosine

d) tangential

e) flat

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

52. The following graph of time-series data suggests a _______________ trend.

a) linear

b) tangential

c) cosine

d) quadratic

e) flat

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

53. The following graph of time-series data suggests a _______________ trend.

a) linear

b) quadratic

c) cosine

d) tangential

e) flat

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

54. The following graph of time-series data suggests a _______________ trend.

a) quadratic

b) cosine

c) linear

d) tangential

e) flat

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

55. Fitting a linear trend to 36 monthly data points (January 2011 = 1, February 2011 = 2, March 2011 = 3, etc.) produced the following tables.

             Coefficients   Standard Error   t Statistic   p-value
Intercept    222.379        67.35824         3.301438      0.002221
x            9.009066       3.17471          2.83776       0.00751

             df    SS          MS          F          p-value
Regression   1     315319.3    315319.3    8.052885   0.007607
Residual     34    1331306     39156.07
Total        35    1646626

The projected trend value for January 2014 is ________.

a) 231.39

b) 555.71

c) 339.50

d) 447.76

e) 355.71

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.
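
The projection in question 55 plugs the period code for the target month into the fitted line; with January 2011 coded as 1, January 2014 is period t = 37. A minimal Python sketch:

intercept, slope = 222.379, 9.009066   # coefficients from the regression output above

def trend_forecast(t):
    """Projected trend value for period t from the fitted line y-hat = b0 + b1*t."""
    return intercept + slope * t

print(round(trend_forecast(37), 2))    # projection for January 2014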

56. Fitting a linear trend to 36 monthly data points (January 2011 = 1, February 2011 = 2, March 2011 = 3, etc.) produced the following tables.

             Coefficients   Standard Error   t Statistic   p-value
Intercept    877.621        67.35824         13.02916      5.49E-15
x            -9.00907       3.17471          -2.83776      0.00751

             df    SS          MS          F          p-value
Regression   1     315319.3    315319.3    8.052885   0.007607
Residual     34    1331306     39156.07
Total        35    1646626

The projected trend value for January 2014 is ________.

a) 544.29

b) 868.61

c) 652.39

d) 760.50

e) 876.90

Response: See section 15.3 Trend Analysis

Difficulty: Easy

Learning Objective: 15.3: Determine trend in time-series data by using linear regression trend analysis, quadratic model trend analysis, and Holt’s two-parameter exponential smoothing method.

57. Which of the following is not a component of time series data?

a) Trend

b) Seasonal fluctuations

c) Cyclical fluctuations

d) Normal fluctuations

e) Irregular fluctuations

Response: See section 15.4 Seasonal Effects

Difficulty: Easy

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

58. Calculating the "ratios of actuals to moving average" is a common step in time series decomposition. The results (the quotients) of this step estimate the ________.

a) trend and cyclical components

b) seasonal and irregular components

c) cyclical and irregular components

d) trend and seasonal components

e) irregular components

Response: See section 15.4 Seasonal Effects

Difficulty: Hard

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

59. The high and low values of the "ratios of actuals to moving average" are ignored when finalizing the seasonal index for a period (month or quarter) in time series decomposition. The rationale for this is to ________.

a) reduce the sample size

b) eliminate autocorrelation

c) minimize serial correlation

d) eliminate the irregular component

e) eliminate the trend

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

60. The ratios of "actuals to moving averages" (seasonal indexes) for a time series are presented in the following table as percentages.

       2008     2009     2010     2011     2012
Q1              112.22   110.78   111.22   111.87
Q2              100.65   108.68   103.78   101.95
Q3     97.76    99.08    97.68    97.61
Q4     86.61    95.00    94.64    92.92

The final (completely adjusted) estimate of the seasonal index for Q1 is __________.

a) 109.733

b) 109.921

c) 113.853

d) 113.492

e) 111.545

Response: See section 15.4 Seasonal Effects

Difficulty: Easy

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

61. The ratios of "actuals to moving averages" (seasonal indexes) for a time series are presented in the following table as percentages.

       2008     2009     2010     2011     2012
Q1              112.22   110.78   111.22   111.87
Q2              100.65   108.68   103.78   101.95
Q3     97.76    99.08    97.68    97.61
Q4     86.61    95.00    94.64    92.92

The final (completely adjusted) estimate of the seasonal index for Q4 is __________.

a) 86.61

b) 90.90

c) 93.78

d) 92.29

e) 93.00

Ans: c

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.
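
A minimal Python sketch of the finalization step assumed in questions 60 and 61: within each quarter, drop the highest and lowest ratio, average the rest, and then scale the four results so they total 400:

ratios = {
    "Q1": [112.22, 110.78, 111.22, 111.87],
    "Q2": [100.65, 108.68, 103.78, 101.95],
    "Q3": [97.76, 99.08, 97.68, 97.61],
    "Q4": [86.61, 95.00, 94.64, 92.92],
}

# Drop the high and low ratio in each quarter and average what remains
trimmed = {q: (sum(r) - max(r) - min(r)) / (len(r) - 2) for q, r in ratios.items()}

# Final adjustment so the four seasonal indexes sum to 400
scale = 400 / sum(trimmed.values())
final = {q: round(v * scale, 3) for q, v in trimmed.items()}
print(trimmed, final)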

62. Given several years of quarterly data, the four-quarter moving average spanning Q3 of the second year through Q2 of the third year would be placed on the decomposition table between which two quarters?

a) second year Q3 and Q4

b) second year Q4 and third year Q2

c) third year Q1 and Q2

d) third year Q2 and Q3

e) second year Q4 and third year Q1

Ans: e

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

63. The effect of a four-quarter moving average can be described as ______________ the seasonal effects of the data.

a) emphasizing

b) dampening

c) removing

d) incorporating

e) normalizing

Ans: b

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

64. A seasonal index for quarterly data is found as the ratio of ____________ to ___________ and is then multiplied by 100.

a) actuals; medians

b) moving average; 8

c) actuals; moving averages

d) actuals; 4

e) 100; actuals

Ans: c

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

65. If the seasonal index values for four consecutive quarters are 86.3, 105.6, 99.2, and 100, respectively, then which quarter has the most activity compared with the base quarter?

a) Q1

b) Q2

c) cannot be determined

d) Q3

e) Q4

Ans: b

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

66. In an autoregressive forecasting model, the independent variable(s) is (are) ______.

a) time-lagged values of the dependent variable

b) first-order differences of the dependent variable

c) second-order, or higher, differences of the dependent variable

d) first-order quotients of the dependent variable

e) time-lagged values of the independent variable

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

67. Analysis of data for an autoregressive forecasting model produced the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   3.85094        3.745787         0.84426       0.34299
yt-1        0.70434        0.082849         -1.66023      0.13822
yt-2        -0.62669       0.035709         14.65044      6.69E-19

             df    SS          MS          F          p-value
Regression   2     135753.5    67876.76    107.3336   1.91E-17
Residual     43    27192.79    632.3904
Total        45    162946.3

The forecasting model is __________.

a) ŷt = 3.745787 + 0.082849yt-1 + 0.035709yt-2

b) ŷt = 3.85094 + 0.70434yt-1 - 0.62669yt-2

c) ŷt = 0.84426 - 1.66023yt-1 + 14.65023yt-2

d) ŷt = 0.34299 + 0.13822yt-1 + 9.69yt-2

e) ŷt = 0.34299 + 0.13822yt-1 - 6.69yt-2

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

68. Analysis of data for an autoregressive forecasting model produced the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   3.85094        3.745787         0.84426       0.34299
yt-1        0.70434        0.082849         -1.66023      0.103822
yt-2        -0.62669       0.035709         14.65044      6.69E-19

             df    SS          MS          F          p-value
Regression   2     135753.5    67876.76    107.3336   1.91E-17
Residual     43    27192.79    632.3904
Total        45    162946.3

The results indicate that __________.

a) the first predictor, yt-1, is significant at the 10% level

b) the second predictor, yt-2, is significant at the 1% level

c) all predictor variables are significant at the 5% level

d) none of the predictor variables are significant at the 5% level

e) the overall regression model is not significant at 5% level

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

69. Analysis of data for an autoregressive forecasting model produced the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   3.85094        3.745787         0.84426       0.34299
yt-1        0.70434        0.082849         -1.66023      0.103822
yt-2        -0.62669       0.035709         14.65044      6.69E-19

             df    SS          MS          F          p-value
Regression   2     135753.5    67876.76    107.3336   1.91E-17
Residual     43    27192.79    632.3904
Total        45    162946.3

The actual values of this time series, y, were 228, 54, and 191 for May, June, and July, respectively. The forecast value predicted by the model for July is __________.

a) -101.00

b) 104.54

c) 218.71

d) 21.56

e) -77.81

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

70. Analysis of data for an autoregressive forecasting model produced the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   3.85094        3.745787         0.84426       0.34299
yt-1        0.70434        0.082849         -1.66023      0.103822
yt-2        -0.62669       0.035709         14.65044      6.69E-19

             df    SS          MS          F          p-value
Regression   2     135753.5    67876.76    107.3336   1.91E-17
Residual     43    27192.79    632.3904
Total        45    162946.3

The actual values of this time series, y, were 228, 54, and 191 for May, June, and July, respectively. The predicted (forecast) value for August is __________.

a) -101.00

b) 104.54

c) 218.71

d) 21.56

e) -77.81

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.
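
Questions 69 and 70 apply the fitted second-order autoregressive equation to lagged actual values; a minimal Python sketch using the coefficients from the table above (function name is illustrative):

b0, b1, b2 = 3.85094, 0.70434, -0.62669   # intercept, lag-1 and lag-2 coefficients

def ar2_forecast(y_lag1, y_lag2):
    """Forecast y-hat(t) = b0 + b1*y(t-1) + b2*y(t-2)."""
    return b0 + b1 * y_lag1 + b2 * y_lag2

july_fit  = ar2_forecast(y_lag1=54,  y_lag2=228)   # fitted value for July (uses June and May)
aug_fcast = ar2_forecast(y_lag1=191, y_lag2=54)    # forecast for August (uses July and June)
print(round(july_fit, 2), round(aug_fcast, 2))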

71. Jim Royo, Manager of Billings Building Supply (BBS), wants to develop a model to forecast BBS's monthly sales (in $1,000's). He selects the dollar value of residential building permits (in $10,000) as the predictor variable. An analysis of the data yielded the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   222.1456       74.765           2.971252      0.007284
x           6.152885       1.895423         3.24618       0.003866

             df    SS          MS          F          p-value
Regression   1     259643.9    259643.9    10.53768   0.004046
Residual     20    492791.3    24639.56
Total        21    752435.2

Using α = 0.05, the critical value of the Durbin-Watson statistic, dL, is _________.

a) 1.24

b) 1.22

c) 1.13

d) 1.15

e) 1.85

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

72. Jim Royo, Manager of Billings Building Supply (BBS), wants to develop a model to forecast BBS's monthly sales (in $1,000's). He selects the dollar value of residential building permits (in $10,000) as the predictor variable. An analysis of the data yielded the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   222.1456       74.765           2.971252      0.007284
x           6.152885       1.895423         3.24618       0.003866

             df    SS          MS          F          p-value
Regression   1     259643.9    259643.9    10.53768   0.004046
Residual     20    492791.3    24639.56
Total        21    752435.2

Using α = 0.05, the critical value of the Durbin-Watson statistic, dU, is _________.

a) 1.54

b) 1.42

c) 1.43

d) 1.44

e) 1.85

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

73. Jim Royo, Manager of Billings Building Supply (BBS), wants to develop a model to forecast BBS's monthly sales (in $1,000's). He selects the dollar value of residential building permits (in $10,000) as the predictor variable. An analysis of the data yielded the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   222.1456       74.765           2.971252      0.007284
x           6.152885       1.895423         3.24618       0.003866

             df    SS          MS          F          p-value
Regression   1     259643.9    259643.9    10.53768   0.004046
Residual     20    492791.3    24639.56
Total        21    752435.2

Jim's calculated value for the Durbin-Watson statistic is 1.93. Using α = 0.05, the appropriate decision is: _________.

a) do not reject H0: ρ = 0

b) reject H0: ρ ≠ 0

c) do not reject H0: ρ ≠ 0

d) the test is inconclusive

e) reject H0: ρ = 0

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

74. Jim Royo, Manager of Billings Building Supply (BBS), wants to develop a model to forecast BBS's monthly sales (in $1,000's). He selects the dollar value of residential building permits (in $10,000) as the predictor variable. An analysis of the data yielded the following tables.

            Coefficients   Standard Error   t Statistic   p-value
Intercept   222.1456       74.765           2.971252      0.007284
x           6.152885       1.895423         3.24618       0.003866

             df    SS          MS          F          p-value
Regression   1     259643.9    259643.9    10.53768   0.004046
Residual     20    492791.3    24639.56
Total        21    752435.2

Jim's calculated value for the Durbin-Watson statistic is 1.14. Using α = 0.05, the appropriate decision is: _________.

a) do not reject H0: ρ = 0

b) reject H0: ρ = 0

c) do not reject H0: ρ ≠ 0

d) the test is inconclusive

e) reject H0: ρ ≠ 0

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.
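
A minimal Python sketch of the Durbin-Watson decision rule for positive autocorrelation that questions 73 and 74 rely on, with dL and dU taken from a Durbin-Watson table for n = 22 observations and one predictor at α = 0.05:

def dw_decision(d, d_lower, d_upper):
    """Classic Durbin-Watson decision for testing positive autocorrelation (H0: rho = 0)."""
    if d < d_lower:
        return "reject H0: rho = 0 (evidence of positive autocorrelation)"
    if d > d_upper:
        return "do not reject H0: rho = 0"
    return "test is inconclusive"

# Table values assumed for n = 22, one predictor, alpha = 0.05: dL = 1.24, dU = 1.43
print(dw_decision(1.93, 1.24, 1.43))
print(dw_decision(1.14, 1.24, 1.43))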

75. The motivation for using an index number is to ________________.

a) transform the data to a standard normal distribution

b) transform the data for a linear model

c) eliminate bias from the sample

d) reduce data to an easier-to-use, more convenient form

e) reduce the variance in the data

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

76. Often, index numbers are expressed as ____________.

a) percentages

b) frequencies

c) cycles

d) regression coefficients

e) correlation coefficients

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

77. Index numbers facilitate comparison of ____________.

a) means

b) data over time

c) variances

d) samples

e) deviations

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

78. Typically, the denominator used to calculate an index number is a measurement for the ____________ period.

a) base

b) current

c) spanning

d) intermediate

e) peak

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

79. Weighted aggregate price indexes are also known as _______.

a) unbalanced indexes

b) balanced indexes

c) value indexes

d) multiplicative indexes

e) overall indexes

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

80. When constructing a weighted aggregate price index, the weights usually are _____.

a) prices of substitute items

b) prices of complementary items

c) quantities of the respective items

d) squared quantities of the respective items

e) quality of individual items

Response: See section 15.6 Index Numbers

Difficulty: Easy

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

81. Using 2010 as the base year, the 2012 value of a simple price index for the following price data is _____________.

Year    2008    2009    2010    2011    2012    2013
Price   29.88   32.69   42.04   46.18   47.98   48.32

a) 77.60

b) 114.13

c) 160.58

d) 99.30

e) 100.00

Response: See section 15.6 Index Numbers

Difficulty: Medium

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.
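
Question 81 is a direct application of the simple index number, I = (price in the period of interest / price in the base period) × 100; a minimal Python sketch:

prices = {2008: 29.88, 2009: 32.69, 2010: 42.04, 2011: 46.18, 2012: 47.98, 2013: 48.32}
base_year = 2010

def simple_index(year):
    """Simple price index for a year relative to the base year, expressed as a percentage."""
    return prices[year] / prices[base_year] * 100

print(round(simple_index(2012), 2))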

82. Using 2000 as the base year, the 1990 value of the Paasche Price Index is ______. (Quantities are averages for the student body.)

Year                   1990               2000               2010
                       Price   Quantity   Price   Quantity   Price   Quantity
Tuition ($/3 hrs)      81      15.00      164     13.00      200     12.00
Books ($ each)         32      1.00       43      1.50       85      2.00
Calculator ($ each)    45      0.50       27      0.75       25      1.00

a) 80.72

b) 162.28

c) 240.06

d) 50.45

e) 30.35

Response: See section 15.6 Index Numbers

Difficulty: Medium

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

83. Using 2011 as the base year, the 2010 value of the Laspeyres Price Index is ______.

Year                2010               2011               2012
                    Price   Quantity   Price   Quantity   Price   Quantity
Wheat (¢/bushel)    370     100        372     110        255     120
Sugar (¢/pound)     13      50         12      40         12      30
Lard (¢/pound)      14      70         13      60         13      40

a) 69.92

b) 144.06

c) 100.21

d) 79.72

e) 99.72

Response: See section 15.6 Index Numbers

Difficulty: Medium

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

84. Using 2011 as the base year, the 2010 value of the Paasche Price Index is ______.

Year                2010               2011               2012
                    Price   Quantity   Price   Quantity   Price   Quantity
Wheat (¢/bushel)    370     100        372     110        255     120
Sugar (¢/pound)     13      50         12      40         12      30
Lard (¢/pound)      14      70         13      60         13      40

a) 99.79

b) 192.51

c) 100.29

d) 59.19

e) 39.99

Response: See section 15.6 Index Numbers

Difficulty: Medium

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.
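
Questions 82-84 use the two weighted aggregate formulas: Laspeyres weights prices by base-period quantities, Paasche by quantities from the year of interest. A minimal Python sketch applied to the wheat-sugar-lard data above (function names are illustrative):

def laspeyres(p_interest, p_base, q_base):
    """Laspeyres index: sum(P_t * Q_0) / sum(P_0 * Q_0) * 100."""
    num = sum(p * q for p, q in zip(p_interest, q_base))
    den = sum(p * q for p, q in zip(p_base, q_base))
    return num / den * 100

def paasche(p_interest, p_base, q_interest):
    """Paasche index: sum(P_t * Q_t) / sum(P_0 * Q_t) * 100."""
    num = sum(p * q for p, q in zip(p_interest, q_interest))
    den = sum(p * q for p, q in zip(p_base, q_interest))
    return num / den * 100

# 2010 prices and quantities versus the 2011 base year (wheat, sugar, lard)
p_2010, q_2010 = [370, 13, 14], [100, 50, 70]
p_2011, q_2011 = [372, 12, 13], [110, 40, 60]
print(round(laspeyres(p_2010, p_2011, q_2011), 2), round(paasche(p_2010, p_2011, q_2010), 2))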

85. A weighted aggregate price index where the weight for each item is computed by using the quantities of the base period is known as the

a) Paasche Index

b) Simple Index

c) Laspeyres Index

d) Consumer Price Index

e) Producer Price Index

Response: See section 15.6 Index Numbers

Difficulty: Medium

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

86. A weighted aggregate price index where the weight for each item is computed by using the quantities of the year of interest is known as the

a) Paasche Index

b) Simple Index

c) Laspeyres Index

d) Consumer Price Index

e) Producer Price Index

Response: See section 15.6 Index Numbers

Difficulty: Medium

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

87. A time series with forecast values is presented in the following table:

Month   Actual   Forecast
Jan     a
Mar     1.2a     1.1a
May     1.15a    1.2a
Jul     1.25a    1.21a
Sep     1.3a     1.25a

In this table, a is some undisclosed value. The mean square error (MSE) is ______% of a.

a) 4.15

b) 0.415

c) 4.15a

d) 0.415a

e) 0.00332

Response: See section 15.1 Introduction to Forecasting

Difficulty: Hard

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

88. A time series with forecast values is presented in the following table:

Month   Actual   Forecast
Jan     a
Mar     1.2a     1.1a
May     1.15a    1.2a
Jul     1.25a    1.21a
Sep     1.3a     1.25a

In this table, a is some undisclosed value. The mean absolute deviation (MAD) is ______% of a.

a) 6a

b) 0.06a

c) 6

d) 0.06

e) 4.8

Response: See section 15.1 Introduction to Forecasting

Difficulty: Hard

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

89. A time series with forecast values is presented in the following table:

Month   Actual   Forecast
Jan     a
Mar     1.2a     1.1a
May     1.15a    1.2a
Jul     1.25a    1.21a
Sep     1.3a     1.25a

If the mean absolute deviation (MAD) is 257, then a = ______.

a) 4283.33

b) 428.33

c) 15.42

d) 42.833

e) 1.542

Response: See section 15.1 Introduction to Forecasting

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

90. A time series with forecast values is presented in the following table:

Month   Actual   Forecast
Jan     1
Mar     1.2      *
May     1.15     *
Jul     1.25     *
Sep     1.3      *
Nov     1.275    x

If the mean absolute deviation (MAD) until September is 0.06, and the overall MAD is 0.05, then x = ______.

a) 1.270

b) 1.270 or 1.280

c) 1.265 or 1.285

d) 1.260 or 1.290

e) 1.285

Response: See section 15.1 Introduction to Forecasting

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

91. A time series with forecast values is presented in the following table:

Month   Actual   Forecast
Jan     1
Mar     1.2      *
May     1.15     *
Jul     1.25     *
Sep     1.3      *
Nov     1.275    x

If the mean square error (MSE) until September is 0.01125, and the overall MSE is 0.010125, then x = ______.

a) 1.15

b) 1.25

c) 1.3

d) 1.25 or 1.3

e) 1.2 or 1.35

Response: See section 15.1 Introduction to Forecasting

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.1: Differentiate among various measurements of forecasting error, including mean absolute deviation and mean square error, in order to assess which forecasting method to use.

92. The forecast value for July was 210 and the actual value turned out to be 195. The researcher is using exponential smoothing and determines that the forecast value for August is 206.25. Then he is using α = ______.

a) 0.35

b) 0.32

c) 0.30

d) 0.25

e) 0.22

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

93. The actual value of a variable for July was 195. The researcher is using exponential smoothing with α = 0.30 and determines that the forecast value for August is 205.5. Then the forecast value for July was ______.

a) 200

b) 202

c) 205

d) 207

e) 210

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

94. If a researcher is using exponential smoothing and determines that the forecast for the next period (Ft + 1) is the average of the actual value for the previous period (Xt) and the forecast value for the previous period (Ft), then α = ______.

a) 0.35

b) 0.40

c) 0.50

d) 0.55

e) there is not enough information to determine the value of α

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

95. If a researcher is using exponential smoothing and determines that the forecast for the next period (Ft + 1) is the weighted average of the actual value for the previous period (Xt) and the forecast value for the previous period (Ft), with weights of 1 and 3 respectively, then α = ______.

a) 0.25

b) 0.33

c) 0.67

d) 0.75

e) 0.90

Response: See section 15.2 Smoothing Techniques

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

96. If a researcher is using exponential smoothing and determines that the forecast for the next period (Ft + 1) is the weighted average of the actual value for the previous period (Xt) and the forecast value for the previous period (Ft), with weights of p and q respectively, then α = ______.

a) p/q

b) q/p

c) 1 − q/(p + q)

d) 1 − p/(p + q)

e) 1/(p + q)

Response: See section 15.2 Smoothing Techniques

Difficulty: Hard

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.

97. A researcher using exponential smoothing determines that the forecast for the next period (Ft + 1) coincides with the weighted average of the actual value for the previous period (Xt) and the forecast value for the previous period (Ft), with weights of p and q, respectively. If p = 2, then q = ______.

a) 2/α + 2

b) 2/(α + 2)

c) 2/(α − 2)

d) 1/(α − 2)

e) 2/α − 2

Response: See section 15.2 Smoothing Techniques

Difficulty: Hard

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.2: Describe smoothing techniques for forecasting models, including naïve, simple average, moving average, weighted moving average, and exponential smoothing.
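
Questions 94-97 rest on the identity that writing F(t+1) as a weighted average of X(t) and F(t) with weights p and q implies α = p / (p + q); a minimal Python sketch of the algebra:

from fractions import Fraction

def alpha_from_weights(p, q):
    """Smoothing constant implied by weights p (on the actual) and q (on the old forecast)."""
    return Fraction(p, p + q)

def q_from_alpha(p, alpha):
    """Weight on the old forecast implied by alpha and the weight p on the actual."""
    return Fraction(p, 1) / alpha - p   # rearranged from alpha = p / (p + q)

print(alpha_from_weights(1, 1))            # equal weights -> 1/2
print(alpha_from_weights(1, 3))            # weights 1 and 3 -> 1/4
print(q_from_alpha(2, Fraction(1, 4)))     # p = 2, alpha = 1/4 -> q = 6, i.e. q = 2/alpha - 2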

98. If the Year t, Quarter q actual value is 9,885 and the corresponding Year t, Quarter q seasonal index is 97.75, then the Year t, Quarter q deseasonalized value is ______.

a) 222.41

b) 9,662.59

c) 9,775.00

d) 10,083.18

e) 10,112.53

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.

99. If the Year t, Quarter q actual value is 9,885 and the Year t, Quarter q deseasonalized value is 10,112.53, then the Year t, Quarter q seasonal index is ______.

a) 0.022

b) 0.9775

c) 2.22

d) 97.75

e) 102.30

Response: See section 15.4 Seasonal Effects

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.4: Account for seasonal effects of time-series data by using decomposition and Winters’ three-parameter exponential smoothing method.
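
Questions 98 and 99 use the same relationship in both directions: deseasonalized value = actual / (seasonal index / 100). A minimal Python sketch:

def deseasonalize(actual, seasonal_index):
    """Deseasonalized value = actual divided by the seasonal index expressed as a proportion."""
    return actual / (seasonal_index / 100)

def implied_index(actual, deseasonalized):
    """Seasonal index (as a percentage) implied by an actual and its deseasonalized value."""
    return actual / deseasonalized * 100

print(round(deseasonalize(9885, 97.75), 2))        # about 10,112.53
print(round(implied_index(9885, 10112.53), 2))     # about 97.75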

100. Suppose that for a time-series model with one predictor, you compute a Durbin-Watson statistic of 0.625. Assume that n = 30 and α = 0.05. Then dL = ______.

a) 1.32

b) 1.35

c) 1.38

d) 1.41

e) 1.43

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Easy

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

101. Suppose that for a time-series model with one predictor, you compute a Durbin-Watson statistic D = 0.625. Assume that n = 30 and α = 0.05. Then your decision is ______.

a) fail to reject the null hypothesis D = 0

b) reject the null hypothesis D = 0

c) fail to reject the null hypothesis ρ = 0

d) reject the null hypothesis ρ = 0

e) fail to reject the null hypothesis ρ > 0

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

102. Suppose that for a time-series model with one predictor, you compute a Durbin-Watson statistic D = 1.409. Assume that n = 30 and α = 0.01. Then your decision is to ______.

a) fail to reject the null hypothesis D = 0

b) reject the null hypothesis D = 0

c) fail to reject the null hypothesis ρ = 0

d) reject the null hypothesis ρ = 0

e) fail to reject the null hypothesis ρ > 0

Response: See section 15.5 Autocorrelation and Autoregression

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.5: Test for autocorrelation using the Durbin-Watson test, overcoming autocorrelation by adding independent variables and transforming variables, and taking advantage of autocorrelation with autoregression.

103. The table below shows the prices in $ and quantities (thousands) for five specialized electronic components for 2000 and 2016.

 

             P_2000   Q_2000   P_2016   Q_2016
Ring A       1.58     35       2.16     37
Ring B       2.25     48       3.25     46
Capacitor    0.36     52       0.81     50
Sigma unit   1.27     48       1.59     52
CPU          4.15     28       5.5      28

The Paasche price index for 2016 using 2000 as base year is ______.

a) 139.87

b) 137.25

c) 140.33

d) 133.25

e) 131.87

Response: See section 15.6 Index Numbers

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

104. The table below shows the prices in $ and quantities (thousands) for five specialized electronic components for 2000 and 2016.

 

             P_2000   Q_2000   P_2016   Q_2016
Ring A       1.58     35       2.16     37
Ring B       2.25     48       3.25     46
Capacitor    0.36     52       0.81     50
Sigma unit   1.27     48       1.59     52
CPU          4.15     28       5.5      28

The Laspeyres price index for 2016 using 2000 as base year is ______.

a) 136.25

b) 137.33

c) 138.75

d) 139.87

e) 140.33

Response: See section 15.6 Index Numbers

Difficulty: Medium

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.

105. The table below shows the prices in $ and quantities (thousands) for five specialized electronic components for 2000 and 2016.

 

             P_2000   Q_2000   P_2016   Q_2016
Ring A       1.58     35       2.16     37
Ring B       2.25     48       3.25     46
Capacitor    0.36     52       0.81     50
Sigma unit   1.27     48       1.59     52
CPU          4.15     28       P        28

If the Paasche price index for 2016 using 2000 as base year is 137.75, then P = ______.

a) 4.75

b) 4.88

c) 5.23

d) 5.67

e) 5.72

Response: See section 15.6 Index Numbers

Difficulty: Hard

AACSB: Reflective thinking

Bloom’s level: Application

Learning Objective: 15.6: Differentiate among simple index numbers, unweighted aggregate price index numbers, weighted aggregate price index numbers, Laspeyres price index numbers, and Paasche price index numbers by defining and calculating each.
