Chapter 7: Standardized Measurement and Assessment

Learning Objectives:

  1. Explain the meaning of measurement.
  2. Explain the different scales of measurement, including the type of information communicated by each one.
  3. Articulate the seven assumptions underlying testing and assessment.
  4. Explain the meaning of reliability.
  5. Explain the characteristics of each of the methods for computing reliability.
  6. Explain the meaning of validity and validity evidence.
  7. Explain the different methods of collecting validity evidence.
  8. Identify the different types of standardized tests and the sources of information on these tests.

Multiple Choice

1. _____ is the process of assigning symbols or numbers to objects, events, people, or characteristics according to a specific set of rules.

a. Assessment

b. Evaluation

c. Measurement

d. Observation

Learning Objective: 1

Cognitive Domain: Knowledge

Answer Location: Defining Measurement

Difficulty Level: Easy

2. Which is the correct order of the scales of measurement?

a. Nominal, ordinal, interval, ratio

b. Ordinal, interval, ratio, nominal

c. Interval, ordinal, ratio, nominal

d. Ratio, nominal, ordinal, interval

Learning Objective: 2

Cognitive Domain: Comprehension

Answer Location: Scales of Measurement

Difficulty Level: Medium

3. Which one of the following is measured on a nominal scale?

a. Scores on an IQ test

b. Class rank in high school

c. Students’ gender

d. Number of items correct on a spelling test

Learning Objective: 2

Cognitive Domain: Application

Answer Location: Nominal Scale

Difficulty Level: Medium

4. Which one of the following variables is measured on an ordinal scale?

a. Class rank in high school

b. Racial or ethnic group

c. State of residence

d. Personality test score

Learning Objective: 2

Cognitive Domain: Application

Answer Location: Ordinal Scale

Difficulty Level: Medium

5. Which one of the following variables is measured on an interval scale?

a. Paul’s place in line

b. Gender of participants

c. Foreign language studied in high school

d. Score on an IQ test

Learning Objective: 2

Cognitive Domain: Application

Answer Location: Interval Scale

Difficulty Level: Medium

6. Which of the following variables is measured on a ratio scale?

a. Mia’s score on a personality test

b. Class valedictorian and salutatorian

c. Bill’s score on an achievement test

d. The time it took Betsy to finish an assignment, measured in seconds

Learning Objective: 2

Cognitive Domain: Application

Answer Location: Ratio Scale

Difficulty Level: Medium

7. An advantage of ratio and interval scales of measurement is that:

a. More powerful mathematics can be done on them

b. Most variables used in educational research have ratio properties

c. They are used to make qualitative distinctions only

d. There are no mathematical advantages to ratio scales

Learning Objective: 2

Cognitive Domain: Application

Answer Location: Interval Scale, Ratio Scale

Difficulty Level: Medium

8. Which of the following is true about testing and assessment?

a. Testing and assessment is usually error-free.

b. Test-related attitudes and behavior cannot be used to predict non-test-related attitudes and behavior.

c. There is only one source of error.

d. Traits and states can be quantified and measured.

Learning Objective: 3

Cognitive Domain: Comprehension

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Medium

9. There are many assumptions underlying testing and assessment. Which of the following is one of them?

a. A major decision should always be made on the basis of a single test score

b. Multiple sources of error are always present in the measurement process

c. Fair and unbiased tests cannot be developed

d. Standardized testing and assessment do not benefit society

Learning Objective: 3

Cognitive Domain: Comprehension

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Hard

10. The reliability of a test refers to:

a. The consistency or stability of the test scores.

b. Whether a test measures what it is supposed to measure.

c. Whether a test is valid.

d. Its content sampling.

Learning Objective: 4

Cognitive Domain: Knowledge

Answer Location: Reliability

Difficulty Level: Easy

11. The consistency of a set of test scores over time is known as:

a. Equivalent forms reliability

b. Split-half reliability

c. Test-retest reliability

d. Interscorer reliability

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Test-Retest Reliability

Difficulty Level: Easy

12. The accuracy of inferences or interpretations made from test scores is known as:

a. Reliability

b. Validity

c. Stability

d. Consistency

Learning Objective: 6

Cognitive Domain: Knowledge

Answer Location: Validity

Difficulty Level: Easy

13. When choosing a test or assessment procedure, you should choose one that is:

a. Reliable only

b. Valid only

c. Neither reliable nor valid

d. Both reliable and valid

Learning Objective: 3

Cognitive Domain: Comprehension

Answer Location: Overview of Reliability and Validity

Difficulty Level: Medium

14. Error that is present every time an instrument is used is known as:

a. Assessment error

b. Testing error

c. Systematic error

d. Unregulated error

Learning Objective: 6

Cognitive Domain: Knowledge

Answer Location: Overview of Reliability and Validity

Difficulty Level: Easy

15. _____ refers to the consistency or stability of test scores.

a. Reliability

b. Validity

c. Convergent evidence

d. Interpolation

Learning Objective: 4

Cognitive Domain: Knowledge

Answer Location: Reliability

Difficulty Level: Easy

16. Systematic (i.e., non-random) error is associated with:

a. Reliability problems

b. Validity problems

c. Homogeneous test items

d. None of the above

Learning Objective: 6

Cognitive Domain: Comprehension

Answer Location: Overview of Reliability and Validity

Difficulty Level: Medium

17. A test that is internally consistent is also said to be:

a. Homogeneous

b. Heterogeneous

c. Multidimensional

d. Valid

Learning Objective: 5

Cognitive Domain: Comprehension

Answer Location: Internal Consistency Reliability

Difficulty Level: Medium

18. Coefficient alpha is an index of which of the following?

a. Test-retest reliability

b. Interscorer reliability

c. Equivalent-forms reliability

d. Internal consistency reliability

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Internal Consistency Reliability

Difficulty Level: Easy
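
For illustration, a minimal Python sketch (assuming NumPy is available; the scores are invented) of how coefficient alpha is computed from a respondents-by-items score matrix:

import numpy as np

def cronbach_alpha(item_scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                          # number of items
    item_variances = item_scores.var(axis=0, ddof=1)  # sample variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented example: 5 respondents answering 4 items scored 1-5
scores = [[4, 5, 4, 5],
          [2, 3, 2, 3],
          [5, 5, 4, 4],
          [1, 2, 1, 2],
          [3, 3, 3, 4]]
print(round(cronbach_alpha(scores), 2))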

19. Test-retest reliability refers to:

a. Whether test items are answered consistently within a test

b. Whether scores on the first half of a test correlate with scores on the second half

c. Whether scores on a test given at one point in time correlate with scores on the same test at a second testing

d. Whether experts agree that a test measures what it is supposed to measure

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Test-Retest Reliability

Difficulty Level: Easy

20. Equivalent-forms reliability refers to:

a. The consistency of a group of individuals’ scores on two different forms of a test measuring the same thing

b. Whether scores on the first half of a test correlate with scores on the second half

c. Whether experts agree that a test measures what it is supposed to measure

d. Whether a test is predictive of behavior

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Equivalent-Forms Reliability

Difficulty Level: Easy

21. The Spearman-Brown formula is used to:

a. Correct validity coefficients

b. Correct test-retest reliability coefficients

c. Correct equivalent forms reliability coefficients

d. Correct the split-half reliability coefficient

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Internal Consistency Reliability

Difficulty Level: Easy
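
For illustration, a minimal Python sketch (the function name and sample value are invented) of the Spearman-Brown correction that converts a half-test correlation into a full-length split-half reliability estimate:

def spearman_brown(r_half, factor=2):
    # Corrected reliability = factor * r_half / (1 + (factor - 1) * r_half)
    return factor * r_half / (1 + (factor - 1) * r_half)

# If the two halves of a test correlate .60, the corrected estimate for the full test is:
print(round(spearman_brown(0.60), 2))  # 0.75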

22. A general rule of thumb states that coefficient alpha should be:

a. Greater than or equal to .05

b. Greater than or equal to .70

c. Less than .70

d. Less than .05

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Internal Consistency Reliability

Difficulty Level: Easy

23. As the number of items on a test increases, coefficient alpha tends to _____.

a. Increase

b. Decrease

c. Stay the same

d. No longer be important

Learning Objective: 5

Cognitive Domain: Comprehension

Answer Location: Internal Consistency Reliability

Difficulty Level: Medium

24. Jane needs to calculate interscorer reliability for her study. She will do this by:

a. Correlating the judgments of two raters

b. Calculating coefficient alpha

c. Calculating a test-retest correlation

d. Calculating KR-20

Learning Objective: 5

Cognitive Domain: Application

Answer Location: Interscorer Reliability

Difficulty Level: Medium
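
For illustration, a minimal Python sketch (assuming NumPy; the ratings are invented) of what correlating the judgments of two raters looks like:

import numpy as np

# Invented ratings of the same ten essays by two independent scorers
rater_a = np.array([4, 3, 5, 2, 4, 3, 5, 1, 2, 4])
rater_b = np.array([4, 2, 5, 2, 3, 3, 5, 1, 2, 5])

# Interscorer reliability estimated as the Pearson correlation between the two sets of judgments
print(round(np.corrcoef(rater_a, rater_b)[0, 1], 2))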

25. Which of the following is a type of criterion-related validity evidence?

a. Concurrent evidence

b. Content evidence

c. Expert judgments

d. Factor analysis

Learning Objective: 7

Cognitive Domain: Comprehension

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

26. Content-related validity evidence refers to:

a. Judgment that the test is reliable

b. Judgment that the test is internally consistent

c. Correlation between scores on the test and a criterion behavior

d. Judgment that the test adequately samples and represents the construct domain it purports to measure

Learning Objective: 7

Cognitive Domain: Knowledge

Answer Location: Evidence Based on Content

Difficulty Level: Medium

27. Criterion-related validity evidence refers to whether a test:

a. Adequately samples the content it purports to measure

b. Is predictive of a future or concurrent behavior

c. Is internally consistent

d. Meets the criterion for reliability

Learning Objective: 7

Cognitive Domain: Comprehension

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

28. Concurrent validity evidence refers to:

a. The relationship between test scores and criterion scores obtained at the same time

b. Whether scores on a test correlate with scores on the same test given a week later

c. Whether a test predicts scores on some criterion measure (e.g., success in training)

d. Whether a test has an adequate coefficient alpha

Learning Objective: 7

Cognitive Domain: Comprehension

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

29. A researcher has developed a test and is collecting psychometric data so that the test can be published. Currently, she is collecting data about the psychological and social consequences of using the test. She is doing this to:

a. Establish divergent validity

b. Collect what is known as known groups evidence

c. Establish consequential validity

d. Establish convergent validity

Learning Objective: 7

Cognitive Domain: Analysis

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Hard

30. Dr. Spock has developed a test to identify students who are gifted. He administered the test to students with very high IQs and students with very low IQs and compared the scores. He found that students with high IQs got very high scores on his giftedness test. These results show:

a. Concurrent evidence

b. Known groups evidence

c. Content-related evidence

d. Factor analytic evidence

Learning Objective: 7

Cognitive Domain: Analysis

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Hard

31. If a test measures a single construct then:

a. The items should correlate with the total score.

b. The items should not correlate with the total score.

c. The test should not correlate with other measures of the same construct.

d. There must be a reliable alternative form.

Learning Objective: 5

Cognitive Domain: Analysis

Answer Location: Internal Consistency Reliability

Difficulty Level: Hard

32. A test is homogeneous if:

a. The test measures a single construct

b. The test has high test-retest reliability

c. The items do not correlate with the total score

d. There is a reliable alternative form

Learning Objective: 5

Cognitive Domain: Comprehension

Answer Location: Internal Consistency Reliability

Difficulty Level: Hard

33. Which of the following is an example of convergent validity evidence?

a. The test has a high coefficient alpha

b. The test has a high test-retest correlation

c. The test correlates highly with other tests of the same construct

d. The test has a high KR-20 coefficient

Learning Objective: 7

Cognitive Domain: Application

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

34. Professor X develops a test of emotional intelligence. Which of the following represent convergent and discriminant evidence?

a. The test correlates highly with another test of emotional intelligence and is not correlated with self-efficacy

b. The test correlates highly with another test of emotional intelligence and is highly correlated with self-efficacy

c. The test does not correlate with another test of emotional intelligence, but does correlate with self-efficacy

d. The test correlates with neither other tests of emotional intelligence nor tests of self-efficacy

Learning Objective: 7

Cognitive Domain: Analysis

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

35. A test has a coefficient alpha of .75. This is evidence of:

a. Interscorer reliability

b. Alternate forms reliability

c. Internal consistency reliability

d. Test-retest reliability

Learning Objective: 5

Cognitive Domain: Comprehension

Answer Location: Internal Consistency Reliability

Difficulty Level: Medium

36. A group of supervisors agrees that 16 items measure issues important to a particular kind of work. This is evidence of:

a. Criterion-related validity evidence

b. Content-related validity evidence

c. Convergent validity evidence

d. Divergent validity evidence

Learning Objective: 7

Cognitive Domain: Application

Answer Location: Evidence Based on Content

Difficulty Level: Medium

37. A validity coefficient is a correlation between:

a. Scores and a relevant criterion

b. Scores on the same test given two weeks apart to the same person

c. The scores on even and odd items on a test

d. Scores on two separate forms of the same test

Learning Objective: 7

Cognitive Domain: Comprehension

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

38. A test correlates .90 with a test of the same construct, but .10 with a test of a different construct. This is evidence of:

a. Criterion-related validity evidence

b. Content-related validity evidence

c. Internal consistency reliability

d. Convergent and discriminant validity

e. Alternate forms reliability

Learning Objective: 7

Cognitive Domain: Analysis

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

39. Factor analysis is used to determine:

a. The number of underlying dimensions or factors measured by a test

b. Test-retest reliability

c. Interrater reliability

d. Alternate forms reliability

Learning Objective: 7

Cognitive Domain: Knowledge

Answer Location: Evidence Based on Internal Structure

Difficulty Level: Easy
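
For illustration, a minimal Python sketch (assuming NumPy; the data are simulated) of one common heuristic for judging the number of underlying factors: counting eigenvalues of the item correlation matrix that exceed 1 (the Kaiser criterion):

import numpy as np

rng = np.random.default_rng(0)
# Simulated data: 200 respondents, 6 items; items 1-3 and items 4-6 each share one latent factor
factor1 = rng.normal(size=(200, 1))
factor2 = rng.normal(size=(200, 1))
items = np.hstack([factor1 + rng.normal(scale=0.5, size=(200, 3)),
                   factor2 + rng.normal(scale=0.5, size=(200, 3))])

eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
print("Factors suggested:", int((eigenvalues > 1).sum()))  # prints 2 for this simulated structure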

40. An employment test correlates .90 with work performance after one year. This is a type of:

a. Criterion-related validity evidence

b. Content-related validity evidence

c. Internal consistency reliability

d. Convergent/discriminant validity

Learning Objective: 7

Cognitive Domain: Application

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

41. Intelligence tests typically measure:

a. The ability to think abstractly and learn from experience

b. State variables

c. Learning styles

d. School achievement ability

Learning Objective: 8

Cognitive Domain: Knowledge

Answer Location: Intelligence Tests

Difficulty Level: Easy

42. The difference between achievement and aptitude tests is that:

a. Achievement tests measure the results of formal learning more than aptitude tests do.

b. Aptitude test scores are unrelated to school performance.

c. Achievement test scores are influenced mostly by informal learning.

d. Achievement test scores do not predict school performance.

Learning Objective: 8

Cognitive Domain: Knowledge

Answer Location: Achievement Tests, Aptitude Tests

Difficulty Level: Medium

43. A trait refers to:

a. How one acts when alone

b. An individual’s state in one particular situation

c. How someone feels during times of stress

d. Enduring characteristics of people

Learning Objective: 3

Cognitive Domain: Knowledge

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Easy

44. A scale of measurement that uses symbols, such as words or numbers, to label, classify, or identify people or objects is which of the following?

a. Nominal scale

b. Ordinal scale

c. Interval scale

d. Ratio scale

Learning Objective: 2

Cognitive Domain: Knowledge

Answer Location: Nominal Scale

Difficulty Level: Easy

45. What is the name of the scale of measurement that has a true zero point as well as the characteristics of rank ordering and equal distances between scale points?

a. Nominal scale

b. Ordinal scale

c. Interval scale

d. Ratio scale

Learning Objective: 2

Cognitive Domain: Knowledge

Answer Location: Ratio Scale

Difficulty Level: Easy

46. A scale of measurement that has equal distances between adjacent scale points but does not have a true zero point is known as:

a. Nominal scale

b. Ordinal scale

c. Interval scale

d. Ratio scale

Learning Objective: 2

Cognitive Domain: Knowledge

Answer Location: Interval Scale

Difficulty Level: Easy

47. A rank-order scale of measurement is:

a. Nominal scale

b. Ordinal scale

c. Interval scale

d. Ratio scale

Learning Objective: 2

Cognitive Domain: Knowledge

Answer Location: Ordinal Scale

Difficulty Level: Easy

48. The gathering and integrating of data to make educational evaluations is specifically called:

a. Measurement

b. Testing

c. Assessment

d. Survey

Learning Objective: 1

Cognitive Domain: Knowledge

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Easy

49. Distinguishable, relatively enduring ways in which one individual differs from another are which of the following?

a. Reliability

b. Traits

c. States

d. Systematic Error

Learning Objective: 3

Cognitive Domain: Knowledge

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Easy

50. Ms. Green and Mr. Potter conduct behavioral observations for their school. When comparing results on the same children, Ms. Green’s ratings are always more negative than Mr. Potter’s. This is an example of:

a. Random Error

b. Traits

c. States

d. Systematic Error

Learning Objective: 6

Cognitive Domain: Analysis

Answer Location: Overview of Reliability and Validity

Difficulty Level: Hard

51. The specific group for which the test publisher or researcher provides evidence for test validity and reliability is known as the _____.

a. Test group

b. Benchmark group

c. Norming Group

d. Comparison Group

Learning Objective: 8

Cognitive Domain: Comprehension

Answer Location: Using Reliability and Validity Information in Your Research

Difficulty Level: Easy

52. The consistency with which the items on a test measure a single construct is called:

a. Reliability Coefficient

b. Test-Retest Reliability

c. Equivalent Forms Reliability

d. Internal Consistency

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Internal Consistency Reliability

Difficulty Level: Easy

53. Consistency of scores on a test over time is known as:

a. Reliability Coefficient

b. Test-Retest Reliability

c. Equivalent Forms Reliability

d. Internal Consistency

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Test-Retest Reliability

Difficulty Level: Easy

54. Validity evidence based on the extent to which scores from a test can be used to predict or infer performance on some criterion, such as another test or future performance, is which of the following?

a. Test-retest evidence

b. Concurrent evidence

c. Criterion-related evidence

d. Alternative forms evidence

Learning Objective: 7

Cognitive Domain: Knowledge

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Easy

55. Validity evidence based on the relationship between test scores collected at one point in time and criterion scores obtained at a later time is known as:

a. Norm-Related Evidence

b. Concurrent Evidence

c. Predictive Evidence

d. Discriminant Evidence

Learning Objective: 7

Cognitive Domain: Knowledge

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Easy

56. Evidence that the scores on your focal test are not highly related to the scores from other tests that are designed to measure theoretically different constructs is known as:

a. Criterion-Related Evidence

b. Concurrent Evidence

c. Convergent Evidence

d. Discriminant Evidence

Learning Objective: 7

Cognitive Domain: Knowledge

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Easy

57. Validity evidence based on the relationship between the focal test scores and independent measures of the same construct is known as:

a. Criterion-Related Evidence

b. Predictive Evidence

c. Convergent Evidence

d. Discriminant Evidence

Learning Objective: 7

Cognitive Domain: Knowledge

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Easy

58. Validity evidence based on the relationship between test scores and criterion scores obtained at the same time is known as:

a. Content-Related Evidence

b. Concurrent Evidence

c. Predictive Evidence

d. Alternate Forms Evidence

Learning Objective: 7

Cognitive Domain: Knowledge

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Easy

59. The relatively permanent patterns that characterize individuals and can be used to classify them are called:

a. Intelligence

b. Personality

c. Homogeneity

d. IQ

Learning Objective: 8

Cognitive Domain: Knowledge

Answer Location: Personality Tests

Difficulty Level: Easy

60. In order to improve counseling services, a researcher wants to understand what affects high school students’ beliefs and feelings immediately after they have received low or high test scores. This researcher is planning to examine what?

a. Traits

b. Reliability

c. States

d. Validity

Learning Objective: 3

Cognitive Domain: Analysis

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Medium

61. A researcher wants to understand what helps some students from disadvantaged backgrounds to excel throughout their school years while others do not. This researcher is planning to examine what?

a. Traits

b. Reliability

c. States

d. Validity

Learning Objective: 3

Cognitive Domain: Analysis

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Medium

62. When constructing an instrument to measure attitudes toward exercise, a researcher had a panel of experts examine the items generated to make sure the items adequately represented the domain of interest. This would be an example of:

a. Reliability analysis

b. Factor analysis

c. Content-related validity evidence

d. Validity coefficient

Learning Objective: 7

Cognitive Domain: Application

Answer Location: Evidence Based on Content

Difficulty Level: Medium

63. If a research project was designed to determine if a questionnaire given to newly married couples can determine the likelihood that they will eventually divorce, this would be an example of what type of evidence?

a. Concurrent

b. Content

c. Discriminant

d. Predictive

Learning Objective: 7

Cognitive Domain: Application

Answer Location: Evidence Based on Relations to Other Variables

Difficulty Level: Medium

64. If a researcher collects data using an instrument from a group of participants who vary greatly from the instrument’s norming group, what does this mean?

a. The study is only moderately valid

b. You should downgrade your evaluation of the quality of the research study because the measurement was of highly questionable validity

c. You can assume that the instrument still worked well

d. The study had a high interscorer reliability correlation

Learning Objective: 8

Cognitive Domain: Application

Answer Location: Using Reliability and Validity Information for Your Research

Difficulty Level: Medium

65. A research team is developing a test of school readiness. They believe that there are four dimensions of school readiness. They completed a factor analysis on their data. If the test was developed well, how many factors should have been evident in the factor analysis?

a. One

b. Three

c. Four

d. There is not enough information to tell.

Learning Objective: 8

Cognitive Domain: Analysis

Answer Location: Evidence Based on Internal Structure

Difficulty Level: Hard

66. For one of your tests in this class, which is focused on educational research, the instructor is going to give you a calculus test. Which of the following describes the outcome of using this test in your class?

a. The test is a reliable and valid test for this class

b. The test is reliable but not valid for this class

c. The test is not reliable but valid for this class

d. The test is neither reliable nor valid

Learning Objective: 4, 6

Cognitive Domain: Analysis

Answer Location: Overview of Reliability and Validity

Difficulty Level: Hard

67. A test developer was looking at the reliability and validity of the test she developed. She found that the coefficient alpha was -.88 and that the convergent validity coefficient was -.92. What can she say about the reliability and validity of her test?

a. The test is reliable and valid

b. The test is reliable but not valid

c. The test is not reliable but valid

d. The test is neither reliable nor valid

Learning Objective: 4, 6

Cognitive Domain: Analysis

Answer Location: Overview of Reliability and Validity

Difficulty Level: Hard

68. The difference between true scores and observed scores on tests is called:

a. Error

b. Homogeneity

c. Heterogeneity

d. Reliability

Learning Objective: 3

Cognitive Domain: Comprehension

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Medium

69. Which of the following is a form of internal consistency reliability?

a. Test-retest

b. Interscorer

c. Split-half

d. Equivalent forms

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Internal Consistency Reliability

Difficulty Level: Medium

70. It is not a good idea to assess test-retest reliability by giving the same test twice only a few days apart because:

a. The correlation will be larger due to remembering how items were answered

b. The correlation will be smaller due to learning new things

c. Test-retest reliability is not assessed by giving the same test twice

d. To calculate test-retest reliability the two tests must be given 6 hours apart

Learning Objective: 5

Cognitive Domain: Application

Answer Location: Test-Retest Reliability

Difficulty Level: Medium

71. Clyde’s personality is being assessed in a laboratory setting. He is asked to interact with new people and his behavior is rated. This is an example of a(n):

a. Self-report measure

b. Projective measure

c. Performance measure

d. Achievement measure

Learning Objective: 8

Cognitive Domain: Analysis

Answer Location: Personality Tests

Difficulty Level: Medium

72. Anna is participating in a research project. She is asked to look at ambiguous drawings and tell the researcher what she sees in the drawings. This is an example of a(n):

a. Self-report measure

b. Projective measure

c. Performance measure

d. Achievement measure

Learning Objective: 8

Cognitive Domain: Application

Answer Location: Personality Tests

Difficulty Level: Medium

73. A weekly spelling test is a(n):

a. Intelligence test

b. Projective test

c. Aptitude test

d. Achievement test

Learning Objective: 8

Cognitive Domain: Application

Answer Location: Achievement Tests

Difficulty Level: Medium

74. An achievement test that assesses reading, writing, mathematics, spelling, and science would be called a(n):

a. Intelligence test

b. Achievement battery

c. Aptitude test

d. Achievement test

Learning Objective: 8

Cognitive Domain: Knowledge

Answer Location: Achievement Tests

Difficulty Level: Medium

75. Aptitude tests measure:

a. Information that is acquired through the cumulative effects of experience

b. Specific information learned in formal and structured environments

c. Distinctive patterns of emotions, feelings, and actions

d. Ability to think abstractly and to easily learn from experience

Learning Objective: 8

Cognitive Domain: Comprehension

Answer Location: Intelligence Tests, Personality Tests, Achievement Tests, Aptitude Tests

Difficulty Level: Medium

True and False

1. Another name for coefficient alpha is Cronbach’s alpha.

a. True

b. False

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Internal Consistency Reliability

Difficulty Level: Easy

2. The same test can be valid with one group of people but invalid with a different group of people.

a. True

b. False

Learning Objective: 6

Cognitive Domain: Comprehension

Answer Location: Validity

Difficulty Level: Medium

3. Evidence based on content is often used in the process of validation.

a. True

b. False

Learning Objective: 7

Cognitive Domain: Comprehension

Answer Location: Evidence Based on Content

Difficulty Level: Easy

4. Reliability is a necessary but not sufficient condition for validity.

a. True

b. False

Learning Objective: 4, 6

Cognitive Domain: Comprehension

Answer Location: Overview of Reliability and Validity

Difficulty Level: Easy

5. Validity is a necessary but not sufficient condition for reliability.

a. True

b. False

Learning Objective: 4, 6

Cognitive Domain: Comprehension

Answer Location: Overview of Reliability and Validity

Difficulty Level: Easy

6. Coefficient alpha tells you the degree to which the items on a test are interrelated.

a. True

b. False

Learning Objective: 5

Cognitive Domain: Knowledge

Answer Location: Internal Consistency Reliability

Difficulty Level: Easy

7. A reliability coefficient of .4 is sufficient for research purposes.

a. True

b. False

Learning Objective: 4

Cognitive Domain: Application

Answer Location: Reliability

Difficulty Level: Medium

8. Three researchers view the same interview and develop the same basic set of findings about the data. This would be an example of interscorer reliability.

a. True

b. False

Learning Objective: 5

Cognitive Domain: Application

Answer Location: Interscorer Reliability

Difficulty Level: Medium

9. A researcher wishing to determine the consistency of scores on a test decided to use split-half reliability. To do so, the researcher put items 1–50 in one group and items 51–100 in the second group. This is the recommended way to divide the items for this type of reliability measure.

a. True

b. False

Learning Objective: 5

Cognitive Domain: Application

Answer Location: Internal Consistency Reliability

Difficulty Level: Medium
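
For contrast with the first-half/second-half split described in this item, a minimal Python sketch (assuming NumPy; data simulated) of the commonly recommended odd-even split, with the half-test correlation then stepped up by the Spearman-Brown formula:

import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(size=100)                                            # latent ability of 100 examinees
items = (ability[:, None] + rng.normal(size=(100, 20)) > 0).astype(int)   # 20 items scored 0/1

odd_half = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, ... (odd-numbered)
even_half = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, ... (even-numbered)

r_half = np.corrcoef(odd_half, even_half)[0, 1]
print(round(2 * r_half / (1 + r_half), 2))  # Spearman-Brown corrected split-half reliability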

10. An aptitude test focuses on information that is acquired through the informal learning that goes on in life.

a. True

b. False

Learning Objective: 8

Cognitive Domain: Comprehension

Answer Location: Aptitude Tests

Difficulty Level: Medium

11. All valid tests are reliable.

a. True

b. False

Learning Objective: 4, 6

Cognitive Domain: Comprehension

Answer Location: Overview of Reliability and Validity

Difficulty Level: Medium

12. All reliable tests are valid.

a. True

b. False

Learning Objective: 4, 6

Cognitive Domain: Comprehension

Answer Location: Overview of Reliability and Validity

Difficulty Level: Medium

13. Even with the best psychological and educational tests, scores are influenced by error.

a. True

b. False

Learning Objective: 3

Cognitive Domain: Comprehension

Answer Location: Assumptions Underlying Testing and Assessment

Difficulty Level: Medium

14. Intelligence is typically measured using self-report, projective, and performance measures.

a. True

b. False

Learning Objective: 8

Cognitive Domain: Comprehension

Answer Location: Intelligence Tests

Difficulty Level: Medium

15. Coefficient alpha can provide reliability and validity evidence.

a. True

b. False

Learning Objective: 5, 7

Cognitive Domain: Comprehension

Answer Location: Internal Consistency Reliability, Evidence Based on Internal Structure

Difficulty Level: Hard

16. The predictive ability of preschool tests is as good as that of tests used for adults.

a. True

b. False

Learning Objective: 8

Cognitive Domain: Knowledge

Answer Location: Preschool Tests

Difficulty Level: Easy

17. Achievement tests are used to predict success in graduate and professional schools.

a. True

b. False

Learning Objective: 8

Cognitive Domain: Knowledge

Answer Location: Achievement Tests, Aptitude Tests

Difficulty Level: Easy

18. Aptitude tests measure how well people have learned what they have been taught.

a. True

b. False

Learning Objective: 8

Cognitive Domain: Knowledge

Answer Location: Achievement Tests, Aptitude Tests

Difficulty Level: Easy

Essay

1. Describe the characteristics of the four scales of measurement and give an example of each.

Learning Objective: 2

Cognitive Domain: Knowledge

Answer Location: Scales of Measurement

Difficulty Level: Easy

2. Compare and contrast reliability and validity. Be sure to address the relationship between the two.

Learning Objective: 4, 5, 6, 7

Cognitive Domain: Analysis

Answer Location: Overview of Reliability and Validity, Reliability, Validity

Difficulty Level: Hard

3. How should you use information on reliability and validity in your research?

Learning Objective: 4, 6

Cognitive Domain: Comprehension

Answer Location: Using Reliability and Validity Information

Difficulty Level: Easy

4. Choose one type of reliability and one type of validity. For each one, define it and give an example of how you would establish that type of reliability or validity.

Learning Objective: 4, 5, 6, 7

Cognitive Domain: Application

Answer Location: Overview of Reliability and Validity, Reliability, Validity

Difficulty Level: Medium

5. Compare and contrast achievement and aptitude tests.

Learning Objective: 8

Cognitive Domain: Comprehension

Answer Location: Aptitude Tests, Achievement Tests

Difficulty Level: Medium
