Test Bank for Educational Research: Quantitative Approaches, 7th Edition, by R. Burke Johnson
Chapter 7: Standardized Measurement and Assessment
Test Bank
Multiple Choice
1. ______ is the process of assigning symbols or numbers to objects, events, people, or characteristics according to a specific set of rules.
A. Assessment
B. Evaluation
C. Measurement
D. Observation
Learning Objective: 7-1: Explain the meaning of measurement.
Cognitive Domain: Knowledge
Answer Location: Defining Measurement
Difficulty Level: Easy
2. Which is the correct order of the scales of measurement (from lowest level of quantification to highest level)?
A. nominal, ordinal, interval, ratio
B. ordinal, interval, ratio, nominal
C. interval, ordinal, ratio, nominal
D. ratio, nominal, ordinal, interval
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Comprehension
Answer Location: Scales of Measurement
Difficulty Level: Medium
3. Which one of the following is measured on a nominal scale?
A. scores on an IQ test
B. class rank in high school
C. students’ gender
D. number of items correct on a spelling test
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Application
Answer Location: Nominal Scale
Difficulty Level: Hard
4. Which one of the following variables is measured on an ordinal scale?
A. class rank in high school
B. racial or ethnic group
C. state of residence
D. personality test score
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Application
Answer Location: Ordinal Scale
Difficulty Level: Hard
5. Which one of the following variables is measured on an interval scale?
A. Paul’s place in line
B. gender of participants
C. foreign language studied in high school
D. score on an IQ test
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Application
Answer Location: Interval Scale
Difficulty Level: Hard
6. Which of the following variables is measured on a ratio scale?
A. Mia’s score on a personality test
B. class valedictorian and salutatorian
C. Bill’s score on an achievement test
D. the time it took Betsy to finish an assignment, measured in seconds
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Application
Answer Location: Ratio Scale
Difficulty Level: Hard
7. An advantage of ratio and interval scales of measurement is that ______.
A. more powerful mathematics can be done on them
B. most variables used in educational research have ratio properties
C. they are used to make qualitative distinctions only
D. there are no mathematical advantages to ratio scales
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Application
Answer Location: Interval Scale | Ratio Scale
Difficulty Level: Hard
8. Which of the following is true about testing and assessment?
A. Testing and assessment are usually error-free.
B. Single assessments can be used to make important decisions.
C. It takes little work to create a quality assessment.
D. Psychological traits and states exist.
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Comprehension
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Medium
9. When making a major decision about an individual, how should tests and assessments be used?
A. Tests and assessments should never be used.
B. A single reliable and valid test can be used.
C. Multiple valid assessments should be used.
D. A single valid test is enough if administered properly by a professional.
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Comprehension
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Medium
10. The reliability of a test refers to ______.
A. the consistency or stability of the test scores
B. whether a test measures what it is supposed to measure
C. whether a test is valid
D. its content sampling
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Knowledge
Answer Location: Reliability
Difficulty Level: Easy
11. The consistency of a set of test scores over time is known as ______.
A. equivalent-forms reliability
B. split-half reliability
C. test–retest reliability
D. interscorer reliability
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Test–Retest Reliability
Difficulty Level: Easy
12. The accuracy of inferences or interpretations made from test scores is known as ______.
A. reliability
B. validity
C. stability
D. consistency
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Knowledge
Answer Location: Validity
Difficulty Level: Easy
13. When choosing a test or assessment procedure, you should choose one that is ______.
A. reliable only
B. valid only
C. neither reliable nor valid
D. both reliable and valid
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Comprehension
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
14. Error that is present every time an instrument is administered and that biases the results is known as ______.
A. assessment error
B. testing error
C. systematic error
D. unregulated error
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Knowledge
Answer Location: Overview of Reliability and Validity
Difficulty Level: Easy
15. ______ refers to the consistency or stability of test scores.
A. Reliability
B. Validity
C. Convergent evidence
D. Interpolation
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Knowledge
Answer Location: Reliability
Difficulty Level: Easy
16. Which of the following statements about how contemporary scholars view validity is true?
A. There are many different types of validity.
B. There are different facets of validity, but validity is a unitary concept.
C. Content validity is the only kind of evidence needed to establish validity.
D. Once validity is established, there is no need to collect additional evidence.
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Comprehension
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
17. Which of the following best captures what we mean when we say a test has validity?
A. The assessment gives consistent results.
B. The preponderance of evidence supports the inferences made from and about the test.
C. An expert states that the test measures what it is supposed to measure.
D. The test shows evidence of homogeneity.
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Comprehension
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
18. A test that is internally consistent is also said to be ______.
A. homogeneous
B. heterogeneous
C. multidimensional
D. valid
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Comprehension
Answer Location: Internal Consistency Reliability
Difficulty Level: Medium
19. Coefficient α is an index of which of the following?
A. test–retest reliability
B. interscorer reliability
C. equivalent-forms reliability
D. internal consistency reliability
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Internal Consistency Reliability
Difficulty Level: Easy
20. Test–retest reliability refers to ______.
A. whether test items are answered consistently within a test
B. whether scores on the first half of a test correlate with scores on the second half
C. whether scores on a test given at one point in time correlate with scores on the same test at a second testing
D. whether experts agree that a test measures what it is supposed to measure
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Test–Retest Reliability
Difficulty Level: Easy
21. Equivalent-forms reliability refers to ______.
A. the consistency of a group of individuals’ scores on two different forms of a test measuring the same thing
B. whether scores on the first half of a test correlate with scores on the second half
C. whether experts agree that a test measures what it is supposed to measure
D. whether a test is predictive of behavior
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Equivalent-Forms Reliability
Difficulty Level: Easy
22. The Spearman–Brown formula is used to ______.
A. correct validity coefficients
B. correct test–retest reliability coefficients
C. correct equivalent-forms reliability coefficients
D. correct the split-half reliability coefficient
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Internal Consistency Reliability
Difficulty Level: Easy
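Instructor note (item 22): the Spearman–Brown correction adjusts the correlation between two halves of a test upward to estimate the reliability of the full-length test. In its split-half form it is commonly written as
r_corrected = (2 × r_half) / (1 + r_half),
where r_half is the correlation between scores on the two halves.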
23. A general rule of thumb states that coefficient α should be ______.
A. greater than or equal to .05
B. greater than or equal to .70
C. less than .70
D. less than .05
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Internal Consistency Reliability
Difficulty Level: Easy
24. A researcher develops an assessment, and after calculating coefficient α, he finds that the value is .23. Based on this α coefficient, what can you conclude about the assessment?
A. It is reliable.
B. It is valid.
C. It is reliable enough for research.
D. It is unreliable.
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Application
Answer Location: Internal Consistency Reliability
Difficulty Level: Hard
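Instructor note (items 23–24): coefficient α is defined as α = (k / (k - 1)) × (1 - Σs_i² / s_X²), where k is the number of items, s_i² is the variance of item i, and s_X² is the variance of the total scores. A minimal Python sketch of this computation, using a purely hypothetical score matrix (rows = respondents, columns = items), is:

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: rows = respondents, columns = items
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data, for illustration only
scores = [[4, 5, 4, 3, 4],
          [2, 2, 3, 2, 2],
          [5, 4, 5, 5, 4],
          [3, 3, 2, 3, 3],
          [4, 4, 4, 5, 5],
          [1, 2, 1, 2, 1]]
print(round(cronbach_alpha(scores), 2))  # compare the result with the .70 rule of thumb in item 23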
25. As the number of items on a test increases, coefficient α tends to ______.
A. increase
B. decrease
C. stay the same
D. no longer be important
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Comprehension
Answer Location: Internal Consistency Reliability
Difficulty Level: Medium
26. Jane needs to calculate interscorer reliability for her study. She will do this by ______.
A. correlating the judgments of two raters
B. calculating coefficient α
C. calculating a test–retest correlation
D. calculating KR-20
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Application
Answer Location: Interscorer Reliability
Difficulty Level: Hard
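Instructor note (item 26): a minimal sketch of the computation described in item 26, correlating the judgments of two raters on the same set of cases; the ratings below are hypothetical and purely for illustration:

import numpy as np

rater_1 = np.array([3, 5, 2, 4, 4, 1, 5, 3])  # hypothetical ratings by the first scorer
rater_2 = np.array([4, 5, 2, 3, 4, 2, 5, 3])  # hypothetical ratings by the second scorer on the same cases

interscorer_r = np.corrcoef(rater_1, rater_2)[0, 1]  # Pearson correlation between the two sets of judgments
print(round(interscorer_r, 2))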
27. Which of the following is a type of criterion-related validity evidence?
A. concurrent evidence
B. content evidence
C. expert judgments
D. factor analysis
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Comprehension
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Medium
28. Content-related validity evidence refers to ______.
A. judgment that the test is reliable
B. judgment that the test is internally consistent
C. correlation between scores on the test and a criterion behavior
D. judgment that the test adequately samples and represents the construct domain it purports to measure
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Knowledge
Answer Location: Evidence Based on Content
Difficulty Level: Easy
29. Criterion-related validity evidence refers to whether a test ______.
A. adequately samples the content it purports to measure
B. is predictive of a future or concurrent behavior
C. is internally consistent
D. meets the criterion for reliability
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Comprehension
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Medium
30. What is a criterion?
A. an index of content validity
B. the number of participants needed to do a validity study
C. the level of reliability needed for a test to reach an acceptable level
D. the standard or benchmark that you want to predict accurately on the basis of the test scores
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Comprehension
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Medium
31. Concurrent validity evidence refers to ______.
A. the relationship between test scores and criterion scores obtained at the same time
B. whether scores on a test correlated with scores on the same test given a week later
C. whether a test predicts scores on some criterion measure (e.g., success in training)
D. whether a test has an adequate coefficient α
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Comprehension
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Medium
32. A researcher has developed a test and is working on collecting psychometric data so it can be published. Currently, she is collecting data about psychological and social consequences of using this test. She is doing this to ______.
A. establish discriminant validity
B. collect what is known as known groups evidence
C. establish consequential validity
D. establish convergent validity
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Analysis
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Medium
33. Dr. Spock has developed a test to identify students who are gifted. He administered the test to a group of students who were enrolled in a residential school for students gifted in math and science and a group of students who attended regular high schools. He found that students at the residential school scored higher than the students from regular high schools. These results show ______.
A. concurrent evidence
B. known groups evidence
C. content-related evidence
D. factor analytic evidence
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Analysis
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Medium
34. If a test measures a single construct, then ______.
A. the scores on items should correlate with the total score
B. the scores on items should not correlate with the total score
C. the test should not correlate with other measures of the same construct
D. there must be a reliable alternative form
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Analysis
Answer Location: Internal Consistency Reliability
Difficulty Level: Medium
35. A test is homogeneous if ______.
A. the test measures a single construct
B. the test has high test–retest reliability
C. the items do not correlate with the total score
D. there is a reliable alternative form
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Comprehension
Answer Location: Internal Consistency Reliability
Difficulty Level: Medium
36. Which of the following is an example of convergent validity evidence?
A. The test has a high coefficient α.
B. The test has a high test–retest correlation.
C. The test correlates highly with other tests of the same construct.
D. The test has a high KR-20 coefficient.
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Application
Answer Location: Evidence Based on Content
Difficulty Level: Medium
37. Professor X develops a test of emotional intelligence. Which of the following represents convergent and discriminant evidence?
A. The test correlates highly with another test of emotional intelligence and is not correlated with self-efficacy.
B. The test correlates highly with another test of emotional intelligence and is highly correlated with self-efficacy.
C. The test does not correlate with another test of emotional intelligence but does correlate with self-efficacy.
D. The test correlates with neither other tests of emotional intelligence nor tests of self-efficacy.
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Analysis
Answer Location: Evidence Based on Content
Difficulty Level: Medium
38. A test has a coefficient α of .75. This is evidence of ______.
A. interscorer reliability
B. alternate forms reliability
C. internal consistency reliability
D. test–retest reliability
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Comprehension
Answer Location: Internal Consistency Reliability
Difficulty Level: Medium
39. A group of supervisors agrees that the 16 items from a work performance instrument do measure issues important to a particular kind of work. This is evidence of ______.
A. criterion-related validity evidence
B. content-related validity evidence
C. convergent validity evidence
D. discriminant validity evidence
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Application
Answer Location: Evidence Based on Content
Difficulty Level: Hard
40. A validity coefficient is a correlation between ______.
A. scores and a relevant criterion
B. scores on the same test given 2 weeks apart to the same person
C. the scores on even and odd items on a test
D. scores on two separate forms of the same test
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Comprehension
Answer Location: Evidence Based on Content
Difficulty Level: Medium
41. A test correlates .90 with a test of the same construct, but .10 with a test of a different construct. This is evidence of ______.
A. criterion-related validity evidence
B. content-related validity evidence
C. internal consistency reliability
D. convergent and discriminant validity
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Analysis
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Medium
42. Factor analysis is used to determine ______.
A. the number of underlying dimensions measured by a test
B. test–retest reliability
C. interrater reliability
D. alternate forms reliability
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Knowledge
Answer Location: Evidence Based on Internal Structure
Difficulty Level: Easy
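Instructor note (item 42): a minimal sketch of how factor-analytic evidence about internal structure can be explored, using simulated (hypothetical) item data and the common eigenvalue-greater-than-1 rule of thumb to suggest the number of underlying dimensions:

import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(200, 6))             # simulated responses: 200 examinees, 6 items
items[:, :3] += rng.normal(size=(200, 1))     # items 1-3 built to share one underlying dimension
items[:, 3:] += rng.normal(size=(200, 1))     # items 4-6 built to share a second dimension

eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print(np.round(eigenvalues, 2))
print("Suggested number of dimensions:", int((eigenvalues > 1).sum()))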
43. An employment test correlates .90 with work performance after 1 year. This is a type of ______.
A. criterion-related validity evidence
B. content-related validity evidence
C. internal consistency reliability
D. convergent/discriminant validity
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Application
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Hard
44. Intelligence tests typically measure ______.
A. the ability to think abstractly and learn from experience
B. state variables
C. learning styles
D. school achievement ability
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Intelligence Tests
Difficulty Level: Easy
45. The difference between achievement and aptitude tests is that ______.
A. achievement tests focus more on measuring the results of formal learning in a specific domain
B. aptitude test scores are unrelated to school performance
C. achievement test scores are influenced mostly by informal learning
D. achievement test scores do not predict school performance
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Achievement Tests | Aptitude Tests
Difficulty Level: Easy
46. A trait refers to ______.
A. how one acts when alone
B. an individual’s state in one particular situation
C. how someone feels during times of stress
D. enduring characteristics of people
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Knowledge
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Easy
47. A scale of measurement that uses symbols, such as words or numbers, to label, classify, or identify people or objects is which of the following?
A. nominal scale
B. ordinal scale
C. interval scale
D. ratio scale
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Knowledge
Answer Location: Nominal Scale
Difficulty Level: Easy
48. What is the name of the scale of measurement that has a true zero point as well as the characteristics of rank ordering and equal distances between adjacent scale points?
A. nominal scale
B. ordinal scale
C. interval scale
D. ratio scale
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Knowledge
Answer Location: Ratio Scale
Difficulty Level: Easy
49. A scale of measurement that has equal distances between adjacent scale points but does not have a true zero point is known as ______.
A. nominal scale
B. ordinal scale
C. interval scale
D. ratio scale
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Knowledge
Answer Location: Interval Scale
Difficulty Level: Easy
50. A rank-order scale of measurement is ______.
A. nominal scale
B. ordinal scale
C. interval scale
D. ratio scale
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Knowledge
Answer Location: Ordinal Scale
Difficulty Level: Easy
51. The gathering and integrating of data to make educational evaluations is specifically called ______.
A. measurement
B. testing
C. assessment
D. survey
Learning Objective: 7-1: Explain the meaning of measurement.
Cognitive Domain: Knowledge
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Easy
52. Distinguishable, relatively enduring ways in which one individual differs from another are which of the following?
A. reliability
B. traits
C. states
D. systematic error
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Knowledge
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Easy
53. Ms. Green and Mr. Potter conduct behavioral observations for their school. When comparing results on the same children, Ms. Green’s ratings are always more negative than Mr. Potter’s. This is an example of ______.
A. random error
B. traits
C. states
D. systematic error
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Analysis
Answer Location: Overview of Reliability and Validity
Difficulty Level: Hard
54. The specific group for which the test publisher or researcher provides evidence for test validity and reliability is known as the ______.
A. test group
B. benchmark group
C. norming group
D. comparison group
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Comprehension
Answer Location: Using Reliability and Validity Information in Your Research
Difficulty Level: Medium
55. The consistency with which the items on a test measure a single construct is called ______.
A. reliability coefficient
B. test–retest reliability
C. equivalent-forms reliability
D. internal consistency
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Internal Consistency Reliability
Difficulty Level: Easy
56. Consistency of scores on a test over time is known as ______.
A. reliability coefficient
B. test–retest reliability
C. equivalent-forms reliability
D. internal consistency
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Test–Retest Reliability
Difficulty Level: Easy
57. Validity evidence based on the extent to which scores from a test can be used to predict or infer performance on some criterion such as a test or future performance is which of the following?
A. test–retest evidence
B. concurrent evidence
C. criterion-related evidence
D. alternative forms evidence
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Knowledge
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Easy
58. Validity evidence based on the relationship between test scores collected at one point in time and criterion scores obtained at a later time is known as ______.
A. norm-related evidence
B. concurrent evidence
C. predictive evidence
D. discriminant evidence
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Knowledge
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Easy
59. Evidence that the scores on your focal test are unrelated to the scores from other tests that are designed to measure theoretically different constructs is known as ______.
A. criterion-related evidence
B. concurrent evidence
C. convergent evidence
D. discriminant evidence
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Knowledge
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Easy
60. Validity evidence based on the relationship between the focal test scores and independent measures of the same construct is known as ______.
A. criterion-related evidence
B. predictive evidence
C. convergent evidence
D. discriminant evidence
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Knowledge
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Easy
61. Validity evidence based on the relationship between test scores and criterion scores obtained at the same time is known as ______.
A. content-related evidence
B. concurrent evidence
C. predictive evidence
D. alternate forms evidence
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Knowledge
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Easy
62. The relatively permanent patterns that characterize and can be used to classify individuals are called ______.
A. intelligence
B. personality
C. homogeneity
D. IQ
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Personality Tests
Difficulty Level: Easy
63. In order to improve counseling services, a researcher wants to understand what affects high school students’ beliefs and feelings immediately after they have received low or high test scores. This researcher is planning to examine what?
A. traits
B. reliability
C. states
D. validity
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Analysis
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Medium
64. A researcher wants to understand what helps some students from disadvantaged backgrounds to excel throughout their school years while others do not. This researcher is planning to examine what?
A. traits
B. reliability
C. states
D. validity
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Analysis
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Medium
65. When constructing an instrument to measure attitudes toward exercise, a researcher had a panel of experts examine the items generated to make sure the items adequately represented the domain of interest. This would be an example of ______.
A. reliability analysis
B. factor analysis
C. content-related validity evidence
D. validity coefficient
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Application
Answer Location: Evidence Based on Content
Difficulty Level: Hard
66. A researcher designs a study in which newly married couples are given an instrument that the researcher thinks measures compatibility. She designs the study so that she can examine whether the results of the compatibility questionnaire predict the likelihood that the couples will divorce within 5 years of marriage. The results of this study would provide what kind of evidence concerning the validity of the compatibility instrument?
A. concurrent
B. content
C. discriminant
D. predictive
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Application
Answer Location: Evidence Based on Relations to Other Variables
Difficulty Level: Hard
67. If a researcher collects data using an instrument from a group of participants who vary greatly from the instrument’s norming group, what does this mean?
A. The study is only moderately valid.
B. Evidence from the instrument presented in the study is suspect.
C. You can assume that the instrument still worked well.
D. The study had a high interscorer reliability correlation.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Application
Answer Location: Using Reliability and Validity Information in Your Research
Difficulty Level: Hard
68. A research team is developing a test of school readiness. They believe that there are four dimensions of school readiness. They completed a factor analysis on their data. If the test was developed well, how many factors should have been evident in the factor analysis?
A. one
B. three
C. four
D. There is not enough information to tell.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Analysis
Answer Location: Evidence Based on Internal Structure
Difficulty Level: Medium
69. A test developer was looking at the reliability and validity of the test she developed. She found that the coefficient α was .49 and that the convergent validity coefficient was .06. What can she say about the reliability and validity of her test?
A. The test is reliable and valid.
B. The test is reliable but not valid.
C. The test is not reliable but valid.
D. The test is neither reliable nor valid.
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Analysis
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
70. The difference between true scores and observed scores on tests is called ______.
A. error
B. homogeneity
C. heterogeneity
D. reliability
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Comprehension
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Medium
71. Which of the following is a form of internal consistency reliability?
A. test–retest
B. interscorer
C. split-half
D. equivalent forms
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Internal Consistency Reliability
Difficulty Level: Easy
72. It is not a good idea to assess test–retest reliability a few days apart because ______.
A. the coefficient might be inflated because respondents remember how they responded on the first testing
B. the correlation will be smaller because individuals change over time
C. Cronbach’s α will be decreased
D. test–retest reliability must be carried out on the same day
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Application
Answer Location: Test–Retest Reliability
Difficulty Level: Hard
73. Clyde’s personality is being assessed in a laboratory setting. He is asked to interact with new people and his behavior is rated. This is an example of a(n) ______.
A. self-report measure
B. projective measure
C. performance measure
D. achievement measure
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Analysis
Answer Location: Personality Tests
Difficulty Level: Medium
74. Anna is participating in a research project. She is asked to look at ambiguous drawings and tell the researcher what she sees in the drawings. This is an example of a(n) ______.
A. self-report measure
B. projective measure
C. performance measure
D. achievement measure
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Application
Answer Location: Test–Retest Reliability
Difficulty Level: Hard
75. A weekly spelling test is a(n) ______.
A. intelligence test
B. projective test
C. aptitude test
D. achievement test
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Application
Answer Location: Achievement Tests
Difficulty Level: Hard
76. An achievement test that assesses reading, writing, mathematics, spelling, and science would be called a(n) ______.
A. intelligence test
B. achievement battery
C. aptitude test
D. achievement test
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Achievement Tests
Difficulty Level: Easy
77. Aptitude tests measure ______.
A. information that is acquired through the cumulative effects of experience
B. specific information learned in formal and structured environments
C. distinctive patterns of emotions, feelings, and actions
D. ability to think abstractly and to easily learn from experience
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Comprehension
Answer Location: Intelligence Tests | Personality Tests | Achievement Tests | Aptitude Tests
Difficulty Level: Medium
78. Maria is looking to measure mathematics anxiety in elementary school students in her study. What would be the best strategy for coming up with a measure?
A. create her own instrument
B. use her friend’s scale of math anxiety normed on college students
C. consult the Mental Measurements Yearbook or Tests in Print to find appropriate measures for fifth graders
D. use a measure that was found on an Internet site because it is easy to use even though there is no knowledge of its reliability and validity
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Application
Answer Location: Sources of Information About Tests
Difficulty Level: Hard
True/False
1. Another name for coefficient α is Cronbach’s α.
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Internal Consistency Reliability
Difficulty Level: Easy
2. The same test can be valid with one group of people but invalid with a different group of people.
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Comprehension
Answer Location: Validity
Difficulty Level: Medium
3. Evidence based on content is often used in the process of validation.
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Comprehension
Answer Location: Evidence Based on Content
Difficulty Level: Medium
4. Reliability is a necessary but not sufficient condition for validity.
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Comprehension
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
5. Validity is a necessary but not sufficient condition for reliability.
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Comprehension
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
6. Coefficient α tells you the degree to which the items on a test are interrelated.
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Knowledge
Answer Location: Internal Consistency Reliability
Difficulty Level: Easy
7. A reliability coefficient of .4 is sufficient for research purposes.
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Application
Answer Location: Reliability
Difficulty Level: Hard
8. Three researchers view the same interview and develop the same basic set of findings about the data. This would be an example of interscorer reliability.
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Application
Answer Location: Interscorer Reliability
Difficulty Level: Hard
9. A researcher wishing to determine the consistency of scores on a test decided to use split-half reliability. To do so, the researcher put items 1–50 in one group and items 51–100 in the second group. This is the recommended way to divide the items for this type of reliability measure.
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Application
Answer Location: Internal Consistency Reliability
Difficulty Level: Medium
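Instructor note (True/False item 9): an odd-even split followed by the Spearman–Brown correction is a commonly used way to compute split-half reliability. A minimal Python sketch, assuming a hypothetical respondents-by-items score matrix, is:

import numpy as np

def split_half_reliability(item_scores):
    item_scores = np.asarray(item_scores, dtype=float)
    odd_half = item_scores[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
    even_half = item_scores[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...
    r_half = np.corrcoef(odd_half, even_half)[0, 1]
    return (2 * r_half) / (1 + r_half)             # Spearman-Brown correction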
10. An aptitude test focuses on information that is acquired through the informal learning that goes on in life.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Comprehension
Answer Location: Aptitude Tests
Difficulty Level: Medium
11. All valid tests are reliable.
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Comprehension
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
12. All reliable tests are valid.
Learning Objective: 7-4: Explain the meaning of reliability.
Cognitive Domain: Comprehension
Answer Location: Overview of Reliability and Validity
Difficulty Level: Medium
13. Even with the best psychological and educational tests, scores are influenced by error.
Learning Objective: 7-3: Articulate the seven assumptions underlying testing and assessment.
Cognitive Domain: Comprehension
Answer Location: Assumptions Underlying Testing and Assessment
Difficulty Level: Medium
14. Intelligence is typically measured using self-report, projective, and performance measures.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Comprehension
Answer Location: Intelligence Tests
Difficulty Level: Medium
15. Coefficient α can provide reliability and validity evidence.
Learning Objective: 7-5: Explain the characteristics of each of the methods for computing reliability.
Cognitive Domain: Comprehension
Answer Location: Internal Consistency Reliability
Difficulty Level: Hard
16. The predictive ability of preschool tests is as good as that of tests used for adults.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Preschool Assessment Tests
Difficulty Level: Easy
17. Achievement tests are used to predict success in graduate and professional schools.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Achievement Tests | Aptitude Tests
Difficulty Level: Easy
18. Aptitude tests measure how well people have learned what they have been taught.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Achievement Tests | Aptitude Tests
Difficulty Level: Easy
19. Validation of a test ends after a test is published.
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Comprehension
Answer Location: Validity
Difficulty Level: Medium
20. Contemporary views of validity consider it a unitary concept.
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Comprehension
Answer Location: Validity
Difficulty Level: Medium
21. Validity depends on the strength of the inferences that can be drawn from data surrounding an assessment.
Learning Objective: 7-6: Explain the meaning of validity and validity evidence.
Cognitive Domain: Comprehension
Answer Location: Validity
Difficulty Level: Medium
22. A useful source for information about published tests and assessments is the Mental Measurements Yearbook.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Knowledge
Answer Location: Sources of Information About Tests
Essay
1. Describe the characteristics of the four scales of measurement and give an example of each.
Learning Objective: 7-2: Explain the different scales of measurement, including the type of information communicated by each one.
Cognitive Domain: Knowledge
Answer Location: Scales of Measurement
Difficulty Level: Easy
2. Compare and contrast reliability and validity. Be sure to address the relationship between the two.
Learning Objective: Multiple--all objectives related to reliability and validity
Cognitive Domain: Analysis
Answer Location: Overview of Reliability and Validity | Reliability | Validity
Difficulty Level: Hard
3. How should you use information on reliability and validity in your research?
Learning Objective: Multiple—all objectives related to reliability and validity
Cognitive Domain: Comprehension
Answer Location: Using Reliability and Validity Information in Your Research
Difficulty Level: Medium
4. Choose one type of reliability and one type of validity. For each one define it and give an example about how you would establish that type of reliability or validity.
Learning Objective: Multiple—all objectives related to reliability and validity
Cognitive Domain: Application
Answer Location: Overview of Reliability and Validity | Reliability | Validity
Difficulty Level: Medium
5. Compare and contrast achievement and aptitude tests.
Learning Objective: 7-8: Identify the different types of standardized tests and the sources of information on these tests.
Cognitive Domain: Comprehension
Answer Location: Aptitude Tests | Achievement Tests
Difficulty Level: Medium
6. In carrying out research studies, researchers often have to decide between using already existing measures and creating their own measures. Based on knowledge of the reliability and validity of assessments, what are the pros and cons of creating your own measures versus using one that has already been developed?
Learning Objective: Multiple—all objectives related to reliability and validity
Cognitive Domain: Application
Answer Location: Overview of Reliability and Validity | Reliability | Validity
Difficulty Level: Hard
7. Describe the concept of consequential validity and discuss the types of evidence that would be informative about the consequential validity of an assessment.
Learning Objective: 7-7: Explain the different methods of collecting validity evidence.
Cognitive Domain: Comprehension
Answer Location: Validity
Difficulty Level: Medium