Chapter 5: Sampling

TEST BANK

MULTIPLE CHOICE

  1. Another name for convenience sampling is: (5-13)
     A. Judgmental sampling
     B. Purposive sampling
     C. Availability sampling
     D. Simple random sampling

ANS [C]

LOC: Nonprobability

TIP: Availability Sampling

[LO 3]

COG [Knowledge]

DIF [Easy]

  2. If the probability of selection cannot be determined before a sample is drawn, what type of sampling procedure must be used? (5-13)
     A. Lottery procedure
     B. Random number table
     C. Non-probability sampling
     D. Convenience sampling

ANS [C]

LOC: Sampling Methods

TIP: Nonprobability Sampling Methods

[LO 3]

COG [Comprehension]

DIF [Medium]

  3. The list of all elements of a population from which a sample is actually selected is known as the (5-2)
     A. Sample frame
     B. Population
     C. Sampling unit
     D. Sampling interval

ANS [A]

LOC: Sample Planning

TIP: Define Sample Components and the Population

[LO 2]

COG [Knowledge]

DIF [Easy]

  4. Samantha asked the registrar at her school to provide a list of all criminal justice majors. She then selected every fifth name on the list. What sampling method did she use? (5-10)
     A. Simple random
     B. Stratified random
     C. Systematic random
     D. Convenience

ANS [C]

LOC: Probability Sampling Methods

TIP: Systematic Random Sampling

[LO 4]

COG [Analysis]

DIF [Hard]

  5. Which of the following is FALSE about a probability sample? (5-7)
     A. The probability of selection is known for all elements
     B. The sampling frame must be fully identified
     C. Chance determines the selection of elements
     D. Elements are chosen haphazardly

ANS [D]

LOC: Sampling Methods

TIP: Sampling Methods

[LO 3]

COG [Application]

DIF [Medium]

  6. The individual entities of the population whose characteristics are to be measured are the (5-2)
     A. sample
     B. enumeration units
     C. target population
     D. elements

ANS [D]

LOC: Sample Planning

TIP: Sample Planning

[LO 2]

COG [Knowledge]

DIF [Easy]

  7. A conclusion based on a sample that holds true for the larger population from which the sample was drawn is known as (5-3)
     A. Cross-population generalizability
     B. Sample generalizability
     C. Sampling error
     D. None of the above

ANS [B]

LOC: Sample Planning

TIP: Evaluate Generalizability

[LO 3]

COG [Knowledge]

DIF [Easy]

  8. The larger the sampling error, the _______ representative the sample is of the population. (5-4)
     A. More
     B. Less
     C. Moderately
     D. Both A and C

ANS [B]

LOC: Sample Planning

TIP: Evaluate Generalizability

[LO 6]

COG [Analysis]

DIF [Hard]

  9. The most important distinction made about samples is whether they are based on a ___________ or _____________ sampling method. (5-6)
     A. Haphazard; Ineligible
     B. Probability; Nonprobability
     C. Realistic; Unrealistic
     D. Biased; Unbiased

ANS [B]

LOC: Sampling Methods

TIP: Sampling Methods

[LO 3]

COG [Comprehension]

DIF [Easy]

  10. Probability methods randomly select elements and therefore have no (5-8)
     A. Systematic bias
     B. Non-systematic bias
     C. Random bias
     D. Homogeneous bias

ANS [A]

LOC: Sampling Methods

TIP: Probability Sampling Methods

[LO 3]

COG [Knowledge]

DIF [Easy]

  11. Overrepresentation of some population characteristic in sampling is (5-8)
     A. Non-random bias
     B. Systematic bias
     C. Expected
     D. Homogeneous

ANS [B]

LOC: Sampling Methods

TIP: Probability Sampling Methods

[LO 3]

COG [Knowledge]

DIF [Easy]

  12. A random number table simplifies the process of choosing cases on the basis of (5-9)
     A. Chance
     B. A specified characteristic
     C. The size of the population
     D. A sampling interval

ANS [A]

LOC: Probability Sampling Methods

TIP: Simple Random Sampling

[LO 2]

COG [Knowledge]

DIF [Easy]

  13. A method of sampling in which sample elements are returned to the sampling frame after selection so they may be sampled again is (5-9)
     A. Double-dipping sampling
     B. Replacement sampling
     C. Selection sampling
     D. None of the above

ANS [B]

LOC: Probability Sampling Methods

TIP: Simple Random Sampling

[LO 2]

COG [Knowledge]

DIF [Easy]

  14. In a variant of simple random sampling, you would select the first element randomly, then (5-10)
     A. Every nth element thereafter
     B. Every fourth element thereafter
     C. Chosen based on a random selection method thereafter
     D. Chosen within an unknown interval

ANS [A]

LOC: Probability Sampling Methods

TIP: Systematic Random Sampling

[LO 4]

COG [Knowledge]

DIF [Easy]

  15. When a sequence of elements in a list to be sampled varies in some regular, periodic fashion, it is affected by (5-10)
     A. Randomness
     B. Biasocity
     C. Periodicity
     D. None of the above

ANS [C]

LOC: Probability Sampling Methods

TIP: Systematic Random Sampling

[LO 4]

COG [Comprehension]

DIF [Easy]

  16. The method of sampling in which sample elements are selected separately from population strata identified in advance by the researcher is (5-11)
     A. Simple random sampling
     B. Systematic random sampling
     C. Stratified random sampling
     D. Convenience random sampling

ANS [C]

LOC: Probability Sampling Methods

TIP: Stratified Random Sampling

[LO 4]

COG [Comprehension]

DIF [Medium]

  17. The sampling method where elements are selected from strata in exact proportion to their representation in the population is (5-11)
     A. Stratified random sampling
     B. Systematic random sampling
     C. Proportionate stratified sampling
     D. Disproportionate stratified sampling

ANS [C]

LOC: Probability Sampling Methods

TIP: Stratified Random Sampling

[LO 4]

COG [Comprehension]

DIF [Medium]

  18. Because multistage cluster sampling requires less prior information about the size of strata in the population, it may be useful when (5-12)
     A. A sampling frame is not available
     B. There is no list available of a population to be studied
     C. The population is spread across a wide geographic area
     D. All of the above

ANS [D]

LOC: Probability Sampling Methods

TIP: Multistage Cluster Sampling

[LO 4]

COG [Analysis]

DIF [Medium]

  19. A common nonprobability sampling method is (5-13)
     A. Systematic sampling
     B. Snowball sampling
     C. Availability sampling
     D. Both B and C are nonprobability sampling methods

ANS [D]

LOC: Sampling Methods

TIP: Nonprobability Sampling Methods

[LO 4]

COG [Comprehension]

DIF [Easy]

  20. In availability sampling, elements are selected (5-13)
     A. Because they will be generalizable
     B. Because they are easy to find
     C. On the basis of naturally occurring aggregates of elements of the population
     D. Because they are in proportion to the population

ANS [B]

LOC: Nonprobability Sampling Methods

TIP: Availability Sampling

[LO 4]

COG [Comprehension]

DIF [Medium]

  21. A nonprobability sampling method in which elements are selected to ensure that the sample represents certain characteristics in proportion to their prevalence in the population is (5-14)
     A. Proportionate stratified sampling
     B. Quota sampling
     C. Purposive sampling
     D. Snowball sampling

ANS [B]

LOC: Nonprobability Sampling Methods

TIP: Quota Sampling

[LO 4]

COG [Application]

DIF [Medium]

  22. When a researcher is studying families, the researcher is using what units of analysis? (5-19)
     A. Individual
     B. Family
     C. Group
     D. Corporate

ANS [C]

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: Individual and Group Units of Analysis

[LO 6]

COG [Comprehension]

DIF [Easy]

  23. A researcher who draws conclusions about individual-level processes from group-level data is constructing (5-19)
     A. Reductionism
     B. An ecological fallacy
     C. An individual propensity fallacy
     D. A reductionist fallacy

ANS [B]

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: The Ecological Fallacy and Reductionism

[LO 6]

COG [Comprehension]

DIF [Medium]

  24. Quota sampling is intended to overcome ____________ sampling’s biggest downfall: the likelihood that the sample will only consist of who or what is available without regard for similarity to the population. (5-14)
     A. Snowball
     B. Purposive
     C. Availability
     D. Probability

ANS [C]

LOC: Nonprobability Sampling Methods

TIP: Availability Sampling

[LO 3]

COG [Application]

DIF [Medium]

  25. In ___________ sampling, elements are often selected due to their unique position in a population. (5-15)
     A. Systematic
     B. Purposive
     C. Probability
     D. Stratified

ANS [B]

LOC: Nonprobability Sampling Methods

TIP: Purposive or Judgment Sampling

[LO 3]

COG [Comprehension]

DIF [Medium]

  26. Rubin and Rubin (1995) suggest three guidelines for selecting informants when designing a purposive sampling strategy. What is one of these guidelines? (5-15)
     A. Informants should be knowledgeable about the cultural arena or situation or experience being studied
     B. Informants are selected in advance by the researcher
     C. Informants should not be representative of the range of points of view
     D. Elements are selected to ensure that the sample represents certain characteristics in proportion to their prevalence in the population

ANS [A]

LOC: Nonprobability Sampling

TIP: Purposive or Judgment Sampling

[LO 3]

COG [Analysis]

DIF [Hard]

  27. A sampling technique in which all elements in the population are differentiated on the basis of their value on some relevant characteristic and then sorted into strata is (5-16)
     A. Stratified random sampling
     B. Purposive sampling
     C. Quota sampling
     D. Snowball sampling

ANS [A]

LOC: Probability Sampling Methods

TIP: Stratified Random Sampling

[LO 4]

COG [Application]

DIF [Hard]

  28. If you complete a survey in a magazine entitled “What Do You Think About the Death Penalty for Teenagers?” and mail it in to the publisher, you have participated in survey research (5-13)
     A. With a probability sample
     B. With an availability sample
     C. With a systematic random sample
     D. None of the above

ANS [B]

LOC: Nonprobability Sampling Methods

TIP: Availability Sampling

[LO 5]

COG [Comprehension]

DIF [Easy]

  29. In which type of sampling method is the probability of selection of every case known but unequal between strata? (5-11)
     A. Availability
     B. Systematic
     C. Proportionate stratified
     D. Disproportionate stratified

ANS [D]

LOC: Probability Sampling Methods

TIP: Stratified Random Sampling

[LO 4]

COG [Comprehension]

DIF [Medium]

  30. Ways to increase the generalizability of a qualitative research sample include (5-17)
     A. Choosing sites randomly
     B. Choosing several heterogeneous sites rather than only one site
     C. Researchers do not need to worry about increasing the generalizability of a qualitative research sample
     D. It is not possible to measure generalizability in a qualitative research study

ANS [B]

LOC: Nonprobability Sampling Methods

TIP: Generalizability in Qualitative Research

[LO 6]

COG [Application]

DIF [Hard]

  31. In most social science research, including criminological studies, the units of analysis are (5-18)
     A. Families
     B. Isolated
     C. Individuals
     D. Groups

ANS [C]

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: Individual and Group Units of Analysis

[LO 6]

COG [Knowledge]

DIF [Easy]

TRUE/FALSE

  1. A simple random sample of students could be achieved by stopping every other student who enters the library. (5-9)
     A. TRUE
     B. FALSE

ANS [B]

LOC: Probability Sampling Methods

TIP: Simple Random Sampling

[LO 4]

COG [Knowledge]

DIF [Medium]

  2. Reductionism is an error in reasoning that occurs when incorrect conclusions about group-level processes are based on individual-level data. (5-20)
     A. TRUE
     B. FALSE

ANS [A]

TIP: The Ecological Fallacy and Reductionism

[LO 6]

COG [Knowledge]

DIF [Medium]

  3. Cross-population generalizability is when the findings from a study of one population can be generalized to another population. (5-2)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Sample Planning

TIP: Evaluate Generalizability

[LO 3]

COG [Synthesis]

DIF [Easy]

  4. The set of individuals or other entities to which we want to be able to generalize our findings is the population. (5-3)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Sample Planning

TIP: Define Sample Components and the Population

[LO 3]

COG [Knowledge]

DIF [Easy]

  5. A list of students that is obtained from the registrar’s office from which elements of the population are selected is the enumeration unit. (5-2)
     A. TRUE
     B. FALSE

ANS [B]

LOC: Sample Planning

TIP: Define Sample Components and the Population

[LO 2]

COG [Application]

DIF [Hard]

  6. When the researcher can show that findings about one group hold true for another group, the study has cross-population generalizability. (5-3)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Sample Planning

TIP: Evaluate Generalizability

[LO 3]

COG [Knowledge]

DIF [Easy]

  7. Sample generalizability and cross-population generalizability mean the same thing. (5-3)
     A. TRUE
     B. FALSE

ANS [B]

LOC: Sample Planning

TIP: Evaluate Generalizability

[LO 3]

COG [Knowledge]

DIF [Easy]

  8. The quality of a sample cannot be evaluated if it is unclear what population it is supposed to represent. (5-17)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Sampling Methods

TIP: Lessons About Sample Quality

[LO 2]

COG [Analysis]

DIF [Medium]

  9. Choosing sites on the basis of their fit with a typical situation is far preferable to choosing on the basis of convenience. (5-17)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Sampling Methods

TIP: Generalizability in Qualitative Research

[LO 2]

COG [Knowledge]

DIF [Medium]

  10. Most social science research, including criminological studies, uses groups as the units of analysis. (5-18)
     A. TRUE
     B. FALSE

ANS [B]

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: Individual and Group Units of Analysis

[LO 6]

COG [Knowledge]

DIF [Easy]

  11. The cases about which measures actually are obtained in a sample are known as units of observation. (5-19)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: The Ecological Fallacy and Reductionism

[LO 6]

COG [Comprehension]

DIF [Hard]

  12. A reductionist fallacy is an error in reasoning that occurs when incorrect conclusions about group-level processes are based on individual-level data. (5-20)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: The Ecological Fallacy and Reductionism

[LO 6]

COG [Comprehension]

DIF [Medium]

  13. A researcher who draws conclusions about individual-level processes from group-level data is constructing an ecological fallacy. (5-20)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: The Ecological Fallacy and Reductionism

[LO 6]

COG [Analysis]

DIF [Hard]

  14. Snowball sampling is useful for hard-to-reach interconnected populations where some members of the population do not know each other. (5-16)
     A. TRUE
     B. FALSE

ANS [B]

LOC: Nonprobability Sampling Methods

TIP: Snowball Sampling

[LO 5]

COG [Comprehension]

DIF [Medium]

  15. In proportionate stratified sampling, the proportion of each stratum that is included in the sample is intentionally varied from what it is in the population. (5-11)
     A. TRUE
     B. FALSE

ANS [B]

LOC: Probability Sampling Methods

TIP: Stratified Random Sampling

[LO 4]

COG [Analysis]

DIF [Hard]

  16. Systematic random sampling is less time-consuming than simple random sampling. (5-10)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Probability Sampling Methods

TIP: Systematic Random Sampling

[LO 4]

COG [Application]

DIF [Medium]

  17. In replacement sampling, each element is returned to the sampling frame from which it is selected so that it may be sampled again. (5-9)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Probability Sampling Methods

TIP: Simple Random Sampling

[LO 4]

COG [Knowledge]

DIF [Medium]

  18. Simple random sampling requires a procedure that generates numbers strictly on the basis of chance. (5-9)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Probability Sampling Methods

TIP: Simple Random Sampling

[LO 4]

COG [Knowledge]

DIF [Medium]

  19. A naturally occurring mixed aggregate of elements of a population is a nonrepresentative application. (5-12)
     A. TRUE
     B. FALSE

ANS [B]

LOC: Probability Sampling Methods

TIP: Multistage Cluster Sampling

[LO 4]

COG [Application]

DIF [Medium]

  20. Rubin and Rubin (1995) suggest continuing to select interviewees until you can pass two tests (completeness and saturation) in order to ensure that a purposive sample adequately represents the setting or issues being studied. (5-15)
     A. TRUE
     B. FALSE

ANS [A]

LOC: Nonprobability Sampling

TIP: Availability Sampling

[LO 5]

COG [Evaluation]

DIF [Medium]

ESSAY

  1. What are the circumstances that make sampling unnecessary? Why are those circumstances rare?

A representative sample is a sample that looks similar to the population from which it was selected in all respects that are potentially relevant to the study. The distribution of characteristics among the elements of a representative sample is the same as the distribution of those characteristics among the total population. In an unrepresentative sample, some characteristics are overrepresented or underrepresented.

What about people? Certainly all people are not identical—nor are animals, in many respects. Nonetheless, if we are studying physical or psychological processes that are the same among all people, sampling is not needed to achieve generalizable findings. Various types of psychologists, including social psychologists, often conduct experiments on college students to learn about processes that they think are identical for all individuals. They believe that most people will have the same reactions as the college students if they experience the same experimental conditions. Field researchers who observe group processes in a small community sometimes make the same assumption.

There is a potential problem with this assumption, however. There is no way to know whether the processes being studied are identical for all people. In fact, experiments can give different results depending on the type of people studied or the conditions for the experiment. Milgram’s (1965) experiments on obedience to authority (discussed in Chapter 3) illustrate this point very well. Recall that Milgram concluded that people are very obedient to authority. But were these results generalizable to all men, to men in the United States, or to men in New Haven? We can have confidence in these findings because similar results were obtained in many replications of the Milgram experiments when the experimental conditions and subjects were similar to those studied by Milgram.

Accurately generalizing the results of experiments and of participant observation is risky because such research often studies a small number of people who do not represent a particular population. Researchers may put aside concerns about generalizability when they observe the social dynamics of specific clubs or college dorms or a controlled experiment that tests the effect of, say, a violent movie on feelings for others. Nonetheless, we should still be cautious about generalizing the results of such studies.

The important point is that social scientists rarely can skirt the problem of demonstrating the generalizability of their findings. If a small sample has been studied in an experiment or field research project, the study should be replicated in different settings or, preferably, with a representative sample of the population for which the generalizations are sought (see Exhibit 5.3).

The people in our social world are just too diverse to be considered identical units. Social psychological experiments and small field studies have produced good social science, but they need to be replicated in other settings with other subjects to claim any generalizability. Even when we believe that we have uncovered basic social processes in a laboratory experiment or field observation, we must seek confirmation in other samples and other research.

LOC: Sample Planning

TIP: Assess Population Diversity

[LO 1]

COG [Evaluation]

DIF [Medium]

  2. What is the relationship between the desired sample, the obtained sample, the sampling frame, and sample quality?
  • We cannot evaluate the quality of a sample if we do not know what population it is supposed to represent. If the population is unspecified because the researchers were never clear about just what population they were trying to sample, then we can safely conclude that the sample itself is no good.
  • We cannot evaluate the quality of a sample if we do not know exactly how cases in the sample were selected from the population. If the method was specified, we then need to know whether cases were selected in a systematic fashion or on the basis of chance. In any case, we know that a haphazard method of sampling (as in person-on-the-street interviews) undermines generalizability.
  • Sample quality is determined by the sample actually obtained, not just by the sampling method itself. That is, findings are only as generalizable as the sample from which they are drawn. If many of the people (or other elements) selected for our sample do not respond or participate in the study, even though they have been selected for the sample, generalizability is compromised.
  • We need to be aware that even researchers who obtain very good samples may talk about the implications of their findings for some group that is larger than or just different from the population they actually sampled. For example, findings from a representative sample of students in one university often are discussed as if they tell us about university students in general. Maybe they do; the problem is, we just don’t know.

LOC: Sampling Methods

TIP: Lessons About Sample Quality

[LO 2]

COG [Evaluation]

DIF [Medium]

  3. Define and distinguish probability and non-probability sampling. What is the relationship between the techniques and sample generalizability?

Probability sampling methods rely on a random selection procedure. In principle, this is the same as flipping a coin to decide which person wins and which one loses. Heads and tails are equally likely to turn up in a coin toss, so both persons have an equal chance to win. That chance, or the probability of selection, is 1 out of 2, or .5.

Flipping a coin is a fair way to select 1 of 2 people because the selection process harbors no systematic bias. You might win or lose the coin toss, but you know that the outcome was due simply to chance, not to bias (unless your opponent tossed a two-headed coin!). For the same reason, rolling a six-sided die is a fair way to choose 1 of 6 possible outcomes (the odds of selection are 1 out of 6, or .17). Similarly, state lotteries use a random process to select winning numbers. Thus, the odds of winning a lottery—the probability of selection—are known even though they are very small (perhaps 1 out of 1 million) compared with the odds of winning a coin toss. As you can see, the fundamental strategy in probability sampling is the random selection of elements into the sample. When a sample is randomly selected from the population, every element has a known and independent chance of being selected into the sample.

There is a natural tendency to confuse the concept of probability, in which cases are selected only on the basis of chance, with a haphazard method of sampling. On first impression, leaving things up to chance seems to imply the absence of control over the sampling method. But to ensure that nothing but chance influences the selection of cases, the researcher must actually proceed very methodically and leave nothing to chance except the selection of the cases themselves. The researcher must carefully follow controlled procedures if a purely random process is to occur. In fact, when reading about sampling methods, do not assume that a random sample was obtained just because the researcher used a random selection method at some point in the sampling process. Look for these two particular problems: selecting elements from an incomplete list of the total population and failing to obtain an adequate response rate (say, only 45% of the people who were asked to participate actually agreed).

If the sampling frame, or list from which the elements of the population are selected, is incomplete, a sample selected randomly from the list will not be random. How can it be when the sampling frame fails to include every element in the population? Even for a simple population, such as a university’s student body, the registrar’s list is likely to be at least a bit out of date at any given time. For example, some students will have dropped out, but their status will not yet be officially recorded. Although you may judge the amount of error introduced in this particular situation to be negligible, the problems are greatly compounded for a larger population. The sampling frame for a city, state, or nation is always likely to be incomplete because of constant migration into and out of the area. Even unavoidable omissions from the sampling frame can bias a sample against particular groups within the population.

A very inclusive sampling frame may still yield systematic bias if many sample members cannot be contacted or refuse to participate. Nonresponse is a major hazard in survey research because individuals who do not respond to a survey are likely to differ systematically from those who take the time to participate. You should not assume that findings from a randomly selected sample will be generalizable to the population from which the sample was selected if the rate of nonresponse is considerable (certainly if it is much above 30%).

LOC: Sampling Methods

TIP: Sampling Methods

[LO 3]

COG [Analysis]

DIF [Hard]

  4. Define and explain the major types of probability sampling.

Simple random sampling requires a procedure that generates numbers or identifies cases strictly on the basis of chance. As you know, flipping a coin and rolling a die can be used to identify cases strictly on the basis of chance, but these procedures are not very efficient tools for drawing a sample. A random number table, which can be found on many websites, simplifies the process considerably. The researcher numbers all the elements in the sampling frame and then uses a systematic procedure for picking corresponding numbers from the random number table. Alternatively, a researcher may use a lottery procedure. Each case number is written on a small card, and then the cards are mixed up and the sample selected from the cards.

When a large sample must be generated, these procedures are very cumbersome. Fortunately, a computer program can easily generate a random sample of any size. The researcher must first number all the elements to be sampled (the sampling frame) and then run the computer program to generate a random selection of the numbers within the desired range. The elements represented by these numbers are the sample.
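
To make the computer-based procedure concrete, here is a minimal sketch in Python (an illustrative language choice, not part of the text). It assumes the numbered sampling frame is simply a list, and the `random` module stands in for the random number table or lottery procedure:

```python
import random

# Hypothetical numbered sampling frame: element IDs 1 through 17,000.
sampling_frame = list(range(1, 17001))

random.seed(20)  # fixed seed so the draw can be reproduced

# Generate a random selection of 500 numbers within the desired range;
# the elements represented by these numbers are the sample.
sample = random.sample(sampling_frame, k=500)

print(len(sample), sample[:5])
```

Because every element in the frame has the same chance of being drawn, this reproduces the simple random sampling logic described in this section.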

As the percentage of the population that has only cell phones has increased (40% in 2013), it has become essential to explicitly sample cell phone numbers as well as landlines. Those who use cell phones only tend to be younger, male, and single and are more likely to be black or Hispanic. As a result, failing to include cell phone numbers in a phone survey can introduce bias (Christian, Keeter, Purcell, & Smith, 2010). In the National Intimate Partner and Sexual Violence Survey (NISVS) conducted by the Centers for Disease Control and Prevention (CDC), both landline and cell phone databases of adult U.S. residents were selected through a random digit dialing (RDD) random sampling method (Black et al., 2011). You will learn more about this survey in Chapter 7.

Simple random sampling is a method of sampling in which every sample element is selected only on the basis of chance, through a random process.

Random number table is a table containing lists of numbers that are ordered solely on the basis of chance; it is used for drawing a random sample.

Random digit dialing (RDD) is the random dialing by a machine of numbers within designated phone prefixes, which creates a random sample for phone surveys.

Organizations that conduct phone surveys often draw random samples with RDD. A machine dials random numbers within the phone prefixes corresponding to the area in which the survey is to be conducted. RDD is particularly useful when a sampling frame is not available. The researcher simply replaces any inappropriate numbers (e.g., those no longer in service or for businesses) with the next randomly generated phone number.
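
As a rough illustration of the RDD logic only, here is a hedged sketch; the prefixes and the screening rule are hypothetical, and real RDD systems are considerably more sophisticated:

```python
import random

def random_digit_dial(prefixes, n, seed=0):
    """Generate n random phone numbers within designated prefixes.
    A simplified sketch of RDD; a real implementation would also screen out
    and replace numbers that are out of service or belong to businesses."""
    rng = random.Random(seed)
    numbers = []
    while len(numbers) < n:
        prefix = rng.choice(prefixes)      # e.g., a hypothetical "302-555"
        last_four = rng.randint(0, 9999)   # random digits for the line number
        numbers.append(f"{prefix}-{last_four:04d}")
    return numbers

print(random_digit_dial(["302-555", "410-555"], n=5))
```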

The probability of selection in a true simple random sample is equal for each element. If a sample of 500 is selected from a population of 17,000 (i.e., a sampling frame of 17,000), then the probability of selection for each element is 500/17,000, or .03. Every element has an equal and independent chance of being selected, just like the odds in a toss of a coin (1/2) or a roll of a die (1/6). Simple random sampling can be done either with or without replacement sampling. In replacement sampling, each element is returned to the sampling frame from which it is selected so that it may be sampled again. In sampling without replacement, each element selected for the sample is then excluded from the sampling frame. In practice, it makes no difference whether sampled elements are replaced after selection, as long as the population is large and the sample is to contain only a small fraction of the population.

Replacement sampling is a method of sampling in which sample elements are returned to the sampling frame after being selected, so they may be sampled again. Random samples may be selected with or without replacement.
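
A small sketch of the with/without replacement distinction, using a toy frame of 100 elements (the sizes are illustrative, not from the text):

```python
import random

rng = random.Random(11)
frame = list(range(1, 101))  # toy sampling frame of 100 numbered elements

# Without replacement: an element, once selected, cannot be drawn again.
without_replacement = rng.sample(frame, k=10)

# Replacement sampling: each selected element is returned to the frame,
# so the same element may be sampled more than once.
with_replacement = [rng.choice(frame) for _ in range(10)]

print(sorted(without_replacement))  # all distinct
print(sorted(with_replacement))     # duplicates are possible
```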

In the CDC’s NISVS study mentioned above, noninstitutionalized (e.g., not in nursing homes, prisons, and so on) English- and/or Spanish-speaking residents aged 18 and older were randomly selected through an RDD sampling method in 2010. A total of 9,970 women and 8,079 men were selected; approximately 45% of the interviews were conducted by landline and 55% by cell phone. The final sample represented the U.S. population very well. For example, the proportion of the sample by gender, race/ethnicity, and age in the NISVS sample was very close to the proportions for the U.S. population as a whole.

Systematic random sampling is a variant of simple random sampling and is a little less time-consuming. When you systematically select a random sample, the first element is selected randomly from a list or from sequential files, and then every nth element is systematically selected thereafter. This is a convenient method for drawing a random sample when the population elements are arranged sequentially. It is particularly efficient when the elements are not actually printed (i.e., there is no sampling frame) but instead are represented by folders in filing cabinets.

Systematic random sampling requires three steps:

1. The total number of cases in the population is divided by the number of cases required for the sample. This division yields the sampling interval, the number of cases from one sampled case to another. If 50 cases are to be selected out of 1,000, the sampling interval is 20 (1,000/50 = 20); every 20th case is selected.

2. A number from 1 to 20 (the sampling interval) is selected randomly. This number identifies the first case to be sampled, counting from the first case on the list or in the files.

3. After the first case is selected, every nth case is selected for the sample, where n is the sampling interval. If the sampling interval is not a whole number, the size of the sampling interval is systematically varied to yield the proper number of cases for the sample. For example, if the sampling interval is 30.5, the sampling interval alternates between 30 and 31.
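
The three steps translate directly into a short sketch (Python here is purely illustrative; the frame of 1,000 cases and the sample of 50 come from the example above, and the fractional-interval handling follows step 3):

```python
import random

def systematic_sample(frame, sample_size, seed=0):
    """Select a random start within the first sampling interval, then take
    every nth case thereafter. Fractional intervals (e.g., 30.5) are handled
    by letting the effective step alternate, as described in step 3."""
    rng = random.Random(seed)
    interval = len(frame) / sample_size        # step 1: 1,000 / 50 = 20
    start = rng.random() * interval            # step 2: random start in the first interval
    positions = [int(start + i * interval) for i in range(sample_size)]  # step 3
    return [frame[p] for p in positions]

frame = list(range(1, 1001))                   # hypothetical list of 1,000 cases
sample = systematic_sample(frame, sample_size=50)
print(len(sample), sample[:5])
```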

In almost all sampling situations, systematic random sampling yields what is essentially a simple random sample. The exception is a situation in which the sequence of elements is affected by periodicity—that is, the sequence varies in some regular, periodic pattern. The list or folder device from which the elements are selected must be truly random in order to avoid sampling bias. For example, we could not have a list of convicted felons sorted by offense type, age, or some other characteristic of the population. If the list is sorted in any meaningful way, this will introduce bias to the sampling process, and the resulting sample is not likely to be representative of the population.

Systematic random sampling is a method of sampling in which sample elements are selected from a list or from sequential files, with every nth element being selected after the first element is selected randomly within the first interval.

Sampling interval is the number of cases from one sampled case to another in a systematic random sample.

Periodicity is a sequence of elements (in a list to be sampled) that varies in some regular, periodic pattern.

Although all probability sampling methods use random sampling, some add steps to the process to make sampling more efficient or easier. Samples are easier to collect when they require less time, money, or prior information.

Stratified random sampling uses information known about the total population prior to sampling to make the sampling process more efficient. First, all elements in the population (i.e., in the sampling frame) are differentiated on the basis of their value on some relevant characteristic. This sorting step forms the sampling strata. Next, elements are sampled randomly from within these strata. For example, race may be the basis for distinguishing individuals in some population of interest. Within each racial category selected for the strata, individuals are then sampled randomly.

Why is this method more efficient than drawing a simple random sample? Well, imagine that you plan to draw a sample of 500 from an ethnically diverse neighborhood. The neighborhood population is 15% African American, 10% Hispanic, 5% Asian, and 70% Caucasian. If you drew a simple random sample, you might end up with disproportionate numbers of each group. But if you created sampling strata based on race and ethnicity, you could randomly select cases from each stratum: 75 African Americans (15% of the sample), 50 Hispanics (10%), 25 Asians (5%), and 350 Caucasians (70%). By using proportionate stratified sampling, you would eliminate any possibility of error in the sample’s distribution of ethnicity. Each stratum would be represented exactly in proportion to its size in the population from which the sample was drawn.

In disproportionate stratified sampling, the proportion of each stratum that is included in the sample is intentionally varied from what it is in the population. In the case of the sample stratified by ethnicity, you might select equal numbers of cases from each racial or ethnic group: 125 African Americans (25% of the sample), 125 Hispanics (25%), 125 Asians (25%), and 125 Caucasians (25%). In this type of sample, the probability of selection of every case is known but unequal between strata. You know what the proportions are in the population, so you can easily adjust your combined sample accordingly. For instance, if you want to combine the ethnic groups and estimate the average income of the total population, you would have to weight each case in the sample. The weight is a number you multiply by the value of each case based on the stratum it is in. For example, you would multiply the incomes of all African Americans in the sample by 0.6 (75/125), the incomes of all Hispanics by 0.4 (50/125), and so on. Weighting in this way reduces the influence of the oversampled strata and increases the influence of the undersampled strata to just what they would have been if pure probability sampling had been used.
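
A sketch of both designs using the neighborhood example above; the frame itself is simulated for illustration, `random.sample` does each within-stratum draw, and the weights reproduce the 75/125 = 0.6 arithmetic:

```python
import random
from collections import defaultdict

random.seed(3)

# Simulated frame of 10,000 residents in the proportions given above:
# 15% African American, 10% Hispanic, 5% Asian, 70% Caucasian.
labels = (["African American"] * 1500 + ["Hispanic"] * 1000 +
          ["Asian"] * 500 + ["Caucasian"] * 7000)
frame = list(enumerate(labels))

strata = defaultdict(list)
for person_id, group in frame:
    strata[group].append(person_id)

shares = {"African American": .15, "Hispanic": .10, "Asian": .05, "Caucasian": .70}

# Proportionate stratified sample of 500: 75, 50, 25, and 350 cases per stratum.
sizes = {g: round(500 * s) for g, s in shares.items()}
proportionate = {g: random.sample(strata[g], k=n) for g, n in sizes.items()}

# Disproportionate stratified sample: 125 cases from every stratum, with a
# weight applied when strata are combined (e.g., 75/125 = 0.6 for African Americans).
disproportionate = {g: random.sample(strata[g], k=125) for g in shares}
weights = {g: n / 125 for g, n in sizes.items()}

print({g: len(v) for g, v in proportionate.items()})
print(weights)  # {'African American': 0.6, 'Hispanic': 0.4, 'Asian': 0.2, 'Caucasian': 2.8}
```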

Stratified random sampling is a method of sampling in which sample elements are selected separately from population strata that are identified in advance by the researcher.

Proportionate stratified sampling is a sampling method in which elements are selected from strata in exact proportion to their representation in the population.

Disproportionate stratified sampling is sampling in which elements are selected from strata in different proportions from those that appear in the population.

Why would anyone select a sample that is so unrepresentative in the first place? The most common reason is to ensure that cases from smaller strata are included in the sample in sufficient numbers. Only then can separate statistical estimates and comparisons be made between strata (e.g., between African Americans and Caucasians). Remember that one determinant of sample quality is sample size. If few members of a particular group are in the population, they need to be oversampled. Such disproportionate sampling may also result in a more efficient sampling design if the costs of data collection differ markedly between strata or if the variability (heterogeneity) of the strata differs.

Although stratified sampling requires more information than usual prior to sampling (about the size of strata in the population), multistage cluster sampling requires less prior information. Specifically, cluster sampling can be useful when a sampling frame is not available, as is often the case for large populations spread across a wide geographic area or among many different organizations. In fact, if we wanted to obtain a sample from the entire U.S. population, there would be no list available. Yes, there are lists in telephone books of residents in various places who have telephones, lists of those who have registered to vote, lists of those who hold driver’s licenses, and so on. However, all these lists are incomplete: Some people do not list their phone number or do not have a telephone, some people are not registered to vote, and so on. Using incomplete lists such as these would introduce selection bias into our sample.

In such cases, the sampling procedures become a little more complex, and we usually end up working toward the sample we want through a series of steps or stages (hence the name multistage!): First, researchers extract a random sample of groups or clusters of elements that are available and then randomly sample the individual elements of interest from within these selected clusters. So what is a cluster? A cluster is a naturally occurring, mixed aggregate of elements of the population, with each element appearing in one and only one cluster. Schools could serve as clusters for sampling students, blocks could serve as clusters for sampling city residents, counties could serve as clusters for sampling the general population, and businesses could serve as clusters for sampling employees.

Multistage cluster sampling is sampling in which elements are selected in two or more stages, with the first stage being the random selection of naturally occurring clusters and the last stage being the random selection of elements within clusters.

Cluster is a naturally occurring, mixed aggregate of elements of the population.

Drawing a cluster sample is at least a two-stage procedure. First, the researcher draws a random sample of clusters. A list of clusters should be much easier to obtain than a list of all the individuals in each cluster in the population. Next, the researcher draws a random sample of elements within each selected cluster. Because only a fraction of the total clusters are involved, obtaining the sampling frame at this stage should be much easier.

In a cluster sample of city residents, for example, blocks could be the first-stage clusters. A research assistant could walk around each selected block and record the addresses of all occupied dwelling units. Or, in a cluster sample of students, a researcher could contact the schools selected in the first stage and make arrangements with the registrars or office staff to obtain lists of students at each school. Cluster samples often involve multiple stages (see Exhibit 5.5).
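
A two-stage sketch using city blocks as the first-stage clusters; the block and address data are simulated for illustration only:

```python
import random

rng = random.Random(8)

# Simulated first-stage clusters: 200 city blocks, each holding the addresses
# of its occupied dwelling units (a real study would list these in the field).
blocks = {f"block_{b}": [f"block_{b}_unit_{u}" for u in range(rng.randint(20, 60))]
          for b in range(200)}

# Stage 1: draw a random sample of clusters (blocks).
sampled_blocks = rng.sample(sorted(blocks), k=20)

# Stage 2: within each selected block only, list the dwelling units and draw
# a random sample of elements from that much shorter sampling frame.
dwelling_sample = []
for block in sampled_blocks:
    units = blocks[block]
    dwelling_sample += rng.sample(units, k=min(5, len(units)))

print(len(sampled_blocks), len(dwelling_sample))
```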

Many federal government–funded surveys use multistage cluster samples or even combinations of cluster and stratified probability sampling methods. The U.S. Justice Department’s National Crime Victimization Survey (NCVS) is an excellent example of a cluster sample. In the NCVS, the first stage of clusters selected are referred to as primary sampling units (PSUs) and represent a sample of rural counties and large metropolitan areas. The second stage of sampling involves the selection of geographic districts within each of the PSUs that have been listed by the U.S. Census Bureau population census. Finally, a probability sample of residential dwelling units is selected from these geographic districts. These dwelling units, or addresses, represent the last stage of the multistage sampling. Anyone who resides at a selected address who is 12 years of age or older and is a U.S. citizen is eligible for the NCVS sample. Approximately 50,500 housing units or other living quarters are designated for the NCVS each year and are selected in this manner.

How would we evaluate the NCVS sample, using the sample evaluation questions?

  • From what population were the cases selected? The population was clearly defined for each cluster.
  • What method was used to select cases from this population? The random selection method was carefully described.
  • Do the cases that were studied represent, in the aggregate, the population from which they were selected? The unbiased selection procedures make us reasonably confident in the representativeness of the sample.

LOC: Probability Sampling Methods

TIP: Simple Random Sampling

[LO 3]

COG [Knowledge]

DIF [Medium]

  5. Define and explain the major types of non-probability sampling.

There are four common nonprobability sampling methods: (1) availability sampling, (2) quota sampling, (3) purposive sampling, and (4) snowball sampling. Because these methods do not use a random selection procedure, we cannot expect a sample selected with any of these methods to yield a representative sample. They should not be used in quantitative studies if a probability-based method is feasible. Nonetheless, these methods are useful when random sampling is not possible, when a research question calls for an intensive investigation of a small population, or when a researcher is performing a preliminary, exploratory study.

In availability sampling, elements are selected because they are available or easy to find. Consequently, this sampling method is also known as haphazard, accidental, or convenience sampling. As noted earlier, news reporters often use passersby—availability samples—to inject a personal perspective into a news story and show what ordinary people may think of a given topic. Availability samples are also used by university professors and researchers all the time. Have you ever been asked to complete a questionnaire before leaving one of your classes? If so, you may have been selected for inclusion in an availability sample.

Availability sampling is sampling in which elements are selected on the basis of convenience.

Even though they are not generalizable, availability samples are often appropriate in research—for example, when a field researcher is exploring a new setting and trying to get some sense of prevailing attitudes or when a survey researcher conducts a preliminary test of a questionnaire. There are a variety of ways to select elements for an availability sample: standing on street corners and talking to anyone walking by, asking questions of employees who come to pick up their paychecks at a personnel office, or distributing questionnaires to an available and captive audience such as a class or a group meeting. Availability samples are also frequently used in fieldwork studies when the researchers are interested in obtaining detailed information about a particular group. When such samples are used, it is necessary to explicitly describe the sampling procedures used in the methodology section of research reports to acknowledge the nonrepresentativeness of the sample. For example, in a study investigating the prevalence of problem behavior in a sample of students pursuing policing careers, Gray (2011) stated,

[A] convenience/purposive sample was used to survey students attending a medium-sized public, Midwestern university... to determine if differences existed between students majoring in criminal justice (CJ) and students with other majors in terms of deviance and delinquency, drinking and drug use, and an array of other behaviors. (p. 544)

Those of you studying for a policing career will be interested to know that over one quarter of the CJ majors had engaged in serious forms of problematic behavior such as marijuana use. Students who engage in these forms of illegal activities, Gray points out, “should expect to have some level of difficulty with police application and hiring processes” (p. 549). But we digress.

How does the generalizability of survey responses from an availability sample compare to those obtained from probability samples? The difference is that in an availability sample, there is no clearly definable population from which the respondents were drawn, and no systematic technique was used to select the respondents. Consequently, there is not much likelihood that the sample is representative of any target population; the problem is that we can never be sure. Unfortunately, availability sampling often masquerades as a more rigorous form of research. Much like CNN’s use of polling results, noted earlier in the chapter, popular magazines and Internet sites frequently survey their readers by asking them to fill out questionnaires. Follow-up articles then appear in the magazine or on the site, displaying the results under such titles as “What You Think about the Death Penalty for Teenagers.” If the magazine’s circulation is extensive, a large sample can be achieved in this way. The problem is that usually only a tiny fraction of readers fill out the questionnaire, and these respondents are probably unlike other readers who did not have the interest or time to participate. So the survey is based on an availability sample. Even though the follow-up article may be interesting, we have no basis for thinking that the results describe the readership as a whole, much less the larger population. Internet sites that conduct such polls now add a disclaimer similar to this to the online poll’s question of the day: “Not a scientific poll; for entertainment only.”

Quota sampling is intended to overcome availability sampling’s biggest downfall: the likelihood that the sample will just consist of who or what is available, without any concern for its similarity to the population of interest. The distinguishing feature of a quota sample is that quotas are set to ensure that the sample represents certain characteristics in proportion to their prevalence in the population.

Quota sampling is a nonprobability sampling method in which elements are selected to ensure that the sample represents certain characteristics in proportion to their prevalence in the population.

Quota samples are similar to stratified probability samples, but they are generally less rigorous and precise in their selection procedures. Quota sampling simply involves dividing the population into proportions of some group that you want to be represented in your sample. Similar to stratified samples, in some cases, these proportions may actually represent the true proportions observed in the population. At other times, these quotas may represent predetermined proportions of subsets of people you deliberately want to oversample.

The problem is that even when we know that a quota sample is representative of the particular characteristics for which quotas have been set, we have no way of knowing if the sample is representative in terms of any other characteristics. In Exhibit 5.6, for example, quotas have been set for gender only. Under the circumstances, it’s no surprise that the sample is representative of the population only in terms of gender, not in terms of race. Interviewers are only human and guided by their own biases; they may avoid potential respondents with menacing dogs in the front yard, or they could seek out respondents who are physically attractive or who look like they would be easy to interview. Realistically, researchers can set quotas for only a small fraction of the characteristics relevant to a study, so a quota sample is really not much better than an availability sample (although following careful, consistent procedures for selecting cases within the quota limits always helps).

This last point leads to another limitation of quota sampling: You must know the characteristics of the entire population to set the right quotas. In most cases, researchers know what the population looks like in terms of no more than a few of the characteristics relevant to their concerns, and in some cases, they have no such information on the entire population. Does quota sampling remind you of stratified sampling? It’s easy to understand why because they both select sample members partly on the basis of one or more key characteristics. The key difference is quota sampling’s lack of random selection.
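
The quota-filling logic can be sketched as follows, with quotas set for gender only (as in the Exhibit 5.6 example); the stream of available respondents is simulated here, which is exactly the weakness being illustrated:

```python
import random

rng = random.Random(9)

quotas = {"female": 50, "male": 50}   # set in proportion to the population
counts = {"female": 0, "male": 0}
sample = []

def next_available_person():
    """Simulate whoever happens to walk by and agree to be interviewed."""
    return {"gender": rng.choice(["female", "male"]),
            "race": rng.choice(["White", "Black", "Hispanic", "Asian", "Other"])}

while sum(counts.values()) < sum(quotas.values()):
    person = next_available_person()
    if counts[person["gender"]] < quotas[person["gender"]]:
        sample.append(person)                 # accept while the quota is still open
        counts[person["gender"]] += 1
    # Once a gender quota is full, further available people of that gender are skipped.

print(counts)  # matches the quotas exactly
# Nothing constrains race: its distribution is left to whoever was available.
```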

In purposive sampling, each sample element is selected for a purpose, usually because of the unique position of the sample elements. It is sometimes referred to as judgment sampling, because the researcher uses his or her own judgment about whom to select into the sample rather than drawing sample elements randomly. Purposive sampling may involve studying the entire population of some limited group (members of a street gang) or a subset of a population (juvenile parolees). A purposive sample may also be a key informant survey, which targets individuals who are particularly knowledgeable about the issues under investigation.

Purposive sampling is a nonprobability sampling method in which elements are selected for a purpose, usually because of their unique position. Sometimes referred to as judgment sampling.

Rubin and Rubin (1995, p. 66) suggest three guidelines for selecting informants when designing any purposive sampling strategy. Informants should be

  • knowledgeable about the cultural arena or situation or experience being studied,
  • willing to talk, and
  • representative of the range of points of view.

In addition, Rubin and Rubin (1995) suggest continuing to select interviewees until you can pass two tests:

  • Completeness. “What you hear provides an overall sense of the meaning of a concept, theme, or process” (p. 72).
  • Saturation. “You gain confidence that you are learning little that is new from subsequent interview[s]” (p. 73).

Adhering to these guidelines will help ensure that a purposive sample adequately represents the setting or issues being studied. Purposive sampling does not produce a sample that represents some larger population, but it can be exactly what is needed in a case study of an organization, community, or some other clearly defined and relatively limited group. For example, in their classic book Crimes of the Middle Class, Weisburd, Wheeler, Waring, and Bode (1991) examined a sample of white-collar criminal offenders convicted in seven federal judicial districts. These judicial districts were not randomly selected from an exhaustive list of all federal districts but were instead deliberately selected by the researchers because they were thought to provide a suitable amount of geographical diversity. They were also selected because they were believed to have a substantial proportion of white-collar crime cases. The cost of such nonprobability sampling, you should realize by now, is generalizability; we do not know if their findings hold true for white-collar crime in other areas of the country.

For snowball sampling, you identify one member of the population and speak to him or her, then ask that person to identify others in the population and speak to them, then ask them to identify others, and so on. The sample size thus increases with time as a snowball would, rolling down a slope. This technique is useful for hard-to-reach or hard-to-identify interconnected populations where at least some members of the population know each other, such as drug dealers, prostitutes, practicing criminals, gang leaders, and informal organizational leaders.

Snowball sampling is a method of sampling in which sample elements are selected as they are identified by successive informants or interviewees.

St. Jean (2007) used snowball sampling for recruiting offenders in a Chicago neighborhood for interviews. After several years of participant observation (see Chapter 9) within a Chicago community, St. Jean wanted to understand the logic offenders used for setting up street drug dealing and staging robberies. He explained his sampling technique as follows:

I was introduced to the offenders mainly through referrals from relatives, customers, friends, and acquaintances who, after several months (sometimes years), trusted me as someone whose only motive was to understand life in their neighborhood. For instance, the first three drug dealers I interviewed were introduced by their close relatives. Toward the end of each interview, I asked for leads to other subjects, with the first three interviews resulting in eleven additional leads. (p. 26)

One problem with this technique is that the initial contacts may shape the entire sample and foreclose access to some members of the population of interest. Because Decker and Van Winkle (1996) wanted to interview members from several gangs, they had to restart the snowball sampling procedure many times to gain access to a large number of gangs. One problem, of course, was validating whether individuals claiming to be gang members—so-called wannabes—actually were legitimate members. Over 500 contacts were made before the final sample of 99 was complete.

More systematic versions of snowball sampling can also reduce the potential for bias. The most sophisticated version, respondent-driven sampling, gives financial incentives, also called gratuities, to respondents to recruit peers (Heckathorn, 1997). Limitations on the number of incentives that any one respondent can receive increase the sample’s diversity. Targeted incentives can steer the sample to include specific subgroups. When the sampling is repeated through several waves, with new respondents bringing in more peers, the composition of the sample converges on a more representative mix of characteristics. Exhibit 5.7 shows how the sample spreads out through successive recruitment waves to an increasingly diverse pool (Heckathorn, 1997). As with all nonprobability sampling techniques, however, researchers using even the most systematic versions of snowball sampling cannot be confident that their sample is representative of the population of interest.
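
A simplified sketch of wave-by-wave snowball recruitment with a cap on referrals per respondent; the referral network itself is simulated, and real respondent-driven sampling adds incentive accounting and weighting that are not shown here:

```python
import random

rng = random.Random(10)

# Simulated hidden population of 300 people and the peers each one knows.
population = list(range(300))
knows = {p: rng.sample(population, k=5) for p in population}

def snowball_sample(seeds, waves, max_referrals=3):
    """Recruit through successive waves: each respondent refers at most
    max_referrals peers who are not already in the sample."""
    sampled = set(seeds)
    current_wave = list(seeds)
    for _ in range(waves):
        next_wave = []
        for person in current_wave:
            for peer in knows[person][:max_referrals]:
                if peer not in sampled:
                    sampled.add(peer)
                    next_wave.append(peer)
        current_wave = next_wave
    return sampled

print(len(snowball_sample(seeds=[0, 1, 2], waves=4)))
```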

LOC: Sampling Methods

TIP: Nonprobability Sampling Methods

[LO 3]

COG [Knowledge]

DIF [Medium]

  6. When would one use probability sampling? Explain and indicate when each type would be preferred.

Simple Random Sampling: Use simple random sampling when the researcher has a list of all elements in the population; the researcher numbers the elements in the sampling frame and then uses a systematic procedure for picking corresponding numbers from a random number table to choose the sample.

Systematic Random Sampling: A variant of simple random sampling that is a little less time-consuming. From the list of population members in the sampling frame, choose the first element randomly using a random number table, then select every nth element thereafter.

Stratified Random Sampling: All elements in the population (the sampling frame) are differentiated on the basis of their value on some relevant characteristic to form strata. Next, elements are sampled randomly from each stratum. For example, class membership may be the basis for distinguishing individuals in the population of interest (e.g., differentiate XYZ University students by class: freshmen, sophomores, juniors, seniors), and a random sample of students is then drawn from each stratum.

Multistage Cluster Sampling: Multistage cluster sampling can be useful when a sampling frame is not available, as is often the case for large populations spread across a wide geographic area or among many different organizations. In fact, if we wanted to obtain a sample from the entire U.S. population, there would be no list available. Yes, there are lists in telephone books of residents in various places who have telephones, lists of those who have registered to vote, lists of those who hold driver’s licenses, and so on. However, all these lists are incomplete: Some people do not list their phone number or do not have a telephone, some people are not registered to vote, and so on. Using incomplete lists such as these would introduce selection bias into our sample.

In such cases, the sampling procedures become a little more complex, and we usually end up working toward the sample we want through a series of steps or stages (hence the name multistage!): First, researchers extract a random sample of groups or clusters of elements that are available and then randomly sample the individual elements of interest from within these selected clusters. So what is a cluster? A cluster is a naturally occurring, mixed aggregate of elements of the population, with each element appearing in one and only one cluster. Schools could serve as clusters for sampling students, blocks could serve as clusters for sampling city residents, counties could serve as clusters for sampling the general population, and businesses could serve as clusters for sampling employees.
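
A minimal sketch of the two stages, assuming hypothetical schools as clusters and students as elements:

```python
import random

# Hypothetical population organized into naturally occurring clusters (schools),
# with each element (student) appearing in one and only one cluster.
clusters = {f"school_{s}": [f"school_{s}_student_{i}" for i in range(200)] for s in range(50)}

# Stage 1: randomly select a subset of the clusters.
selected_schools = random.sample(list(clusters), k=10)

# Stage 2: randomly sample the individual elements of interest within each selected cluster.
multistage_sample = []
for school in selected_schools:
    multistage_sample.extend(random.sample(clusters[school], k=20))

print(len(multistage_sample))  # 10 schools x 20 students = 200 sampled elements
```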

LOC: Sampling Methods

TIP: Probability Sampling Methods

[LO 4]

COG [Evaluation]

DIF [Medium]

  1. When would one use non-probability sampling? Explain and indicate when each type would be preferred.

There are four common nonprobability sampling methods: (1) availability sampling, (2) quota sampling, (3) purposive sampling, and (4) snowball sampling. Because these methods do not use a random selection procedure, we cannot expect a sample selected with any of these methods to yield a representative sample. They should not be used in quantitative studies if a probability-based method is feasible. Nonetheless, these methods are useful when random sampling is not possible, when a research question calls for an intensive investigation of a small population, or when a researcher is performing a preliminary, exploratory study.

In availability sampling, elements are selected because they are available or easy to find. Consequently, this sampling method is also known as haphazard, accidental, or convenience sampling. As noted earlier, news reporters often use passersby—availability samples—to inject a personal perspective into a news story and show what ordinary people may think of a given topic. Availability samples are also used by university professors and researchers all the time. Have you ever been asked to complete a questionnaire before leaving one of your classes? If so, you may have been selected for inclusion in an availability sample.

Availability sampling is sampling in which elements are selected on the basis of convenience.

Even though they are not generalizable, availability samples are often appropriate in research—for example, when a field researcher is exploring a new setting and trying to get some sense of prevailing attitudes or when a survey researcher conducts a preliminary test of a questionnaire. There are a variety of ways to select elements for an availability sample: standing on street corners and talking to anyone walking by, asking questions of employees who come to pick up their paychecks at a personnel office, or distributing questionnaires to an available and captive audience such as a class or a group meeting. Availability samples are also frequently used in fieldwork studies when the researchers are interested in obtaining detailed information about a particular group. When such samples are used, it is necessary to explicitly describe the sampling procedures used in the methodology section of research reports to acknowledge the nonrepresentativeness of the sample. For example, in a study investigating the prevalence of problem behavior in a sample of students pursuing policing careers, Gray (2011) stated,

[A] convenience/purposive sample was used to survey students attending a medium-sized public, Midwestern university... to determine if differences existed between students majoring in criminal justice (CJ) and students with other majors in terms of deviance and delinquency, drinking and drug use, and an array of other behaviors. (p. 544)

Those of you studying for a policing career will be interested to know that over one quarter of the CJ majors had engaged in serious forms of problematic behavior such as marijuana use. Students who engage in these forms of illegal activities, Gray points out, “should expect to have some level of difficulty with police application and hiring processes” (p. 549). But we digress.

How does the generalizability of survey responses from an availability sample compare to those obtained from probability samples? The difference is that in an availability sample, there is no clearly definable population from which the respondents were drawn, and no systematic technique was used to select the respondents. Consequently, there is not much likelihood that the sample is representative of any target population; the problem is that we can never be sure. Unfortunately, availability sampling often masquerades as a more rigorous form of research. Much like CNN’s use of polling results, noted earlier in the chapter, popular magazines and Internet sites frequently survey their readers by asking them to fill out questionnaires. Follow-up articles then appear in the magazine or on the site, displaying the results under such titles as “What You Think about the Death Penalty for Teenagers.” If the magazine’s circulation is extensive, a large sample can be achieved in this way. The problem is that usually only a tiny fraction of readers fill out the questionnaire, and these respondents are probably unlike other readers who did not have the interest or time to participate. So the survey is based on an availability sample. Even though the follow-up article may be interesting, we have no basis for thinking that the results describe the readership as a whole, much less the larger population. Internet sites that conduct such polls now add a disclaimer similar to this to the online poll’s question of the day: “Not a scientific poll; for entertainment only.”

Quota sampling is intended to overcome availability sampling’s biggest downfall: the likelihood that the sample will just consist of who or what is available, without any concern for its similarity to the population of interest. The distinguishing feature of a quota sample is that quotas are set to ensure that the sample represents certain characteristics in proportion to their prevalence in the population.

Quota sampling is a nonprobability sampling method in which elements are selected to ensure that the sample represents certain characteristics in proportion to their prevalence in the population.

Quota samples are similar to stratified probability samples, but they are generally less rigorous and precise in their selection procedures. Quota sampling simply involves dividing the population into proportions of some group or groups that you want represented in your sample. Similar to stratified samples, in some cases these proportions may actually represent the true proportions observed in the population. At other times, these quotas may represent predetermined proportions of subsets of people you deliberately want to oversample.
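
A rough sketch of quota filling, assuming hypothetical gender quotas and an availability-style stream of passersby; note that there is no random selection anywhere in the process:

```python
# Hypothetical quotas: the sample must contain 50 men and 50 women.
quotas = {"male": 50, "female": 50}
filled = {"male": 0, "female": 0}
quota_sample = []

def try_add(respondent):
    """Add an available respondent only if their group's quota is not yet full."""
    group = respondent["gender"]
    if filled[group] < quotas[group]:
        quota_sample.append(respondent)
        filled[group] += 1

# Respondents are taken in whatever order they happen to be encountered.
available_passersby = [{"id": i, "gender": "male" if i % 3 else "female"} for i in range(300)]
for person in available_passersby:
    try_add(person)

print(filled)  # gender quotas are met, but nothing guarantees representativeness on anything else
```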

The problem is that even when we know that a quota sample is representative of the particular characteristics for which quotas have been set, we have no way of knowing if the sample is representative in terms of any other characteristics. In Exhibit 5.6, for example, quotas have been set for gender only. Under the circumstances, it’s no surprise that the sample is representative of the population only in terms of gender, not in terms of race. Interviewers are only human and guided by their own biases; they may avoid potential respondents with menacing dogs in the front yard, or they could seek out respondents who are physically attractive or who look like they would be easy to interview. Realistically, researchers can set quotas for only a small fraction of the characteristics relevant to a study, so a quota sample is really not much better than an availability sample (although following careful, consistent procedures for selecting cases within the quota limits always helps).

This last point leads to another limitation of quota sampling: You must know the characteristics of the entire population to set the right quotas. In most cases, researchers know what the population looks like in terms of no more than a few of the characteristics relevant to their concerns, and in some cases, they have no such information on the entire population. Does quota sampling remind you of stratified sampling? It’s easy to understand why because they both select sample members partly on the basis of one or more key characteristics. The key difference is quota sampling’s lack of random selection.

In purposive sampling, each sample element is selected for a purpose, usually because of the unique position of the sample elements. It is sometimes referred to as judgment sampling, because the researcher uses his or her own judgment about whom to select into the sample rather than drawing sample elements randomly. Purposive sampling may involve studying the entire population of some limited group (members of a street gang) or a subset of a population (juvenile parolees). A purposive sample may also be a key informant survey, which targets individuals who are particularly knowledgeable about the issues under investigation.

Purposive sampling is a nonprobability sampling method in which elements are selected for a purpose, usually because of their unique position. Sometimes referred to as judgment sampling.

Rubin and Rubin (1995, p. 66) suggest three guidelines for selecting informants when designing any purposive sampling strategy. Informants should be

  • knowledgeable about the cultural arena or situation or experience being studied,
  • willing to talk, and
  • representative of the range of points of view.

In addition, Rubin and Rubin (1995) suggest continuing to select interviewees until you can pass two tests:

  • Completeness. “What you hear provides an overall sense of the meaning of a concept, theme, or process” (p. 72).
  • Saturation. “You gain confidence that you are learning little that is new from subsequent interview[s]” (p. 73).

Adhering to these guidelines will help ensure that a purposive sample adequately represents the setting or issues being studied. Purposive sampling does not produce a sample that represents some larger population, but it can be exactly what is needed in a case study of an organization, community, or some other clearly defined and relatively limited group. For example, in their classic book Crimes of the Middle Class, Weisburd, Wheeler, Waring, and Bode (1991) examined a sample of white-collar criminal offenders convicted in seven federal judicial districts. These judicial districts were not randomly selected from an exhaustive list of all federal districts but were instead deliberately selected by the researchers because they were thought to provide a suitable amount of geographical diversity. They were also selected because they were believed to have a substantial proportion of white-collar crime cases. The cost of such nonprobability sampling, you should realize by now, is generalizability; we do not know if their findings hold true for white-collar crime in other areas of the country.

For snowball sampling, you identify one member of the population and speak to him or her, then ask that person to identify others in the population and speak to them, then ask them to identify others, and so on. The sample size thus increases with time as a snowball would, rolling down a slope. This technique is useful for hard-to-reach or hard-to-identify interconnected populations where at least some members of the population know each other, such as drug dealers, prostitutes, practicing criminals, gang leaders, and informal organizational leaders.

Snowball sampling is a method of sampling in which sample elements are selected as they are identified by successive informants or interviewees.
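
A minimal sketch of chain-referral recruitment, assuming a purely hypothetical referral network; notice how heavily the resulting sample depends on the initial contact:

```python
import random

# Hypothetical hidden network: each member of a hard-to-reach population
# can name a few other members (mapped here as a referral dictionary).
knows = {i: random.sample(range(100), k=3) for i in range(100)}

def snowball_sample(seed, target_size):
    """Start from one known member and follow referrals until the target size is reached."""
    sample, to_contact = [], [seed]
    while to_contact and len(sample) < target_size:
        person = to_contact.pop(0)
        if person not in sample:
            sample.append(person)
            to_contact.extend(knows[person])  # ask each interviewee for further leads
    return sample

print(snowball_sample(seed=0, target_size=25))
```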

St. Jean (2007) used snowball sampling for recruiting offenders in a Chicago neighborhood for interviews. After several years of participant observation (see Chapter 9) within a Chicago community, St. Jean wanted to understand the logic offenders used for setting up street drug dealing and staging robberies. He explained his sampling technique as follows:

I was introduced to the offenders mainly through referrals from relatives, customers, friends, and acquaintances who, after several months (sometimes years), trusted me as someone whose only motive was to understand life in their neighborhood. For instance, the first three drug dealers I interviewed were introduced by their close relatives. Toward the end of each interview, I asked for leads to other subjects, with the first three interviews resulting in eleven additional leads. (p. 26)

One problem with this technique is that the initial contacts may shape the entire sample and foreclose access to some members of the population of interest. Because Decker and Van Winkle (1996) wanted to interview members from several gangs, they had to restart the snowball sampling procedure many times to gain access to a large number of gangs. One problem, of course, was validating whether individuals claiming to be gang members—so-called wannabes—actually were legitimate members. Over 500 contacts were made before the final sample of 99 was complete.

More systematic versions of snowball sampling can also reduce the potential for bias. The most sophisticated version, respondent-driven sampling, gives financial incentives, also called gratuities, to respondents to recruit peers (Heckathorn, 1997). Limitations on the number of incentives that any one respondent can receive increase the sample’s diversity. Targeted incentives can steer the sample to include specific subgroups. When the sampling is repeated through several waves, with new respondents bringing in more peers, the composition of the sample converges on a more representative mix of characteristics. Exhibit 5.7 shows how the sample spreads out through successive recruitment waves to an increasingly diverse pool (Heckathorn, 1997). As with all nonprobability sampling techniques, however, researchers using even the most systematic versions of snowball sampling cannot be confident that their sample is representative of the population of interest.
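
A rough sketch of the wave-based logic of respondent-driven sampling, assuming a hypothetical network and a per-respondent coupon limit; the limit is what keeps any single recruiter from dominating the sample:

```python
import random

# Hypothetical network of 200 members, each able to name five peers.
knows = {i: random.sample(range(200), k=5) for i in range(200)}
COUPONS_PER_RESPONDENT = 3  # cap on how many peers any one respondent may recruit

def rds_sample(seeds, waves):
    """Recruit in successive waves, limiting each respondent to a fixed number of recruits."""
    sample = list(seeds)
    current_wave = list(seeds)
    for _ in range(waves):
        next_wave = []
        for respondent in current_wave:
            recruits = [p for p in knows[respondent] if p not in sample][:COUPONS_PER_RESPONDENT]
            next_wave.extend(recruits)
            sample.extend(recruits)
        current_wave = next_wave
    return sample

print(len(rds_sample(seeds=[0, 1, 2], waves=4)))  # the sample grows and diversifies wave by wave
```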

LOC: Sampling Methods

TIP: Nonprobability Sampling Methods

[LO 5]

COG [Evaluation]

DIF [Medium]

  1. What are units of analysis? What errors can be made when generalizing from one unit of analysis to another? (5-18)

Researchers should make sure that their conclusions reflect the units of analysis in their study. For example, a conclusion that crime increases as unemployment increases could imply that individuals who lose their jobs are more likely to commit a crime, that a community with a high unemployment rate is also likely to have a high crime rate, or both. Conclusions about processes at the individual level should be based on individual-level data; conclusions about group-level processes should be based on data collected about groups. In most cases, violation of this rule creates one more reason to suspect the validity of the causal conclusions.

A researcher who draws conclusions about individual-level processes from group-level data is constructing an ecological fallacy (see Exhibit 5.8). The conclusions may or may not be correct, but we must recognize that group-level data do not describe individual-level processes. For example, a researcher may examine prison employee records and find that the higher the percentage of correctional workers without college education in prisons, the higher the rate of inmate complaints of brutality by officers in prisons. But the researcher would commit an ecological fallacy if she then concluded that individual correctional officers without a college education were more likely to engage in acts of brutality against inmates. This conclusion is about an individual-level causal process (the relationship between the education and criminal propensities of individuals), even though the data describe groups (prisons). It could actually be that college-educated officers are the ones more likely to commit acts of brutality. If more officers in prison are not college educated, perhaps the college-educated officers feel they would not be suspected.

Bear in mind that conclusions about individual processes based on group-level data are not necessarily wrong. The data simply do not provide information about processes at the individual level. Suppose we find that communities with higher average incomes have lower crime rates. The only thing special about these communities may be that they have more individuals with higher incomes who tend to commit fewer crimes. Even though we collected data at the group level and analyzed them at the group level, they reflect a causal process at the individual level (Sampson & Lauritsen, 1994, pp. 80–83).

Conversely, when data about individuals are used to make inferences about group-level processes, a problem occurs that can be thought of as the mirror image of the ecological fallacy: the reductionist fallacy, also known as reductionism or the individualist fallacy.

The solution to these problems is to know what the units of analysis and units of observation were in a study and to take these into account when weighing the credibility of the researcher’s conclusions. The goal is not to reject conclusions that refer to a level of analysis different from what was actually studied. Instead, the goal is to consider the likelihood that an ecological fallacy or a reductionist fallacy has been made when estimating the causal validity of the conclusions.

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: Individual and Group Units of Analysis

[LO 6]

COG [Analysis]

DIF [Medium]

  1. Why might “person on the street” interviews, while interesting, not tell much about how the population of one city views police brutality? How can sampling assist?

Person-on-the-street interviews are availability samples: respondents are chosen haphazardly, simply because they happen to be passing by at a particular place and time, so there is no way to know whether their views represent those of the city’s population as a whole. Probability sampling can assist because random samples that are successfully implemented do not disproportionately select particular groups within the population and therefore avoid systematic bias. The Gallup Poll is a good example of the accuracy of random samples. For example, in 2012, the final Gallup prediction of 54% for Obama was within three percentage points of his winning total of 51%. The four most common methods for drawing random samples are simple random sampling, systematic random sampling, stratified random sampling, and multistage cluster sampling.

LOC: Sampling Methods

TIP: Probability Sampling Methods

[LO 3]

COG [Synthesis]

DIF [Medium]

  1. Explain how generalizability is evaluated. What are the two types of generalizability mentioned in the text? How do they differ and how are they similar? (5-3)

As noted in Chapter 2, generalizability has two aspects. Can the findings from a sample of the population be generalized to the population from which the sample was selected? Sample generalizability refers to the ability to generalize from a sample (subset) of a larger population to that population itself (e.g., using those Alaskan students’ survey results to speak more generally about rural students’ perceptions of fear). This is the most common meaning of generalizability. Can the findings from a study of one population be generalized to another, somewhat different population? This is cross-population generalizability, which refers to the ability to generalize from findings about one group, population, or setting to other groups, populations, or settings (see Exhibit 5.2). In this book, we use the term external validity to refer only to cross-population generalizability, not to sample generalizability. The two types are similar in that both ask whether findings hold beyond the cases actually studied; they differ in whether the target of the generalization is the population from which the sample was drawn or a different group, population, or setting altogether.

LOC: Sample Planning

TIP: Evaluate Generalizability

[LO 3]

COG [Evaluation]

DIF [Medium]

  1. What is the difference between proportionate stratified random sampling and disproportionate stratified random sampling? Describe and distinguish each.

In proportionate stratified sampling, each stratum is sampled in proportion to its size in the population, so the distribution of the stratifying characteristic in the sample mirrors its distribution in the population. In disproportionate stratified sampling, the proportion of each stratum that is included in the sample is intentionally varied from what it is in the population. In the case of the sample stratified by ethnicity, you might select equal numbers of cases from each racial or ethnic group: 125 African Americans (25% of the sample), 125 Hispanics (25%), 125 Asians (25%), and 125 Caucasians (25%). In this type of sample, the probability of selection of every case is known but unequal between strata. You know what the proportions are in the population, so you can easily adjust your combined sample accordingly. For instance, if you want to combine the ethnic groups and estimate the average income of the total population, you would have to weight each case in the sample. The weight is a number you multiply by the value of each case based on the stratum it is in. For example, you would multiply the incomes of all African Americans in the sample by 0.6 (75/125), the incomes of all Hispanics by 0.4 (50/125), and so on. Weighting in this way reduces the influence of the oversampled strata and increases the influence of the undersampled strata to just what they would have been if pure probability sampling had been used.
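
A minimal sketch of this weighting arithmetic, using hypothetical mean incomes purely for illustration (only two of the four strata are shown):

```python
# Disproportionate sample: 125 cases per stratum, even though proportionate shares
# would have been 75 African Americans and 50 Hispanics (hence weights of 75/125 and 50/125).
sample = {
    "African American": {"n": 125, "mean_income": 40_000, "weight": 75 / 125},   # 0.6
    "Hispanic":         {"n": 125, "mean_income": 38_000, "weight": 50 / 125},   # 0.4
    # ...the remaining strata would be listed and weighted the same way
}

# Weighting shrinks the oversampled strata back to the size they would have had
# under pure probability (proportionate) sampling before combining the groups.
weighted_total = sum(s["n"] * s["weight"] * s["mean_income"] for s in sample.values())
weighted_n = sum(s["n"] * s["weight"] for s in sample.values())
print(weighted_total / weighted_n)  # weighted estimate of average income for the combined groups
```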

LOC: Probability Sampling Methods

TIP: Stratified Random Sampling

[LO 4]

COG [Analysis]

DIF [Medium]

  1. Name and describe at least four ways to evaluate sample quality.
  • We cannot evaluate the quality of a sample if we do not know what population it is supposed to represent. If the population is unspecified because the researchers were never clear about just what population they were trying to sample, then we can safely conclude that the sample itself is no good.
  • We cannot evaluate the quality of a sample if we do not know exactly how cases in the sample were selected from the population. If the method was specified, we then need to know whether cases were selected in a systematic fashion or on the basis of chance. In any case, we know that a haphazard method of sampling (as in person-on-the-street interviews) undermines generalizability.
  • Sample quality is determined by the sample actually obtained, not just by the sampling method itself. That is, findings are only as generalizable as the sample from which they are drawn. If many of the people (or other elements) selected for our sample do not respond or participate in the study, even though they have been selected for the sample, generalizability is compromised.
  • We need to be aware that even researchers who obtain very good samples may talk about the implications of their findings for some group that is larger than or just different from the population they actually sampled. For example, findings from a representative sample of students in one university often are discussed as if they tell us about university students in general. Maybe they do; the problem is, we just don’t know.

LOC: Sampling Methods

TIP: Lessons About Sample Quality

[LO 2]

COG [Evaluation]

DIF [Hard]

  1. Identify and distinguish the Ecological Fallacy and Reductionism. (5-19)

A researcher who draws conclusions about individual-level processes from group-level data is constructing an ecological fallacy (see Exhibit 5.8). The conclusions may or may not be correct, but we must recognize that group-level data do not describe individual-level processes. For example, a researcher may examine prison employee records and find that the higher the percentage of correctional workers without college education in prisons, the higher the rate of inmate complaints of brutality by officers in prisons. But the researcher would commit an ecological fallacy if she then concluded that individual correctional officers without a college education were more likely to engage in acts of brutality against inmates. This conclusion is about an individual-level causal process (the relationship between the education and criminal propensities of individuals), even though the data describe groups (prisons). It could actually be that college-educated officers are the ones more likely to commit acts of brutality. If more officers in prison are not college educated, perhaps the college-educated officers feel they would not be suspected.

Bear in mind that conclusions about individual processes based on group-level data are not necessarily wrong. The data simply do not provide information about processes at the individual level. Suppose we find that communities with higher average incomes have lower crime rates. The only thing special about these communities may be that they have more individuals with higher incomes who tend to commit fewer crimes. Even though we collected data at the group level and analyzed them at the group level, they reflect a causal process at the individual level (Sampson & Lauritsen, 1994, pp. 80–83).

Conversely, when data about individuals are used to make inferences about group-level processes, a problem occurs that can be thought of as the mirror image of the ecological fallacy: the reductionist fallacy, also known as reductionism or the individualist fallacy (see Exhibit 5.8).

The solution to these problems is to know what the units of analysis and units of observation were in a study and to take these into account when weighing the credibility of the researcher’s conclusions. The goal is not to reject conclusions that refer to a level of analysis different from what was actually studied. Instead, the goal is to consider the likelihood that an ecological fallacy or a reductionist fallacy has been made when estimating the causal validity of the conclusions.

LOC: Units of Analysis and Errors in Causal Reasoning

TIP: The Ecological Fallacy and Reductionism

[LO 6]

COG [Synthesis]

DIF [Hard]

  1. Explain how to evaluate the quality of a sample. (5-17)
  • We cannot evaluate the quality of a sample if we do not know what population it is supposed to represent. If the population is unspecified because the researchers were never clear about just what population they were trying to sample, then we can safely conclude that the sample itself is no good.
  • We cannot evaluate the quality of a sample if we do not know exactly how cases in the sample were selected from the population. If the method was specified, we then need to know whether cases were selected in a systematic fashion or on the basis of chance. In any case, we know that a haphazard method of sampling (as in person-on-the-street interviews) undermines generalizability.
  • Sample quality is determined by the sample actually obtained, not just by the sampling method itself. That is, findings are only as generalizable as the sample from which they are drawn. If many of the people (or other elements) selected for our sample do not respond or participate in the study, even though they have been selected for the sample, generalizability is compromised.
  • We need to be aware that even researchers who obtain very good samples may talk about the implications of their findings for some group that is larger than or just different from the population they actually sampled. For example, findings from a representative sample of students in one university often are discussed as if they tell us about university students in general. Maybe they do; the problem is, we just don’t know.

LOC: Probability Sampling Methods

TIP: Lessons About Sample Quality

[LO 2]

COG [Evaluation]

DIF [Medium]

  1. How does nonresponse impact a research study? Will the findings from a randomly selected sample be generalizable to the population from which it was selected if there is a large nonresponse rate? Why or why not?

Even though a random sample has no systematic bias, it certainly will have some sampling error due to chance. The probability of selecting a head is .5 in a single toss of a coin, and in 20, 30, or however many tosses of a coin you like. Be aware, however, that it is perfectly possible to toss a coin twice and get a head both times. The random sample of the two sides of the coin is selected in an unbiased fashion, but it still is unrepresentative. Imagine randomly selecting a sample of 10 people from a population comprising 50 men and 50 women. Just by chance, it is possible that your sample of 10 people will include seven women and only three men. Fortunately, we can determine mathematically the likely degree of sampling error in an estimate based on a random sample (as you will see later in this chapter), assuming that the sample’s randomness has not been destroyed by a high rate of nonresponse or by poor control over the selection process. A large nonresponse rate threatens exactly this assumption: if many of the people selected for the sample do not respond, and those nonrespondents differ systematically from the people who do, the sample is no longer a random subset of the population, and the findings cannot be safely generalized to the population from which the sample was selected.
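
A rough simulation sketch of this chance sampling error, using the hypothetical population of 50 men and 50 women described above:

```python
import random

# Draw many unbiased random samples of 10 people and count how often
# the sample is noticeably lopsided (seven or more of either gender) just by chance.
population = ["man"] * 50 + ["woman"] * 50
trials = 10_000
lopsided = 0

for _ in range(trials):
    sample = random.sample(population, k=10)
    women = sample.count("woman")
    if women <= 3 or women >= 7:
        lopsided += 1

print(lopsided / trials)  # a noticeable share of unbiased samples are still unrepresentative
```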

LOC: Sampling Methods

TIP: Probability Sampling Methods

[LO 3]

COG [Analysis]

DIF [Hard]
