Criminology Research 4e | Test Bank by Ronet D. Bachman

Chapter 9: Analyzing Content

Test Bank

MULTIPLE CHOICE

  1. The analysis of data that were originally collected by someone else at another time is known as (9-1)
  A. Primary data collection
  B. Stolen data collection
  C. Secondary data analysis
  D. Public domain information

ANS [C]

LOC: What Are Secondary Data?

TIP: What Are Secondary Data?

[LO 2]

COG [Knowledge]

DIF [Easy]

  1. A major type of secondary data is (are) (9-2)
  A. Surveys
  B. Official records
  C. Both A and B
  D. None of the above

ANS [C]

LOC: Analyzing Content

TIP: What Are Secondary Data?

[LO 2]

COG [Knowledge]

DIF [Easy]

  1. The National Archive of Criminal Justice Data provides (9-2)
  A. More than 1,000 criminal justice data collections to private researchers
  B. 600+ criminal justice data collections to the public
  C. The FBI Uniform Crime Reporting program data
  D. Both B and C

ANS [D]

LOC: What Are Secondary Data?

TIP: What Are Secondary Data?

[LO 2]

COG [Comprehension]

DIF [Easy]

  1. The most unique source of qualitative data available for researchers in the U.S. is (9-3)
  A. The ICPSR
  B. The NCVS
  C. The National Jail Census
  D. The Human Relations Area Files (HRAF) at Yale University

ANS [D]

LOC: What Are Secondary Data?

TIP: Qualitative data sources

[LO 2]

COG [Application]

DIF [Medium]

  1. Research that seeks to understand the structure, nature, or scope of a nation’s or nations’ criminal justice system is (9-4)
  A. Exploratory research
  B. Descriptive comparative research
  C. Explanatory comparative research
  D. Analytic comparative research

ANS [B]

LOC: What Are Secondary Data?

TIP: Comparative Methods

[LO 1]

COG [Comprehension]

DIF [Easy]

  1. The systematic, objective, quantitative analysis of message characteristics is (9-6)
  A. Construct analysis
  B. Comparative analysis
  C. Content analysis
  D. None of the above

ANS [C]

LOC: Comparative Methods

TIP: Content Analysis

[LO 3]

COG [Knowledge]

DIF [Easy]

  1. The goal of content analysis is to develop (9-6)
  A. Instincts from text
  B. Identifying instincts from a population
  C. Inferences from text
  D. A sample of units from the population

ANS [C]

LOC: What Are Secondary Data?

TIP: Content Analysis

[LO 2]

COG [Comprehension]

DIF [Easy]

  1. Crime mapping is generally used to identify the (9-8)
  A. General distribution of social disorganization in certain mapped areas
  B. Spatial distribution of crime along social indicators such as poverty and social disorganization
  C. Location of crime analyses but not to communicate results
  D. All of the above

ANS [B]

LOC: Content Analysis

TIP: Crime Mapping

[LO 4]

COG [Comprehension]

DIF [Easy]

  1. Shaw and McKay conducted a landmark analysis in criminology on (9-9)
  A. Adult crime in New York neighborhoods
  B. Juvenile delinquency in Chicago neighborhoods
  C. Adult crime in Chicago neighborhoods
  D. Juvenile delinquency in New York neighborhoods

ANS [B]

LOC: Crime Mapping

TIP: Case Study: Mapping Crime in Cities

[LO 4]

COG [Comprehension]

DIF [Medium]

  1. Rosenfeld, Bray, and Egley (1999) studied whether gang membership (9-9)
  A. Pushed members to engage in violence or merely exposed them to violent persons and situations
  B. Kept members from engaging in violence by not exposing them to violent persons
  C. Encouraged members to engage in a small amount of gang violence
  D. Encouraged members to avoid gang-motivated research in general

ANS [A]

LOC: Crime Mapping

TIP: Case Study: Gang Homicides St. Louis, Missouri

[LO 4]

COG [Application]

DIF [Easy]

  1. A very large dataset that is accessible in computer-readable form and is used to reveal patterns, trends, and associations between variables is known as (9-11)
  A. Mega Data
  B. Big Data
  C. Large Datasets
  D. All of these are interchangeable (mean the same thing)

ANS [B]

LOC: Crime Mapping

TIP: Big Data

[LO 4]

COG [Comprehension]

DIF [Easy]

  1. A Risk Terrain Model (RTM) (9-12)
  A. Uses data from a single, very large source to predict the probability of crime occurring in an area
  B. Uses data from several sources to predict the probability of crime occurring in the future
  C. Uses mapping software to help law enforcement predict where criminals will be arrested
  D. Uses data mapping to assess an area’s groundcover

ANS [B]

LOC: Big Data

TIP: Case Study: Predicting Where Crime Will Occur

[LO 5]

COG [Comprehension]

DIF [Medium]

  1. The Revised Domestic Violence Screening Instrument (DVSI-R) (Williams, 2012) is used to (9-13)
  A. Determine the effectiveness of domestic violence programs
  B. Create Big Data for perpetrators of domestic violence prior to arrest
  C. Document decreasing rates of victimization since more victims are willing to come forward
  D. Determine the risk of recidivism by perpetrators of domestic violence

ANS [D]

LOC: Big Data

TIP: Case Study: Predicting Recidivism with Big Data

[LO 5]

COG [Application]

DIF [Medium]

  1. Which is a methodological challenge for comparative research? (9-14)
  A. Small numbers of cases
  B. Spotty historical records
  C. Variable cross-national record-keeping practices
  D. All of the above

ANS [D]

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 1]

COG [Comprehension]

DIF [Easy]

  1. Different cultural and linguistic contexts can limit confidence in (9-14)
  A. Measures
  B. Samples
  C. Causal conclusions
  D. All of the above

ANS [D]

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 4]

COG [Application]

DIF [Medium]

  1. Challenges in data collection in comparative research include (9-14)
  A. Uncertainty about methods of data collection
  B. Lack of a maximal fit between concepts in the primary study and concepts in the current investigation
  C. Lack of knowledge about qualifications of who was responsible for data collection
  D. All of the above

ANS [D]

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 2]

COG [Comprehension]

DIF [Medium]

  1. Even when data are collected by an official government agency, there may be concern with (9-15)
  A. Similarities in data collection systems
  B. Political pressures placed on a population after they participated in research
  C. Data quality
  D. The fact that secondary data may not be used to study crime

ANS [C]

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 5]

COG [Comprehension]

DIF [Easy]

  1. Researchers who rely on secondary data make trade-offs between (9-15)
  A. Their desire to ask key questions in addition to a dataset
  B. Their ability to use a particular dataset and the specific hypothesis they can test
  C. The adequacy of an abandoned dataset and a less adequate source of data
  D. Their ability to ask questions that cannot be modified and available data that has been modified

ANS [B]

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 6]

COG [Comprehension]

DIF [Medium]

  1. A problem that comparative researchers often face is (9-15)
  A. Too much data from historical or geographical periods
  B. A lack of data from some historical periods
  C. A lack of data from some geographical units
  D. Both B and C

ANS [D]

LOC: Methodological Issues When Using Secondary Data

TIP: Measuring Across Contexts

[LO 1]

COG [Comprehension]

DIF [Easy]

  1. Using a ____________ sampling strategy, researchers select cases because they reflect theoretically important distinctions (9-16)
  A. Availability
  B. Probability
  C. Purposive
  D. Quantitative

ANS [C]

LOC: Methodological Issues When Using Secondary Data

TIP: Sampling Across Time and Place

[LO 2]

COG [Application]

DIF [Medium]

  1. If you use geographic units such as nations to be sampled for comparative purposes, it is assumed that (9-16)
  A. Nations are dependent on each other, which makes it easier to find similarities to compare
  B. Geography doesn’t matter that much in this type of sampling
  C. Common international influences do not affect nations in a way that would disturb the sample
  D. Nations are independent of each other in terms of the variables being examined

ANS [D]

LOC: Methodological Issues When Using Secondary Data

TIP: Sampling Across Time and Place

[LO 1]

COG [Analysis]

DIF [Hard]

  1. John Stuart Mill proposed establishing (9-16)
  A. A causal relation where the values of cases that disagree cannot agree on values of other variables
  B. A causal relation in which the values of cases that agree on an outcome variable also agree on the value of the causal variable
  C. That causality does not depend on agreement
  D. That agreement does not depend on causality

ANS [B]

LOC: Methodological Issues When Using Secondary Data

TIP: Identifying Causes

[LO 6]

COG [Synthesis]

DIF [Hard]

  1. The Freedom of Information Act (FOIA) stipulates that (9-17)
  A. Only persons who have a security clearance of top secret or above can have access to agency records
  B. Most persons may have a right to records unless records are exempted
  C. All persons have a right to access all federal agency records unless the records are specifically exempted
  D. Many persons may have a right to access non-federal agency records

ANS [C]

LOC: Methodological Issues When Using Secondary Data

TIP: Ethical Issues When Analyzing Available Data and Content

[LO 2]

COG [Application]

DIF [Medium]

  1. A well-known dataset that collects information on historical and contemporary population characteristics is conducted by the (9-2)
  A. National Crime Victimization Survey
  B. Federal Correction Facilities (Survey of Inmates)
  C. State Census
  D. U.S. Census Bureau

ANS [D]

LOC: What Are Secondary Data?

TIP: Census enumerations: Historical and contemporary population characteristics

[LO 6]

COG [Knowledge]

DIF [Easy]

  1. The General Social Survey is an annual survey (conducted since 1972) that collects attitudinal information about (9-2)
  A. Social issues
  B. Alcohol use
  C. Drug use
  D. All of the above

ANS [D]

LOC: What Are Secondary Data?

TIP: Social indicators and behavior

[LO 6]

COG [Knowledge]

DIF [Easy]

  1. Thacher’s (2011) study examined whether a jurisdiction’s racial/ethnic distribution and socioeconomic status influenced the degree to which it was protected by police resources, utilizing (9-3)
  A. UCR police data
  B. U.S. Census data
  C. Both A and B
  D. None of the above

ANS [C]

LOC: What Are Secondary Data?

TIP: Case Study: Police Protection by Neighborhood

[LO 6]

COG [Analysis]

DIF [Medium]

  1. In order to measure poverty, Thacher created a measure called the (9-3)
  A. Income Index (II)
  B. Culture Forces Index (CFI)
  C. Concentration Index (CI)
  D. None of the above

ANS [C]

LOC: What Are Secondary Data?

TIP: Case Study: Police Protection by Neighborhood

[LO 6]

COG [Evaluation]

DIF [Hard]

  1. Archer and Gartner created the Comparative Crime Data File (CCDF) in 1984 in order to, among other things, determine if (9-5)
  A. War might decrease the homicide rates within a nation once a war ends
  B. War might increase the homicide rates within a nation once a war ends
  C. War would have no effect on homicide rates
  D. War might only have an effect on homicide rates if it crossed several borders

ANS [B]

LOC: Comparative Methods

TIP: Case Study: Homicide Across Nations

[LO 1]

COG [Evaluation]

DIF [Medium]

  1. The population of documents selected for analysis should be appropriate to the research question of interest, and one of the first steps should be to determine the (9-6)
  A. Units of analysis to be studied
  B. Concentration Index to be studied
  C. Easily identifiable crime maps for an area
  D. Underlying factors in interpreting crimes

ANS [A]

LOC: Content Analysis

TIP: Identifying a Population of Documents or Other Textual Sources

[LO 3]

COG [Analysis]

DIF [Medium]

  1. The primary purpose of the gang homicide research done by Rosenfeld et al. (1999) was to study whether gang membership (9-9)
  A. Pushed members to engage in violence
  B. Exposed members to violent persons
  C. Exposed members to violent situations
  D. All of the above

ANS [D]

LOC: Crime Mapping

TIP: Case Study: Gang Homicides in St. Louis, Missouri

[LO 5]

COG [Synthesis]

DIF [Medium]

TRUE/FALSE

  1. Secondary data analysis is the analysis of data that were originally collected by someone else at another time. (9-1)
  A. True
  B. False

ANS [A]

LOC: Analyzing Content

TIP: What Are Secondary Data?

[LO 2]

COG [Knowledge]

DIF [Easy]

  1. Unofficial records are a major type of secondary data. (9-2)
  A. True
  B. False

ANS [B]

LOC: Analyzing Content

TIP: What Are Secondary Data?

[LO 2]

COG [Knowledge]

DIF [Easy]

  1. Historical documents are a major type of secondary data. (9-2)
  A. True
  B. False

ANS [A]

LOC: Analyzing Content

TIP: What Are Secondary Data?

[LO 2]

COG [Knowledge]

DIF [Easy]

  1. There are many more qualitative datasets available for secondary analysis than quantitative datasets. (9-3)
  A. True
  B. False

ANS [B]

LOC: What Are Secondary Data?

TIP: Qualitative data sources

[LO 6]

COG [Knowledge]

DIF [Easy]

  1. The Concentration Index (CI) measures income equality. (9-3)
  A. True
  B. False

ANS [B]

LOC: What Are Secondary Data?

TIP: Case Study: Police Protection by Neighborhood

[LO 6]

COG [Knowledge]

DIF [Easy]

  1. Descriptive comparative research seeks to understand the structure, nature, or scope of a nation’s (or nations’) criminal justice systems. (9-4)
  A. True
  B. False

ANS [A]

LOC: What Are Secondary Data?

TIP: Comparative Methods

[LO 1]

COG [Comprehension]

DIF [Medium]

  1. Research that compares data from more than one time period is called comparative research. (9-5)
  A. True
  B. False

ANS [A]

LOC: What Are Secondary Data?

TIP: Comparative Methods

[LO 1]

COG [Comprehension]

DIF [Easy]

  1. Research that seeks to understand how national systems work is called analytic comparative research. (9-5)
  A. True
  B. False

ANS [A]

LOC: What Are Secondary Data?

TIP: Comparative Methods

[LO 1]

COG [Comprehension]

DIF [Easy]

  1. The Comparative Crime Data File (CCDF) has extensive crime data from 110 nations and 44 international cities. (9-5)
  A. True
  B. False

ANS [A]

LOC: Comparative Methods

TIP: Case Study: Homicide Across Nations

[LO 1]

COG [Knowledge]

DIF [Easy]

  1. Using the CCDF, Archer and Gartner found that very few combatant nations experienced substantial increases in homicide following a war. (9-5)
  A. True
  B. False

ANS [B]

LOC: Comparative Methods

TIP: Case Study: Homicide Across Nations

[LO 1]

COG [Comprehension]

DIF [Easy]

  1. The goal of a content analysis is to develop inferences from text. (9-6)
  A. True
  B. False

ANS [A]

LOC: Comparative Methods

TIP: Content Analysis

[LO 3]

COG [Knowledge]

DIF [Easy]

  1. The first step in content analysis is to identify a population of documents or other textual sources for study. (9-6)
  A. True
  B. False

ANS [A]

LOC: Content Analysis

TIP: Identifying a Population of Documents or Other Textual Sources

[LO 3]

COG [Application]

DIF [Easy]

  1. When using content analysis, the researcher must define the categories into which the units are to be coded and then determine the unit of text to be coded. (9-6)
  A. True
  B. False

ANS [B]

LOC: Content Analysis

TIP: Identifying a Population of Documents or Other Textual Sources

[LO 3]

COG [Application]

DIF [Medium]

  1. After coding procedures are developed, the reliability should be assessed by comparing different coders’ codes for the same variable. (9-7)
  A. True
  B. False

ANS [A]

LOC: Content Analysis

TIP: Identifying a Population of Documents or Other Textual Sources

[LO 3]

COG [Application]

DIF [Medium]
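The reliability check referenced in the item above is often computed as simple percent agreement between two coders' codes for the same variable. A minimal sketch in Python; the codes below are hypothetical, chosen only to illustrate the calculation:

```python
def percent_agreement(coder_a, coder_b):
    """Share of coded units on which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b), "coders must rate the same units"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes for 10 text units (e.g., 1 = violent content, 0 = not)
codes_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
codes_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

print(percent_agreement(codes_a, codes_b))  # 0.9 (coders disagree on one unit)
```

Note that percent agreement is the simplest reliability statistic; chance-corrected measures (e.g., Cohen's kappa) are often preferred in published content analyses.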

  1. The process of using a geographic information system to conduct spatial analysis of crime problems and other police-related issues is known as crime mapping. (9-8)
  A. True
  B. False

ANS [A]

LOC: Content Analysis

TIP: Crime Mapping

[LO 5]

COG [Application]

DIF [Medium]

  1. Big Data is a dataset that is accessible in computer readable form to reveal small variances between variables. (9-11)
  A. True
  B. False

ANS [B]

LOC: Crime Mapping

TIP: Big Data

[LO 5]

COG [Analysis]

DIF [Medium]

  1. Using secondary data presents no methodological challenges because there is no human contact involved with it. (9-17)
  A. True
  B. False

ANS [B]

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 2]

COG [Comprehension]

DIF [Medium]

  1. Data quality is never a concern with secondary data because they are collected by official agencies. (9-15)
  A. True
  B. False

ANS [B]

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 6]

COG [Application]

DIF [Medium]

  1. One issue with historical data is that (for a variety of reasons) it may be an unrepresentative selection of materials from the past. (9-15)
  A. True
  B. False

ANS [A]

LOC: Methodological Issues When Using Secondary Data

TIP: Measuring Across Contexts

[LO 6]

COG [Application]

DIF [Medium]

  1. There is no ethical concern for subject confidentiality when original records are analyzed. (9-17)
  A. True
  B. False

ANS [B]

LOC: Methodological Issues When Using Secondary Data

TIP: Ethical Issues When Analyzing Available Data and Content

[LO 6]

COG [Application]

DIF [Medium]

ESSAY

  1. What is comparative research? Why is it important? What are the difficulties in measurement across contexts? (9-4)

1. Research that seeks to understand the structure, nature, or scope of a nation’s or nations’ criminal justice systems or rates of crime is descriptive comparative research.

2. Research that seeks to understand how national systems work and the factors related to their operations is analytic comparative research.

There is also variability in the scope of comparative research. Studies can examine crime patterns in single nations, make a comparison across several nations, or conduct transnational research, which generally explores how cultures and nations deal with crime that transcends their borders. Investigating terrorism is one emerging form of transnational research. Bennett (2004) notes,

One of the outcomes of the terrorist attacks in 2001 was a shocking awareness that terrorism is international and inextricably tied to transnational criminal activity.... We need to understand how criminal and terrorist organizations fund themselves and exploit our inability to link and analyze criminal activity that transcends national borders. (p. 8)

Although comparative methods are often associated with cross-national comparisons, research examining smaller aggregates such as states and cities can also be subsumed under the comparative research umbrella. Comparative research methods allow for a broader vision about social relations than is possible with cross-sectional research limited to one location.

LOC: What Are Secondary Data?

TIP: Comparative Methods

[LO 1]

COG [Comprehension]

DIF [Medium]

  1. What are the four major types of secondary data? (9-2)

LOC: Analyzing Content

TIP: What Are Secondary Data?

[LO 2]

COG [Knowledge]

DIF [Easy]

  1. Explain how secondary data analysis is different from the methods we have already examined in this book.

Secondary data analysis is analysis of data collected by someone other than the researcher or the researcher’s assistant.

Secondary data analysis has a long history. Since the latter part of the 17th century, people have been monitoring the state of their localities by examining rates of population, mortality, marriage, disease, climate, and crime. Adolphe Quételet, an ambitious Belgian mathematician, was one of the first to show that the annual number of murders reported in France from 1826 to 1831 was relatively constant and, further, that the proportion of murders committed with guns, swords, knives, stones, kicks and punches, and strangulation was also relatively constant. He concluded that although we may not know who will kill whom by what means, we do know, with a high degree of probability, that a certain number of murders of a certain type will happen every year in France (Menand, 2001). This was one of the first attempts to apply the methods of science to social phenomena. You are also probably familiar with Émile Durkheim’s (1951) use of official statistics on suicide rates in different areas to examine the relationship between religion and suicide.

In this chapter, we will tell you about a number of datasets, including surveys and official records that are publicly available for research purposes. Then we will examine several research methods that rely on secondary data, including cross-cultural research, content analysis, and crime mapping. And finally, because using data originally gathered for other purposes poses unique concerns for a researcher, we will spend the latter part of the chapter highlighting these methodological issues.

In general, there are four major types of secondary data: surveys, official statistics, official records, and other historical documents. Although a dataset can be obtained by an agreement between two or more researchers, many researchers obtain data through the Inter-University Consortium for Political and Social Research (ICPSR) (http://www.icpsr.umich.edu). Data stored at ICPSR primarily include surveys, official records, and official statistics. ICPSR stores data and information for nearly 5,000 sources and studies, including those conducted independently and those conducted by the U.S. government. Riedel (2000) has documented the majority of datasets that are available from ICPSR and that are appropriate for crime research, including the following:

Census enumerations: Historical and contemporary population characteristics. The most well-known datasets within this category are the surveys conducted every decade by the U.S. Census Bureau. Linking information from this dataset (e.g., neighborhood characteristics including such things as poverty and residential mobility) to crime data at the same level (e.g., census block, county) has provided researchers with a rich source of data to test theories of crime.

The National Archive of Criminal Justice Data (NACJD). The Bureau of Justice Statistics and National Institute of Justice cosponsored NACJD, which provides more than 600 criminal justice data collections to the public. A sample of these datasets includes the following:

  • Capital Punishment in the United States
  • Expenditure and Employment Data for the Criminal Justice System
  • Gang Involvement in Rock Cocaine Trafficking in Los Angeles, 1984–1985
  • Criminal Careers and Crime Control in Massachusetts
  • Longitudinal Research Design, Phase I, 1940–1965
  • Changing Patterns of Drug Use and Criminality Among Crack Cocaine Users in New York City: Criminal Histories and CJ Processing, 1983–1984, 1986
  • The National Crime Victimization Survey, ongoing
  • National Jail Census
  • National Judicial Reporting Program
  • National Survey of Jails
  • Survey of Adults on Probation
  • Survey of Inmates of Federal Correctional Facilities
  • Survey of Inmates of Local Jails
  • Survey of Inmates of State Correctional Facilities
  • Federal Bureau of Investigation (FBI) Uniform Crime Reporting (UCR) Program data, including the Supplementary Homicide Reports (SHR)

Social indicators and behavior. There is a series of annual surveys under this heading including the General Social Survey, which has been conducted annually by the National Opinion Research Center since 1972. In addition, Monitoring the Future: A Continuing Study of the Lifestyles and Values of Youth is a survey of a nationally representative sample of high school seniors that asks them for many things, including self-reports of drug and alcohol use and their attitudes toward a number of issues. The National Youth Survey Series (1976–1980 and 1983) is another survey available at ICPSR that examines factors related to delinquency.

Qualitative data sources. Far fewer qualitative datasets are available for secondary analysis, but the number is growing. European countries, particularly England, have been in the forefront of efforts to promote archiving of qualitative data. The United Kingdom’s Economic and Social Research Council established the Qualitative Data Archiving Resource Center at the University of Essex in 1994 (Heaton, 2008). Now part of the Economic and Social Data Service, UK Data Service QualiBank (2014) provides access to data from 888 qualitative research projects. After registering at the UK Data Service site, interview transcripts and other materials from many qualitative studies can be browsed or searched directly online, but access to many studies is restricted to users in the United Kingdom or according to other criteria.

In the United States, the ICPSR collection includes an expanding number of studies containing at least some qualitative data or measures coded from qualitative data (over 500 such studies as of 2014). Studies range from transcriptions of original handwritten and published materials relating to infant and child care from the beginning of the 20th century to World War II (LaRossa, 1995) to transcripts of open-ended interviews with high school students involved in violent incidents (Lockwood, 1996). Harvard University’s Institute for Quantitative Social Science has archived more than 400 studies that contain at least some qualitative data (as of July 2014).

The most unique source of qualitative data available for researchers in the United States is the Human Relations Area Files (HRAF) at Yale University. The HRAF has made anthropological reports available for international cross-cultural research since 1949 and currently contains more than 1,000,000 pages of information on more than 400 different cultural, ethnic, religious, and national groups (Ember & Ember, 2011). If you are interested in cross-cultural research, it is well worth checking out the HRAF and exploring access options (reports can be accessed and searched online by those at affiliated institutions).

The University of Southern Maine’s Center for the Study of Lives (http://usm.maine.edu/olli/national/lifestorycenter/) collects interview transcripts that record the life stories of people of diverse ages and backgrounds. As of July 2014, their collection included transcripts from more than 400 life stories, representing more than 35 different ethnic groups, experiences of historical events ranging from the Great Depression to the Vietnam War, and reports on dealing with problems such as substance abuse. These qualitative data are available directly online without any registration or fee.

Case Study: Police Protection by Neighborhood -- As you can see, the research possibilities are almost limitless with the wealth of data already made available to researchers interested in issues of criminology and criminal justice. Using UCR data on the number of police officers and civilians employed within specific jurisdictions, David Thacher (2011) recently examined whether a jurisdiction’s racial/ethnic distribution and socioeconomic status influenced the degree to which it was protected by these resources. To answer his question, Thacher matched the UCR police data with U.S. Census data. He stated,

With these data on police strength and the composition of the population served by each agency, I am able to describe the distribution of policing by race and class in the same way that the educational literature has analyzed the distribution of educational resources. (Thacher, 2011, p. 283)

To measure poverty, he created a measure called the Concentration Index (CI), which was actually a measure of income inequality. Thacher (2011) found that when police strength was measured as the number of police employees per crime, it varied substantially between rich and poor areas (see Exhibit 9.1 for comparative results). For example, wealthier jurisdictions tended to have significantly more police employees per 100 index crimes compared to poorer areas. This finding runs counter to the contention that cities generally allocate police resources equitably. Thacher (2011) states, “police protection has become more concentrated in the most advantaged communities—those with the highest per-capita incomes and the largest share of white residents” (p. 286). What has changed, Thacher believes, is the crime rates. For example, when police protection per capita (number of individuals in jurisdiction) is examined, police protection has not changed much since 1970 across jurisdictions. However, because crime became more concentrated in the poorest communities during that time, police resources per crime have become less egalitarian. What does this mean in the real world?

Average # of Police Employees Per:

                                                          1,000 Residents   100 Index Crimes   100 Violent Crimes
Richest jurisdictions comprising 5% of U.S. population         2.78              21.48              418.04
Poorest jurisdictions comprising 5% of U.S. population         2.69              11.967             120.01

Source: Adapted from Thacher, D. (2011). The distribution of police protection. Journal of Quantitative Criminology, 27(3), 275–298, Table 1.

The result is a growing workload disparity between rich and poor jurisdictions. In rich jurisdictions, each police officer has responsibility for fewer and fewer crimes over time, while in poor jurisdictions this part of the police workload has either remained constant or grown. (Thacher, 2011, p. 289)
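The per-unit figures in the exhibit above are simple rate computations: police employees divided by a base count (residents or crimes), scaled to a convenient denominator. A minimal Python sketch; the jurisdiction numbers below are hypothetical, chosen only to illustrate the arithmetic, not taken from Thacher's data:

```python
def rate_per(employees, base_count, per=100):
    """Police employees per `per` units of the base count (residents or crimes)."""
    return employees * per / base_count

# Hypothetical jurisdiction
officers = 250        # total police employees
residents = 90_000
index_crimes = 1_200

per_1000_residents = rate_per(officers, residents, per=1000)
per_100_index_crimes = rate_per(officers, index_crimes, per=100)

print(round(per_1000_residents, 2))    # 2.78
print(round(per_100_index_crimes, 2))  # 20.83
```

Thacher's point follows directly from this arithmetic: per-capita strength can stay flat across jurisdictions while per-crime strength diverges, because the denominator (crimes) grew fastest in the poorest communities.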

LOC: Analyzing Content

TIP: What Are Secondary Data?

[LO 2]

COG [2]

DIF [Medium]

  1. What are some of the datasets available from ICPSR that are appropriate for crime research?

Census enumerations: Historical and contemporary population characteristics. The most well-known datasets within this category are the surveys conducted every decade by the U.S. Census Bureau. Linking information from this dataset (e.g., neighborhood characteristics including such things as poverty and residential mobility) to crime data at the same level (e.g., census block, county) has provided researchers with a rich source of data to test theories of crime.

The National Archive of Criminal Justice Data (NACJD). The Bureau of Justice Statistics and National Institute of Justice cosponsored NACJD, which provides more than 600 criminal justice data collections to the public. A sample of these datasets includes the following:

  • Capital Punishment in the United States
  • Expenditure and Employment Data for the Criminal Justice System
  • Gang Involvement in Rock Cocaine Trafficking in Los Angeles, 1984–1985
  • Criminal Careers and Crime Control in Massachusetts
  • Longitudinal Research Design, Phase I, 1940–1965
  • Changing Patterns of Drug Use and Criminality Among Crack Cocaine Users in New York City: Criminal Histories and CJ Processing, 1983–1984, 1986
  • The National Crime Victimization Survey, ongoing
  • National Jail Census
  • National Judicial Reporting Program
  • National Survey of Jails
  • Survey of Adults on Probation
  • Survey of Inmates of Federal Correctional Facilities
  • Survey of Inmates of Local Jails
  • Survey of Inmates of State Correctional Facilities
  • Federal Bureau of Investigation (FBI) Uniform Crime Reporting (UCR) Program data, including the Supplementary Homicide Reports (SHR)

Social indicators and behavior. There is a series of annual surveys under this heading including the General Social Survey, which has been conducted annually by the National Opinion Research Center since 1972. In addition, Monitoring the Future: A Continuing Study of the Lifestyles and Values of Youth is a survey of a nationally representative sample of high school seniors that asks them for many things, including self-reports of drug and alcohol use and their attitudes toward a number of issues. The National Youth Survey Series (1976–1980 and 1983) is another survey available at ICPSR that examines factors related to delinquency.

Qualitative data sources. Far fewer qualitative datasets are available for secondary analysis, but the number is growing. European countries, particularly England, have been at the forefront of efforts to promote archiving of qualitative data. The United Kingdom’s Economic and Social Research Council established the Qualitative Data Archiving Resource Center at the University of Essex in 1994 (Heaton, 2008). Now part of the Economic and Social Data Service, UK Data Service QualiBank (2014) provides access to data from 888 qualitative research projects. After registering at the UK Data Service site, users can browse or search interview transcripts and other materials from many qualitative studies directly online, although access to many studies is restricted to users in the United Kingdom or limited by other criteria.

In the United States, the ICPSR collection includes an expanding number of studies containing at least some qualitative data or measures coded from qualitative data (over 500 such studies as of 2014). Studies range from transcriptions of original handwritten and published materials relating to infant and child care from the beginning of the 20th century to World War II (LaRossa, 1995) to transcripts of open-ended interviews with high school students involved in violent incidents (Lockwood, 1996). Harvard University’s Institute for Quantitative Social Science has archived more than 400 studies that contain at least some qualitative data (as of July 2014).

The most unique source of qualitative data available for researchers in the United States is the Human Relations Area Files (HRAF) at Yale University. The HRAF has made anthropological reports available for international cross-cultural research since 1949 and currently contains more than 1,000,000 pages of information on more than 400 different cultural, ethnic, religious, and national groups (Ember & Ember, 2011). If you are interested in cross-cultural research, it is well worth checking out the HRAF and exploring access options (reports can be accessed and searched online by those at affiliated institutions).

The University of Southern Maine’s Center for the Study of Lives (http://usm.maine.edu/olli/national/lifestorycenter/) collects interview transcripts that record the life stories of people of diverse ages and backgrounds. As of July 2014, their collection included transcripts from more than 400 life stories, representing more than 35 different ethnic groups, experiences of historical events ranging from the Great Depression to the Vietnam War, and reports on dealing with problems such as substance abuse. These qualitative data are available directly online without any registration or fee.

Case Study: Police Protection by Neighborhood

As you can see, the research possibilities are almost limitless with the wealth of data already made available to researchers interested in issues of criminology and criminal justice. Using UCR data on the number of police officers and civilians employed within specific jurisdictions, David Thacher (2011) examined whether a jurisdiction’s racial/ethnic composition and socioeconomic status influenced the degree to which it was protected by these resources. To answer his question, Thacher matched the UCR police data with U.S. Census data. He stated,

With these data on police strength and the composition of the population served by each agency, I am able to describe the distribution of policing by race and class in the same way that the educational literature has analyzed the distribution of educational resources. (Thacher, 2011, p. 283)

To measure poverty, he created a measure called the Concentration Index (CI), which was actually a measure of income inequality. Thacher (2011) found that when police strength was measured as the number of police employees per crime, it varied substantially between rich and poor areas (see Exhibit 9.1 for comparative results). For example, wealthier jurisdictions tended to have significantly more police employees per 100 index crimes than poorer areas did. This finding runs counter to the contention that cities generally allocate police resources equitably. Thacher (2011) states, “police protection has become more concentrated in the most advantaged communities—those with the highest per-capita incomes and the largest share of white residents” (p. 286). What has changed, Thacher believes, is crime rates. When police protection per capita (per resident of the jurisdiction) is examined, it has not changed much since 1970 across jurisdictions. However, because crime became more concentrated in the poorest communities during that time, police resources per crime have become less egalitarian. What does this mean in the real world?

Exhibit 9.1 Police Protection by Race and Class Composition of Jurisdiction, 2011

                                                        Average # of Police Employees Per
                                                        1,000 Residents   100 Index Crimes   100 Violent Crimes
Richest jurisdictions comprising 5% of U.S. population        2.78             21.48              418.04
Poorest jurisdictions comprising 5% of U.S. population        2.69             11.967             120.01

Source: Adapted from Thacher, D. (2011). The distribution of police protection. Journal of Quantitative Criminology, 27(3), 275–298, Table 1.

The result is a growing workload disparity between rich and poor jurisdictions. In rich jurisdictions, each police officer has responsibility for fewer and fewer crimes over time, while in poor jurisdictions this part of the police workload has either remained constant or grown. (Thacher, 2011, p. 289)
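The measures in Exhibit 9.1 are simple ratios of police employees to a base count (residents or crimes). A minimal sketch with hypothetical counts (not Thacher's actual data):

```python
# Sketch: the "per 1,000 residents" and "per 100 index crimes" measures in
# Exhibit 9.1 are simple ratios. All counts below are hypothetical.
def per_rate(police_employees, base_count, per=100):
    """Police employees per `per` units of the base (residents or crimes)."""
    return police_employees / base_count * per

# Hypothetical jurisdiction: 250 police employees, 90,000 residents,
# 1,200 index crimes reported in a year.
per_1000_residents = per_rate(250, 90_000, per=1_000)
per_100_crimes = per_rate(250, 1_200, per=100)
print(round(per_1000_residents, 2), round(per_100_crimes, 2))
```

Because the denominator differs, a jurisdiction can look average on a per-resident basis yet well resourced on a per-crime basis, which is exactly the disparity Thacher describes.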

LOC: Content Analysis

TIP: What Are Secondary Data?

[LO 5]

COG [Comprehension]

DIF [Easy]

  1. What is the NACJD? What types of data are available?
  • Capital Punishment in the United States
  • Expenditure and Employment Data for the Criminal Justice System
  • Gang Involvement in Rock Cocaine Trafficking in Los Angeles, 1984–1985
  • Criminal Careers and Crime Control in Massachusetts
  • Longitudinal Research Design, Phase I, 1940–1965
  • Changing Patterns of Drug Use and Criminality Among Crack Cocaine Users in New York City: Criminal Histories and CJ Processing, 1983–1984, 1986
  • The National Crime Victimization Survey, ongoing
  • National Jail Census
  • National Judicial Reporting Program
  • National Survey of Jails
  • Survey of Adults on Probation
  • Survey of Inmates of Federal Correctional Facilities
  • Survey of Inmates of Local Jails
  • Survey of Inmates of State Correctional Facilities
  • Federal Bureau of Investigation (FBI) Uniform Crime Reporting (UCR) Program data, including the Supplementary Homicide Reports (SHR)

LOC: What Are Secondary Data?

TIP: The National Archive of Criminal Justice Data (NACJD)

[LO 5]

COG [Comprehension]

DIF [Easy]

  1. Describe the steps necessary when performing a content analysis.

Content analysis bears some similarities to qualitative data analysis because it involves coding and categorizing text, discovering relationships among constructs identified in the text, and a statistical analysis of those findings. Content analysis is also similar to secondary data analysis because it involves taking data or text that already exists and subjecting it to a new form of analysis; however, unlike secondary analysis of previously collected quantitative data, content analysis also involves sampling and measurement of primary data. Content analysis techniques can be used with all forms of messages, including visual images, sounds, and interaction patterns as well as written text (Neuendorf, 2002).

Identifying a Population of Documents or Other Textual Sources: The population of documents selected for analysis should be appropriate to the research question of interest. Words or other features of these units are then coded to measure the variables involved in the research question. A content analysis involves the following steps or stages (Weber, 1985).

1. Identify a population of documents or other textual sources for study. This population should be selected for its appropriateness to the research question of interest. Perhaps the population will be all newspapers published in the United States, college student newspapers, nomination speeches at political party conventions, or State of the Nation speeches by national leaders.

2. Determine the units of analysis. These could be newspaper articles, whole newspapers, television episodes, or political conventions.

3. Select a sample of units from the population. The most basic strategy might be a simple random sample of documents. However, a stratified sample might be needed to ensure adequate representation of community newspapers in large and small cities, of weekday and Sunday papers, or of political speeches during election years and in off years (see Chapter 4).

4. Design coding procedures for the variables to be measured. This requires deciding what unit of text to code, such as words, sentences, paragraphs, or newspaper pages. Then, the categories into which the units are to be coded must be defined. These categories may be broad, such as supports democracy, or narrow, such as supports universal suffrage.

5. Test and refine the coding procedures. Clear instructions and careful training of coders are essential.

6. Base statistical analyses on counting occurrences of particular items. These could be words, themes, or phrases. You will also need to test relations between different variables.
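The counting stage (step 6) can be sketched in a few lines of Python. The documents and the coding dictionary below are hypothetical examples, not drawn from any actual study:

```python
# Sketch of the counting stage of a content analysis (step 6).
# The coding dictionary and documents are hypothetical examples.
from collections import Counter

# Hypothetical coding dictionary: category -> indicator words/phrases
coding_dict = {
    "supports_democracy": ["democracy", "suffrage", "free elections"],
    "supports_authority": ["order", "obedience"],
}

documents = [
    "universal suffrage strengthens democracy",
    "public order depends on obedience to law",
    "free elections and democracy go together",
]

counts = Counter()
for doc in documents:
    for category, indicators in coding_dict.items():
        # count occurrences of each indicator term in the document
        counts[category] += sum(doc.count(term) for term in indicators)

print(counts)
```

A real analysis would, of course, inspect the text segments in which these words are embedded rather than relying on raw string matches, as the next paragraph explains.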

Developing reliable and valid coding procedures is not an easy task. The meaning of words and phrases is often ambiguous. As a result, coding procedures cannot simply categorize and count words; text segments in which the words are embedded must also be inspected before codes are finalized. Because different coders may perceive different meanings in the same text segments, explicit coding rules are required to ensure coding consistency. Special dictionaries can be developed to keep track of how the categories of interest are defined in the study (Weber, 1985).

After coding procedures are developed, their reliability should be assessed by comparing different coders’ codes for the same variables. The criteria for judging quantitative content analyses of text reflect the same standards of validity applied to data collected with other quantitative methods. We must review the sampling approach, the reliability and validity of the measures, and the controls used to strengthen any causal conclusions.
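As a sketch, the simplest reliability check is percent agreement between two coders' codes for the same text segments (the codes below are hypothetical; chance-corrected statistics such as Cohen's kappa are preferred in practice):

```python
# Sketch: assessing intercoder reliability as simple percent agreement.
# The two coders' category assignments below are hypothetical.
coder_a = ["violent", "nonviolent", "violent", "violent", "nonviolent"]
coder_b = ["violent", "nonviolent", "nonviolent", "violent", "nonviolent"]

# Proportion of text segments on which the two coders assigned the same code
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = agreements / len(coder_a)
print(percent_agreement)  # coders agree on 4 of 5 segments
```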

LOC: Comparative Methods

TIP: Content Analysis

[LO 3]

COG [Application]

DIF [Medium]

  1. Describe the different research questions crime mapping can answer.

1. It provides visual and statistical analyses of the spatial nature of crime and other events.

2. It allows the linkage of crime data to other data sources, such as census information on poverty or school information, which allows relationships between variables to be established.

3. It provides maps to visually communicate analysis results.

Although applied crime mapping has been used for over 100 years to assist the police in criminal apprehension and crime prevention, the type of crime mapping we will discuss here is related to mapping techniques used for traditional research purposes (e.g., testing theory about the causes of crime), not for investigative purposes. With the advent of computing technology, crime mapping has become an advanced form of statistical data analysis. The geographic information system (GIS) is the software tool that has made crime mapping increasingly available to researchers since the 1990s.

Crime mapping Geographical mapping strategies used to visualize a number of things including location, distance, and patterns of crime and their correlates.

Geographic information system (GIS) The software tool that has made crime mapping increasingly available to researchers since the 1990s.

Today, crime mapping is used by the majority of urban law enforcement agencies to identify crime hot spots. Hot spots are geospatial locations within jurisdictions where crimes are more likely to occur than in other areas. Understanding where crime is more likely to occur helps agencies deploy resources more effectively, especially for crime prevention purposes. These hot spots can be specific addresses, blocks, or even clusters of blocks (Eck, Chainey, Cameron, Leitner, & Wilson, 2005). Of course, combining crime mapping with insight from criminological theory is the ideal. As Eck and his colleagues explain, “Crime theories are critical for useful crime mapping because they aid interpretation of data and provide guidance as to what actions are most appropriate” (Eck et al., 2005, p. 3). This is important because the ability to understand why crimes are occurring has a great deal to do with underlying factors in the environment in which they occur. Kennedy and his colleagues provide a very illuminating example.

A sole analytical focus on crime hot spots is like observing that children frequently play at the same place every day and then calling that place a hot spot for children playing, but without acknowledging the presence of swings, slides, and open fields—features of the place (i.e., suggestive of a playground) that attract children there instead of other locations absent such entertaining features. (Caplan, Kennedy, & Piza, 2013, pp. 245–246)

Through various symbols, maps can communicate a great deal of information. Exhibit 9.2, published by the National Institute of Justice, displays some common symbols used by crime analysts (Eck et al., 2005). On the map, dots (A) point to specific places where crime is likely to occur, a crime site (B and C) indicates an area within which crime is equally likely to occur, and a crime gradient (D) indicates that the probability of crime is highest inside the site and decreases toward its edge.
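At its simplest, hot spot identification boils down to counting incidents per spatial unit, such as a block. A minimal sketch (the incident coordinates and 100-unit cell size below are hypothetical):

```python
# Sketch: identifying a hot spot by counting incidents per grid cell.
# Incident coordinates and the cell size are hypothetical examples.
from collections import Counter

incidents = [(105, 210), (110, 215), (108, 212), (400, 50), (102, 218)]
CELL = 100  # cell width/height in the map's coordinate units

# Bucket each incident into the grid cell containing it
cells = Counter((x // CELL, y // CELL) for x, y in incidents)
hot_spot, count = cells.most_common(1)[0]
print(hot_spot, count)  # the cell with the most incidents
```

GIS software performs this kind of spatial aggregation (and far more sophisticated clustering) over real geocoded incident data.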

LOC: Content Analysis

TIP: Crime Mapping

[LO 4]

COG [Analysis]

DIF [Medium]

  1. How can computer technology allow us to analyze Big Data? What effects has this had on criminal justice-related research? (9-11)

Big Data: A very large dataset (e.g., one containing many thousands of cases), accessible in computer-readable form, that is analyzed with new computer technology to reveal patterns, trends, and associations between variables.

Here are some examples of what now qualifies as Big Data (Mayer-Schönberger & Cukier, 2013): Facebook users upload more than 10 million photos every hour and leave a comment or click on a “like” button almost three billion times per day; YouTube users upload more than an hour of video every second; Twitter users were already sending more than 400 million tweets per day in 2012. If all this and other forms of stored information in the world were printed in books, one estimate in 2013 was that these books would cover the face of the Earth 52 layers thick. That’s “Big.”

All this information would be of no more importance than the number of grains of sand on the beach except that these numbers describe information that is produced by people, available to social scientists, and manageable with today’s computers. Already, Big Data analyses are being used to predict the spread of flu, the behavior of consumers, and crime.

Here’s a quick demonstration: We talked about school shootings in Chapter 1, which are a form of mass murder. We tend to think of mass murder as a relatively recent phenomenon, but you may be surprised to learn that it has been written about for decades. One way to examine inquiries into mass murder is to see how frequently the phrase mass murder has appeared in all the books ever written in the world. It is now possible to answer that question with the click of a mouse, although with two key limitations: we can examine only books written in English and several other languages and, as of 2014, only about one quarter of all books ever published (a mere 30 million books) (Aiden & Michel, 2013, p. 16).

To check this out, go to the Google Ngrams site (https://books.google.com/ngrams), type in mass murder and serial murder, check the “case-insensitive” box, and change the ending year to 2010. Exhibit 9.4 shows the resulting screen (if you don’t obtain a graph, try a different browser). Note that the height of a graph line represents the term’s share of all words in books published in each year, so a rising line means greater relative interest in the term, not simply more books being published. You can see that mass murder emerges in the early 20th century, whereas serial murder did not begin to appear until much later, in the 1980s. It’s hard to stop: you can check other ideas by adding terms, searching in other languages, or shifting to another topic entirely.
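A minimal sketch of the relative-frequency idea behind an Ngram line (the per-year word totals and term counts below are hypothetical toy numbers, not Google's data):

```python
# Sketch: an Ngram-style line plots a term's share of all words per year,
# not its raw count. The per-year numbers below are hypothetical.
yearly_totals = {1950: 2_000_000, 1990: 4_000_000}  # all words printed
term_counts = {1950: 20, 1990: 400}                  # "mass murder" hits

# Percentage of all words in each year accounted for by the term
share = {yr: term_counts[yr] / yearly_totals[yr] * 100 for yr in yearly_totals}
print(share)
```

Here the term's share rises tenfold even though total output only doubled, which is the pattern a rising Ngram line represents.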

Analysis of Big Data is already changing lives. For example, Jeremy Ginsberg and his colleagues (2009) at Google realized they could improve the response to the spread of flu around the world by taking advantage of the fact that about 90 million U.S. adults search online for information about specific illnesses each year. Ginsberg et al. started a collaboration with the U.S. Centers for Disease Control and Prevention (CDC), which collects data every year from about 2,700 health centers about patients’ flu symptoms (Butler, 2013). By comparing this official CDC data with information from the Google searches, Ginsberg and his colleagues were able to develop a Big Data–based procedure for predicting the onset of the flu. How? Rather than having to wait for patients to visit doctors and for the doctors to file reports—as the CDC does—the Google search approach uncovers trends when people first start to experience symptoms and are searching with Google to find out about their symptoms and possible diagnosis. Our next case study illuminates how law enforcement agencies are also harnessing Big Data to make predictions.

Ngrams Frequency graphs produced by Google’s database of all words printed in more than one third of the world’s books over time (with coverage still expanding).

LOC: Crime Mapping

TIP: Big Data

[LO 5]

COG [Application]

DIF [Medium]

  1. Describe the strengths and limitations of conducting secondary data analysis.

Analysis of secondary data presents several challenges, ranging from uncertainty about the methods of data collection to a lack of fit between the concepts measured in the original study and those that are the focus of the current investigation. Responsible use of secondary data requires a good understanding of the primary data source. The researcher should be able to answer the following questions (most of which were adapted from Riedel, 2000, and Stewart, 1984):

1. What were the agency’s goals in collecting the data? If the primary data were obtained in a research project, what were the project’s purposes?

2. Who was responsible for data collection, and what were their qualifications? Are they available to answer questions about the data? Each step in the data collection process should be charted and the personnel involved identified.

3. What data were collected, and what were they intended to measure?

4. When was the information collected?

5. What methods were used for data collection? Copies of the forms used for data collection should be obtained, and the way in which these data are processed by the agency or agencies should be reviewed.

6. How is the information organized (by date, event, etc.)? Are there identifiers that are used to identify the different types of data available on the same case? In what form are the data available (computer tapes, disks, paper files)? Answers to these questions can have a major bearing on the work that will be needed to carry out the study.

7. How consistent are the data with data available from other sources?

8. What is known about the success of the data collection effort? How are missing data indicated? What kind of documentation is available?

Answering these questions helps ensure that the researcher is familiar with the data to be analyzed and can identify any problems with them.

Data quality is always a concern with secondary data, even when the data are collected by an official government agency. The need for concern is much greater in research across national boundaries, because different data-collection systems and definitions of key variables may have been used (Glover, 1996). Census counts can be distorted by incorrect answers to census questions as well as by inadequate coverage of the entire population (Rives & Serow, 1988). Social and political pressures may influence the success of a census in different ways in different countries. These influences on records are particularly acute for crime data. For example, Archer and Gartner (1984) note, “It is possible, of course, that many other nations also try to use crime rate fluctuations for domestic political purposes—to use ‘good’ trends to justify the current administration or ‘bad’ trends to provide a mandate for the next” (p. 16).

Researchers who rely on secondary data inevitably make trade-offs between their ability to use a particular dataset and the specific hypotheses they can test. If a concept that is critical to a hypothesis was not measured adequately in a secondary data source, the study might have to be abandoned until a more adequate source of data can be found. Alternatively, hypotheses or even the research question itself may be modified in order to match the analytic possibilities presented by the available data (Riedel, 2000).

Measuring Across Contexts: One problem that comparative research projects often confront is the lack of data from some historical periods or geographical units (Rueschemeyer, Stephens, & Stephens, 1992; Walters, James, & McCammon, 1997). The widely used U.S. UCR Program did not begin until 1930 (Rosen, 1995). Sometimes alternative sources of documents or estimates for missing quantitative data can fill in gaps (Zaret, 1996), but even when measures can be created for key concepts, multiple measures of the same concepts are likely to be out of the question; as a result, tests of reliability and validity may not be feasible. Whatever the situation, researchers must assess the problem honestly and openly (Bollen, Entwisle, & Alderson, 1993).

Those measures that are available are not always adequate. What remains in the historical archives may be an unrepresentative selection of materials from the past. At various times, some documents could have been discarded, lost, or transferred elsewhere for a variety of reasons. “Original” documents may be transcriptions of spoken words or handwritten pages and could have been modified slightly in the process; they could also be outright distortions (Erikson, 1966; Zaret, 1996). When relevant data are obtained from previous publications, it is easy to overlook problems of data quality, but this simply makes it all the more important to evaluate the primary sources. Developing a systematic plan for identifying relevant documents and evaluating them is very important.

A somewhat more subtle measurement problem is that of establishing measurement equivalence. The meaning of concepts and the operational definition of variables may change over time and between nations or regions (Erikson, 1966). The value of statistics for particular geographic units such as counties may vary over time simply due to change in the boundaries of these units (Walters et al., 1997). As Archer and Gartner (1984) note,

These comparative crime data were recorded across the moving history of changing societies. In some cases, this history spanned gradual changes in the political and social conditions of a nation. In other cases, it encompassed transformations so acute that it seems arguable whether the same nation existed before and after. (p. 15)

Such possibilities should be considered, and any available opportunity should be taken to test for their effects.

A different measurement concern can arise as a consequence of the simplifications made to facilitate comparative analysis. In many qualitative comparative analyses, the values of continuous variables are dichotomized. For example, nations may be coded as democratic or authoritarian. This introduces an imprecise and arbitrary element into the analysis (Lieberson, 1991). On the other hand, for some comparisons, qualitative distinctions such as simple majority rule or unanimity required may capture the important differences between cases better than quantitative distinctions. It is essential to inspect carefully the categorization rules for any such analysis and to consider what form of measurement is both feasible and appropriate for the research question being investigated (King, Keohane, & Verba, 1994).

Sampling Across Time and Place: Although a great deal can be learned from the intensive focus on one nation or another unit, the lack of a comparative element shapes the type of explanations that are developed. Qualitative comparative studies are likely to rely on availability samples or purposive samples of cases. In an availability sample, researchers study a case or multiple cases simply because they are familiar with or have access to them. When using a purposive sampling strategy, researchers select cases because they reflect theoretically important distinctions. Quantitative comparative researchers often select entire populations of cases for which the appropriate measures can be obtained.

When geographic units such as nations are sampled for comparative purposes, it is assumed that the nations are independent of each other in terms of the variables examined. Each nation can then be treated as a separate case for identifying possible chains of causes and effects. However, in a very interdependent world, this assumption may be misplaced; nations may develop as they do because of how other nations are developing (and the same can be said of cities and other units). As a result, comparing the particular histories of different nations may overlook the influence of global culture, international organizations, or economic dependency. These common international influences may cause the same pattern of changes to emerge in different nations; looking within the history of these nations for the explanatory influences would lead to spurious conclusions. The possibility of such complex interrelations should always be considered when evaluating the plausibility of a causal argument based on a comparison between two apparently independent cases (Jervis, 1996).

Identifying Causes: Some comparative researchers use a systematic method for identifying causes, developed by the English philosopher John Stuart Mill (1872), called the method of agreement (see Exhibit 9.6). The core of this approach is the comparison of nations (cases) in terms of their similarities and differences on potential causal variables and on the phenomenon to be explained. For example, suppose three nations that all have high rates of violent crime are compared on four socioeconomic variables hypothesized by different theories to influence violent crime. If the nations differ on three of the variables but are similar on the fourth, this is evidence that the fourth variable influences violent crime.

The features of the cases selected for comparison have a large impact on the ability to identify influences using the method of agreement. Cases should be chosen for their difference in terms of key factors hypothesized to influence the outcome of interest and their similarity on other, possibly confounding factors (Skocpol, 1984). For example, in order to understand how unemployment influences violent crime, you would need to select cases for comparison that differ in unemployment rates so that you could then see if they differ in rates of violence (King et al., 1994).

Exhibit 9.6 John Stuart Mill’s Method of Agreement

Variable    Case 1       Case 2       Case 3
A           Different    Different    Different
B           Different    Same         Same
C           Different    Different    Different
D*          Same         Same         Same
Outcome     Same         Same         Same

* D is considered the cause of the outcome.

Method of agreement A method proposed by John Stuart Mill for establishing a causal relation, in which cases that agree on an outcome variable also agree on the value of the variable hypothesized to have a causal effect while differing on other variables.
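The logic of the method of agreement can be sketched programmatically: among cases that share the same outcome, a candidate cause is a variable whose value also agrees across all cases. The case values below are hypothetical, mirroring the pattern in Exhibit 9.6:

```python
# Sketch of Mill's method of agreement over Exhibit 9.6-style data.
# The case values are hypothetical; all three cases share the outcome.
cases = {
    "Case 1": {"A": 1, "B": 1, "C": 5, "D": 9},
    "Case 2": {"A": 2, "B": 3, "C": 6, "D": 9},
    "Case 3": {"A": 3, "B": 3, "C": 7, "D": 9},
}

# For each variable, collect the set of values it takes across cases;
# a variable that takes only one value agrees across all cases.
values = {v: {case[v] for case in cases.values()} for v in ["A", "B", "C", "D"]}
candidates = [v for v, vals in values.items() if len(vals) == 1]
print(candidates)  # only D takes the same value in every case
```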

LOC: Big Data

TIP: Methodological Issues When Using Secondary Data

[LO 6]

COG [Application]

DIF [Medium]

  1. What are the ethical concerns when dealing with secondary data? (9-17)

Freedom of Information Act (FOIA): This federal law stipulates that all persons have a right to access all federal agency records unless the records are specifically exempted.

Subject confidentiality is a key concern when original records are analyzed. Whenever possible, all information that could identify individuals should be removed from the records to be analyzed so that no link is possible to the identities of living subjects or the living descendants of subjects (Huston & Naylor, 1996). When you use data that have already been archived, you need to find out what procedures were used to preserve subject confidentiality. The work required to ensure subject confidentiality probably will have been done for you by the data archivist. For example, the ICPSR carefully examines all data deposited in the archive for disclosure risk. Data that might be used to identify respondents are altered to ensure confidentiality, including removal of information such as birth dates or service dates, specific incomes, or place of residence that could be used to identify subjects indirectly (see http://www.icpsr.umich.edu/icpsrweb/content/datamanagement/confidentiality/). If identifying information cannot be removed from a dataset without diminishing its quality (such as by preventing links to other essential data records), ICPSR restricts access to the data and requires that investigators agree to conditions of use that preserve subject confidentiality.

It is not up to you to decide whether there are any issues of concern regarding human subjects when you acquire a dataset for secondary analysis from a responsible source. The institutional review board (IRB) for the protection of human subjects at your college, university, or other institution has the responsibility to decide whether it needs to review and approve proposals for secondary data analysis. The federal regulations are not entirely clear on this point, so the acceptable procedures will vary among institutions based on what their IRBs have decided.

Ethical concerns are multiplied when surveys are conducted or other data are collected in other countries. If the outside researcher lacks much knowledge of local norms, values, and routine activities, the potential for inadvertently harming subjects is substantial. For this reason, cross-cultural researchers should spend time learning about each of the countries in which they plan to collect primary data and establish collaborations with researchers in those countries (Hantrais & Mangen, 1996). Local advisory groups may also be formed in each country so that a broader range of opinion is solicited when key decisions must be made. Such collaboration can also be invaluable when designing instruments, collecting data, and interpreting results.

LOC: Methodological Issues When Using Secondary Data

TIP: Ethical Issues When Analyzing Available Data and Content

[LO 6]

COG [Analysis]

DIF [Medium]

  1. Why are there fewer qualitative datasets available for secondary data analysis than quantitative datasets? Is that changing? Why? (9-2)

In the United States, the ICPSR collection includes an expanding number of studies containing at least some qualitative data or measures coded from qualitative data (over 500 such studies as of 2014). Studies range from transcriptions of original handwritten and published materials relating to infant and child care from the beginning of the 20th century to World War II (LaRossa, 1995) to transcripts of open-ended interviews with high school students involved in violent incidents (Lockwood, 1996). Harvard University’s Institute for Quantitative Social Science has archived more than 400 studies that contain at least some qualitative data (as of July 2014).

The most unique source of qualitative data available for researchers in the United States is the Human Relations Area Files (HRAF) at Yale University. The HRAF has made anthropological reports available for international cross-cultural research since 1949 and currently contains more than 1,000,000 pages of information on more than 400 different cultural, ethnic, religious, and national groups (Ember & Ember, 2011). If you are interested in cross-cultural research, it is well worth checking out the HRAF and exploring access options (reports can be accessed and searched online by those at affiliated institutions).

The University of Southern Maine’s Center for the Study of Lives (http://usm.maine.edu/olli/national/lifestorycenter/) collects interview transcripts that record the life stories of people of diverse ages and backgrounds. As of July 2014, their collection included transcripts from more than 400 life stories, representing more than 35 different ethnic groups, experiences of historical events ranging from the Great Depression to the Vietnam War, and reports on dealing with problems such as substance abuse. These qualitative data are available directly online without any registration or fee.

LOC: Analyzing Content

TIP: What Are Secondary Data?

[LO 2]

COG [Comprehension]

DIF [Medium]

  1. Describe research by Duwe, Donnay, and Tewksbury (2008) regarding the effects of Minnesota’s residency restriction statute on recidivism of registered sex offenders. (9-10)

Duwe and his colleagues (2008) investigated four criteria to classify a reoffense as preventable: (a) the means by which offenders established contact with their victims, (b) the distance between an offender’s residence and where first contact was established (i.e., 1,000 feet; 2,500 feet; one mile), (c) the type of location where contact was established (e.g., Was it a place where children congregated?), and (d) whether the victim was under the age of 18. To be classified as preventable through housing restrictions, an offense had to meet certain criteria. For example, the offender would have had to establish direct contact with a juvenile victim within one mile of his or her residence at a place where children congregate (e.g., park, school). Results indicated that the majority of offenders, as in most cases of sexual violence, victimized someone they already knew. Only 35% of the sex offender recidivists established new direct contact with a victim, but these victims were more likely to be adults than children, and the contact usually occurred more than a mile away from the offender’s residence. Of the few offenders who directly established new contact with a juvenile victim within close proximity of their residence, none did so near a school, a park, a playground, or other locations included in residential restriction laws.
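The classification described above amounts to a simple boolean rule. The sketch below illustrates that logic; the field names and the one-mile threshold are illustrative assumptions drawn from the example criteria, not the authors' actual coding scheme.

```python
# Hedged sketch of the "preventable through housing restrictions"
# classification described above. Field names and the one-mile
# threshold are illustrative assumptions, not Duwe et al.'s coding.

def preventable_by_residency_restriction(offense):
    """Would a housing restriction plausibly have prevented this reoffense?"""
    return (
        offense["contact"] == "direct"            # new direct contact
        and offense["victim_age"] < 18            # juvenile victim
        and offense["miles_from_residence"] <= 1.0  # within one mile
        and offense["child_congregation_site"]    # e.g., park or school
    )

example = {
    "contact": "direct",
    "victim_age": 15,
    "miles_from_residence": 0.4,
    "child_congregation_site": True,
}
print(preventable_by_residency_restriction(example))  # prints True
```

The study's finding follows directly from this rule: because almost no recidivists satisfied all four conditions at once, very few reoffenses were classifiable as preventable by residency restrictions.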

LOC: Crime Mapping

TIP: Case Study: Using Google Earth to Track Sexual Offending Recidivism

[LO 4]

COG [Synthesis]

DIF [Hard]

  1. What is Risk Terrain Modeling (RTM)? How is it used? (9-12)

As we just highlighted, crime mapping allows law enforcement agencies to estimate where hot spots of crime are occurring, that is, where crime has been most likely to occur in the past. Joel Caplan and Leslie Kennedy from the Rutgers School of Criminal Justice have pioneered a new way to forecast crime using Big Data, called Risk Terrain Modeling (RTM) (Kennedy, Caplan, & Piza, 2012). Using data from several sources, this modeling predicts the probability of crime occurring in the future using the underlying factors of the environment that are associated with illegal behavior. The important difference between RTM and regular crime mapping is that RTM takes into account the features of an area that enable criminal behavior.

The process weights these factors, which are the independent variables, and places them into a final model that produces a map of places where criminal behavior is most likely to occur. In this way, the predicted probability of future crime is the dependent variable. This modeling is essentially spatial risk analysis carried out in a more sophisticated way than the early maps of the Chicago School. Kennedy and his colleagues (2012) explain:

Operationalizing the spatial influence of a crime factor tells a story, so to speak, about how that feature of the landscape affects behaviors and attracts or enables crime occurrence at places near to and far away from the feature itself. When certain motivated offenders interact with suitable targets, the risk of crime and victimization conceivably increases. But, when motivated offenders interact with suitable targets at certain places, the risk of criminal victimization is even higher. Similarly, when certain motivated offenders interact with suitable targets at places that are not conducive to crime, the risk of victimization is lowered. (p. 24)

Using data from many sources, RTM statistically computes the probability of particular kinds of criminal behavior occurring in a place. For example, Exhibit 9.5 displays a Risk Terrain Map that was produced for Irvington, New Jersey. From the map, you can see that several variables were included in the model predicting the potential for shootings to occur, including the presence of gangs and drugs along with other infrastructure information such as the location of bars and liquor stores. Why were these some of the factors used? Because previous research and police data indicated that shootings were more likely to occur where gangs, drugs, and these businesses were present. This does not mean that a shooting will occur in the high-risk areas; it only means that it is more likely to occur in these areas compared to other areas. RTM is considered Big Data because it examines multiple datasets that share geographic location as a common denominator.
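The weighted-factor logic described above can be sketched as a composite of raster layers: each risk factor becomes a layer of grid cells, the layers are weighted, and the sum gives the risk surface. The factor names, weights, and grid values below are hypothetical illustrations, not values from the Irvington model.

```python
# Minimal sketch of RTM's core computation. Each environmental factor
# is a raster layer of 0/1 cell values (1 = the factor's spatial
# influence reaches that cell); the composite risk map is a weighted
# sum of the layers. All names, weights, and grids are hypothetical.

# 3 x 3 study-area grid; one binary layer per risk factor
layers = {
    "gang_territory": [[1, 1, 0], [0, 1, 0], [0, 0, 0]],
    "drug_arrests":   [[1, 0, 0], [1, 1, 0], [0, 0, 0]],
    "bars_liquor":    [[0, 1, 0], [0, 1, 1], [0, 0, 1]],
}

# Relative weights for each factor (assumed, e.g., from prior research)
weights = {"gang_territory": 0.5, "drug_arrests": 0.3, "bars_liquor": 0.2}

def risk_map(layers, weights, rows=3, cols=3):
    """Combine weighted factor layers into one composite risk surface."""
    risk = [[0.0] * cols for _ in range(rows)]
    for name, layer in layers.items():
        w = weights[name]
        for r in range(rows):
            for c in range(cols):
                risk[r][c] += w * layer[r][c]
    return risk

surface = risk_map(layers, weights)

# The highest-risk cell is where the most heavily weighted factors overlap.
hottest = max((surface[r][c], (r, c)) for r in range(3) for c in range(3))
print(hottest)
```

As in RTM proper, a high composite score does not mean a shooting will occur in that cell; it only means the environmental conditions associated with shootings are most concentrated there.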

Risk Terrain Modeling (RTM) Uses data from several sources to predict the probability of crime occurring in the future using the underlying factors of the environment that are associated with illegal behavior.

LOC: Big Data

TIP: Case Study: Predicting Where Crime Will Occur

[LO 5]

COG [Analysis]

DIF [Medium]

  1. What problems do researchers confront when dealing with qualitative comparative research across contexts? (9-4)

1. Research that seeks to understand the structure, nature, or scope of a nation’s or nations’ criminal justice systems or rates of crime is descriptive comparative research.

2. Research that seeks to understand how national systems work and the factors related to their operations is analytic comparative research.

There is also variability in the scope of comparative research. Studies can examine crime patterns in single nations, make a comparison across several nations, or conduct transnational research, which generally explores how cultures and nations deal with crime that transcends their borders. Investigating terrorism is one emerging form of transnational research. Bennett (2004) notes,

One of the outcomes of the terrorist attacks in 2001 was a shocking awareness that terrorism is international and inextricably tied to transnational criminal activity.... We need to understand how criminal and terrorist organizations fund themselves and exploit our inability to link and analyze criminal activity that transcends national borders. (p. 8)

Although comparative methods are often associated with cross-national comparisons, research examining smaller aggregates such as states and cities can also be subsumed under the comparative research umbrella. Comparative research methods allow for a broader vision about social relations than is possible with cross-sectional research limited to one location.

Comparative research is research comparing data from more than one time period and/or more than one nation.

Descriptive comparative research is research that seeks to understand the structure, nature, or scope of a nation’s or nations’ criminal justice systems or rates of crime.

Analytic comparative research is research that seeks to understand how national systems work and the factors related to their operations.

Transnational research explores how cultures and nations deal with crime that transcends their borders.

LOC: What Are Secondary Data?

TIP: Comparative Methods

[LO 1]

COG [Comprehension]

DIF [Medium]

  1. What is the systematic method for identifying causes developed by English philosopher John Stuart Mill (1872)? Describe the approach. (9-16)

The features of the cases selected for comparison have a large impact on the ability to identify influences using the method of agreement. Cases should be chosen for their difference in terms of key factors hypothesized to influence the outcome of interest and their similarity on other, possibly confounding factors (Skocpol, 1984). For example, in order to understand how unemployment influences violent crime, you would need to select cases for comparison that differ in unemployment rates so that you could then see if they differ in rates of violence (King et al., 1994).

Exhibit 9.6 John Stuart Mill’s Method of Agreement

Variable   Case 1      Case 2      Case 3
A          Different   Different   Different
B          Different   Same        Same
C          Different   Different   Different
D*         Same        Same        Same
Outcome    Same        Same        Same

*D is considered the cause of the outcome.

Source: Adapted from Skocpol, T. (1984). Emerging agendas and recurrent strategies in historical sociology, p. 379. In T. Skocpol (Ed.), Vision and method in historical sociology (pp. 356–391). New York, NY: Cambridge University Press.

Method of agreement A method proposed by John Stuart Mill for establishing a causal relation in which the values of cases that agree on an outcome variable also agree on the value of the variable hypothesized to have a causal effect, whereas they differ in terms of other variables.
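The logic of Exhibit 9.6 can be sketched in code: among cases that agree on the outcome, the candidate cause is the variable on which all cases also agree while the remaining variables vary. The case data below simply reproduce the pattern of the exhibit with hypothetical values; they are not from any real study.

```python
# Sketch of Mill's method of agreement, mirroring Exhibit 9.6:
# among cases sharing the same outcome, the hypothesized cause is the
# variable whose value also agrees across all cases, while the other
# variables take differing values. All case values are hypothetical.

cases = [
    {"A": "a1", "B": "b1", "C": "c1", "D": "d", "outcome": "same"},
    {"A": "a2", "B": "b2", "C": "c2", "D": "d", "outcome": "same"},
    {"A": "a3", "B": "b2", "C": "c3", "D": "d", "outcome": "same"},
]

def method_of_agreement(cases):
    """Return the variables whose values agree across all cases that
    agree on the outcome (the candidate causes under Mill's method)."""
    # The method only applies when the cases agree on the outcome.
    if len({case["outcome"] for case in cases}) != 1:
        raise ValueError("cases must agree on the outcome")
    variables = [v for v in cases[0] if v != "outcome"]
    return [
        v for v in variables
        if len({case[v] for case in cases}) == 1  # same value in every case
    ]

print(method_of_agreement(cases))  # prints ['D'], as in Exhibit 9.6
```

Note that, as the surrounding discussion stresses, the method only yields a plausible candidate: its value depends entirely on selecting cases that differ on the possible confounders (A, B, and C here) while agreeing on the outcome.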

LOC: Methodological Issues When Using Secondary Data

TIP: Identifying Causes

[LO 4]

COG [Synthesis]

DIF [Hard]
