
Chapter 10: Evaluation and Policy Analysis

Test Bank

MULTIPLE CHOICE

1. Research that has an impact on policy is known as (10-1)
A. Exploratory research
B. Descriptive research
C. Explanatory research
D. Applied research

LOC: Evaluation and Policy Analysis

TIP: Evaluation and Policy Analysis

[LO 1]

COG [Knowledge]

DIF [Easy]

2. Evaluation research is conducted for the purpose of (10-2)
A. explaining why variables affect one another
B. describing a population
C. investigating social programs
D. systematically assessing the usefulness of exploratory research

LOC: Evaluation and Policy Analysis

TIP: A Brief History of Evaluation Research

[LO 2]

COG [Knowledge]

DIF [Easy]

3. Today, professional evaluation researchers have realized that (10-2)
A. It is more than enough to simply perform experiments to see if a program works
B. It is not enough to perform rigorous experiments to determine program efficacy
C. They are responsible for making sure that their results can be understood and utilized by practitioners so that they can make decisions about keeping or ending programs
D. Both B and C are true

LOC: Evaluation and Policy Analysis

TIP: A Brief History of Evaluation Research

[LO 2]

COG [Comprehension]

DIF [Easy]

4. The impact of the program process on the cases processed is known as (10-3)
A. Incomes
B. Inputs
C. Outcomes
D. Program process

LOC: Evaluation and Policy Analysis

TIP: A Brief History of Evaluation Research

[LO 3]

COG [Application]

DIF [Medium]

5. Resources, raw materials, clients, and staff that go into a program are known as (10-3)
A. Incomes
B. Inputs
C. Outcomes
D. Program process

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 3]

COG [Knowledge]

DIF [Medium]

6. Program process refers to the (10-3)
A. Inputs that go into a program
B. Direct products of the program’s service delivery
C. Complete treatment or service delivered by the program
D. All of the above

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 3]

COG [Knowledge]

DIF [Easy]

7. Outputs are the (10-3)
A. Direct products of the program’s service delivery
B. Complete treatment or service delivered by the program
C. Resources and raw materials that go into a program
D. Services delivered or new products produced by the program process

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 3]

COG [Knowledge]

DIF [Easy]

8. Information about service delivery system outputs, outcomes, or operations is (10-4)
A. Looped information
B. Feedback
C. Stakeholding information
D. Confidential

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 3]

COG [Knowledge]

DIF [Easy]

9. Stakeholders are (10-4)
A. Individuals and groups who have some basis of concern with the program
B. People who have no connection to a program
C. Clients, staff, managers, or funders of the program
D. Only A and C

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 3]

COG [Knowledge]

DIF [Easy]

10. Evaluation research and traditional social science are different because (10-4)
A. Evaluation is not designed to test the implications of a social theory
B. The goal of evaluation research is not to create a broad theoretical explanation for what is found
C. Researchers cannot design evaluation studies simply in accord with the highest scientific standards
D. All of the above

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 7]

COG [Comprehension]

DIF [Medium]

11. The goal of evaluation research is (10-5)
A. Different from the goal of other types of social science research
B. Primarily the same as the goal for all social science research
C. To cut as many programs as possible in order to save the government millions of dollars
D. None of the above

LOC: Evaluation Basics

TIP: Evaluation Alternatives

[LO 2]

COG [Comprehension]

DIF [Medium]

12. The focus of evaluation projects includes (10-5)
A. Discovering the program’s impact
B. Understanding how efficient the program is
C. Both A and B
D. Neither A nor B

LOC: Evaluation Basics

TIP: Evaluation Alternatives

[LO 1]

COG [Application]

DIF [Medium]

13. A needs assessment attempts to discover (10-5)
A. The demographics of a population in order to decide if a program is necessary
B. Whether a population actually needs to have a particular program
C. Both A and B
D. Neither A nor B

LOC: Evaluation Alternatives

TIP: Do We Need the Program?

[LO 1]

COG [Analysis]

DIF [Hard]

14. A type of evaluation research that is conducted to determine whether it is feasible to evaluate a program’s effects with available time and resources is known as (10-6)
A. An assessment evaluation
B. An evaluability assessment
C. A needs assessment
D. An operational assessment

LOC: Evaluation Alternatives

TIP: Can the Program Be Evaluated?

[LO 1]

COG [Analysis]

DIF [Hard]

15. Once a program has been started, it is important to determine whether the program is reaching the target individuals or groups and to discover if the program is actually operating as expected. This is called (10-6)
A. An assessment evaluation
B. A needs assessment
C. A process evaluation
D. An evaluability assessment

LOC: Evaluation Alternatives

TIP: Is the Program Working as Planned?

[LO 3]

COG [Application]

DIF [Hard]

16. The U.S. Department of Justice (DOJ) Project Safe Neighborhoods provided funding to 12 relatively large cities in order to (10-7)
A. Develop and implement a comprehensive anti-gang initiative (CAGI)
B. Develop and implement a participant observation study of neighborhoods
C. Continue a comprehensive research project of inner city neighborhoods
D. All of the above

LOC: Evaluation Alternatives

TIP: Case Study: Process Evaluation of an Anti-Gang Initiative

[LO 3]

COG [Application]

DIF [Hard]

17. The analysis of the extent to which a treatment or other service has the intended effect is a(n) (10-8)
A. Program evaluation
B. Resource evaluation
C. Outcome evaluation
D. Impact evaluation

LOC: Evaluation Alternatives

TIP: Did the Program Work?

[LO 3]

COG [Analysis]

DIF [Hard]

18. The bulk of the published evaluation studies in criminal justice are devoted to some type of (10-8)
A. Participant observation research
B. Impact assessment
C. Outcome assessment
D. Quantitative research

LOC: Evaluation Alternatives

TIP: Did the Program Work?

[LO 3]

COG [Evaluation]

DIF [Hard]

19. Which type of research is the preferred method for maximizing internal validity? (10-9)
A. Surveys
B. Experiments
C. Participant observation
D. Evaluation

LOC: Evaluation Alternatives

TIP: Did the Program Work?

[LO 6]

COG [Evaluation]

DIF [Hard]

20. If current research participants who are already in a program are compared with nonparticipants (10-9)
A. It is likely that the treatment group will be comparable to the control group
B. It is probable that the control group will not be comparable to the treatment group
C. It is unlikely that the treatment group will be comparable to the control group
D. These two groups cannot be compared in research

LOC: Evaluation Alternatives

TIP: Did the Program Work?

[LO 6]

COG [Application]

DIF [Hard]

21. A type of evaluation research that compares program costs to program effects is (10-10)
A. A cost-benefit analysis
B. A cost-effectiveness analysis
C. Both A and B
D. Neither A nor B

LOC: Evaluation Alternatives

TIP: Was the Program Worth It?

[LO 1]

COG [Application]

DIF [Hard]

22. This type of evaluation occurs when an evaluation of program outcomes ignores the process by which the program produced the effect (10-11)
A. Effective box evaluation
B. Black box evaluation
C. Hidden box evaluation
D. Midnight blue box evaluation

LOC: Design Decisions

TIP: Black Box Evaluation or Program Theory

[LO 4]

COG [Analysis]

DIF [Medium]

23. An approach that encourages researchers to be responsive to program stakeholders is (10-12)
A. Utilization-focused evaluation
B. Politically based evaluation
C. Supportive relation evaluation
D. Action-focused evaluation

LOC: Design Decisions

TIP: Researcher or Stakeholder Orientation

[LO 5]

COG [Analysis]

DIF [Hard]

24. When one or more elements of a true experimental design are absent, the design is known as a (10-16)
A. Nonprobability study
B. Quasi-experimental design
C. Randomly assigned control
D. None of the above

LOC: Strengths of Randomized Experimental Designs in Impact Evaluation

TIP: When Experiments Are Not Feasible

[LO 6]

COG [Comprehension]

DIF [Hard]

25. The most powerful alternatives to true randomized experimental designs are (10-16)
A. Participant observation design
B. Nonequivalent control group design
C. Time series design
D. Both B and C

LOC: Strengths of Randomized Experimental Designs in Impact Evaluations

TIP: When Experiments Are Not Feasible

[LO 7]

COG [Application]

DIF [Hard]

26. When a researcher is not able to randomly assign participants to experimental and control conditions, they may use a(n) (10-17)
A. Nonequivalent control group design
B. Participant observation design
C. Time series design
D. Counterfactual design

LOC: Strengths of Randomized Experimental Designs in Impact Evaluations

TIP: When Experiments Are Not Feasible

[LO 6]

COG [Application]

DIF [Medium]

27. Policy research is a (10-19)
A. Method rather than a process
B. Information method to understand a process
C. Process rather than a method
D. Summary method that summarizes the process

LOC: Qualitative and Quantitative Methods

TIP: Increasing Demand for Evidence-Based Policy

[LO 7]

COG [Comprehension]

DIF [Easy]

28. The direct impact of evaluation research on research participants (10-21)
A. Lessens the attention that evaluation researchers have to give to human subjects concerns
B. Heightens the attention that evaluation researchers have to give to human subjects concerns
C. Should not affect human subjects concerns
D. None of the above

LOC: Qualitative and Quantitative Methods

TIP: Ethics in Evaluation

[LO 7]

COG [Analysis]

DIF [Medium]

29. How can qualitative methods add to quantitative evaluation research studies? (10-18)
A. By adding more depth, detail, and nuance
B. By helping to add understanding of how social programs actually operate
C. The more complex the social program, the more value qualitative methods add to the evaluation process
D. All of the above

LOC: When Experiments Are Not Feasible

TIP: Qualitative and Quantitative Methods

[LO 1]

COG [Comprehension]

DIF [Medium]

30. The time series research conducted by Bye in 2008 (10-17)
A. Examined drug abuse and juvenile delinquency between 1950 and 2000
B. Examined per capita alcohol consumption and homicide rate data between the 1950s and 2002
C. Examined alcohol use and violence in western European countries
D. None of the above

LOC: When Experiments Are Not Feasible

TIP: Case Study of a Time Series Design: Drinking and Homicide in Eastern Europe

[LO 1]

COG [Analysis]

DIF [Medium]

TRUE/FALSE

1. Policy research is a method of data collection. (10-19)
A. True
B. False

LOC: Qualitative and Quantitative Methods

TIP: Increasing Demand for Evidence-Based Policy

[LO 1]

COG [Knowledge]

DIF [Easy]

2. Evaluation research is conducted for a distinctive purpose: to investigate social programs. (10-2)
A. True
B. False

LOC: Evaluation and Policy Analysis

TIP: A Brief History of Evaluation Research

[LO 1]

COG [Knowledge]

DIF [Easy]

3. One of the main initiatives that spawned the growth of evaluation research in the U.S. was the “War on Poverty” that was part of the Great Society legislation of the 1960s. (10-2)
A. True
B. False

LOC: Evaluation and Policy Analysis

TIP: A Brief History of Evaluation Research

[LO 1]

COG [Comprehension]

DIF [Easy]

4. The impact of program process on the cases processed is known as outcomes. (10-3)
A. True
B. False

LOC: Evaluation and Policy Analysis

TIP: A Brief History of Evaluation Research

[LO 1]

COG [Application]

DIF [Medium]

5. The direct products of a program’s service delivery process are its outputs. (10-3)
A. True
B. False

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 2]

COG [Comprehension]

DIF [Easy]

6. Evaluation research may be undertaken to identify ways to improve delivery of services. (10-5)
A. True
B. False

LOC: Evaluation Basics

TIP: Evaluation Alternatives

[LO 1]

COG [Application]

DIF [Medium]

7. A needs assessment can attempt to discover the need for a new program in a community. (10-5)
A. True
B. False

LOC: Evaluation Alternatives

TIP: Do We Need the Program?

[LO 1]

COG [Analysis]

DIF [Easy]

8. When a researcher attempts to discover if a program worked as intended, he is employing an impact evaluation. (10-8)
A. True
B. False

LOC: Evaluation Alternatives

TIP: Did the Program Work?

[LO 1]

COG [Evaluation]

DIF [Medium]

9. The design used by D’Amico and Fromme was a true experimental design. (10-9)
A. True
B. False

LOC: Evaluation Alternatives

TIP: Case Study: The Risk Skills Training Program (RSTP) Compared With Drug Abuse Resistance Education-Abbreviated (DARE-A)

[LO 1]

COG [Evaluation]

DIF [Hard]

10. An efficiency analysis compares a program’s effects with its costs. (10-10)
A. True
B. False

LOC: Evaluation Alternatives

TIP: Is the Program Worth It?

[LO 7]

COG [Analysis]

DIF [Medium]

11. A cost-benefit analysis is a type of evaluation research that compares program costs to the economic value of program benefits. (10-10)
A. True
B. False

LOC: Evaluation Alternatives

TIP: Is the Program Worth It?

[LO 7]

COG [Analysis]

DIF [Medium]

12. A black box evaluation is guided by a theory that specifies the process of a program and its effectiveness. (10-11)
A. True
B. False

LOC: Design Decisions

TIP: Black Box Evaluation or Program Theory

[LO 4]

COG [Comprehension]

DIF [Medium]

13. Stakeholder approaches encourage researchers not to be responsive to participants. (10-12)
A. True
B. False

LOC: Design Decisions

TIP: Researcher or Stakeholder Orientation

[LO 4]

COG [Application]

DIF [Medium]

14. Data obtained from a true experimental design provide the best way to determine whether the criteria for establishing causality have been met. (10-15)
A. True
B. False

LOC: Evaluation in Action

TIP: Strengths of Randomized Experimental Designs in Impact Evaluations

[LO 6]

COG [Comprehension]

DIF [Easy]

15. It is always possible to use randomized experiments to evaluate the impacts of a program. (10-16)
A. True
B. False

LOC: Strengths of Randomized Experimental Designs in Impact Evaluations

TIP: When Experiments Are Not Feasible

[LO 6]

COG [Application]

DIF [Medium]

16. A nonequivalent control group design is a quasi-experimental design in which experimental and comparison groups are designated before the treatment occurs but are not created by random assignment. (10-17)
A. True
B. False

LOC: When Experiments Are Not Feasible

TIP: Case Study of Nonequivalent Control Group Design: Decreasing Injuries From Police Use of Force

[LO 6]

COG [Comprehension]

DIF [Medium]

17. Policy research is a process rather than a method. (10-19)
A. True
B. False

LOC: Qualitative and Quantitative Methods

TIP: Increasing Demand for Evidence-Based Policy

[LO 2]

COG [Knowledge]

DIF [Easy]

18. Qualitative methods do not mix well with quantitative methods. (10-18)
A. True
B. False

LOC: When Experiments Are Not Feasible

TIP: Qualitative and Quantitative Methods

[LO 6]

COG [Comprehension]

DIF [Medium]

19. In general, quasi-experimental designs are much less powerful alternatives to true randomized experimental designs. (10-16)
A. True
B. False

LOC: Strengths of Randomized Experimental Designs in Impact Evaluations

TIP: When Experiments Are Not Feasible

[LO 5]

COG [Comprehension]

DIF [Medium]

20. Problem-oriented policing strategies are increasingly used by urban jurisdictions to reduce crime in high-activity crime places. (10-13)
A. True
B. False

LOC: Evaluation in Action

TIP: Case Study: Problem-Oriented Policing in Violent Crime Areas—A Randomized Controlled Experiment

[LO 3]

COG [Analysis]

DIF [Hard]

ESSAY

  1. Why is a true experimental design the strongest method for conducting impact evaluations?

To evaluate the efficacy of the RSTP and DARE-A programs in reducing drinking and drug use, D’Amico and Fromme (2002) randomly selected 150 students to participate in their study. Students were then randomly assigned to one of the three conditions: 75 students received RSTP programming, 75 students received the DARE-A programming, and another 150 students were randomly selected to participate but received no programming. The students received a pretest assessment and then posttest assessments at both two and six months after the programs ended.

The impacts (dependent variables) D’Amico and Fromme (2002) examined included positive and negative “alcohol expectancies” (the anticipated effects of drinking) as well as perceptions of peer risk taking and actual alcohol consumption. D’Amico and Fromme found that negative alcohol expectancies increased for the RSTP group in the posttest but not for the DARE-A group or the control group, while weekly drinking and “positive expectancies” for drinking outcomes actually increased for the DARE-A group and/or the control group by the six-month follow-up but not for the RSTP group (see Exhibit 10.3).

You should recognize the design used by D’Amico and Fromme as a true experimental design (see Chapter 6). This is the preferred method for maximizing internal validity—that is, for making sure your causal claims about program impact are justified. Cases are assigned randomly to one or more experimental treatment groups and to a control group so that there is no systematic difference between the groups at the outset. The goal is to achieve a fair, unbiased test of the program itself so that the judgment about the program’s impact is not influenced by differences between the types of people who are in the different groups. It can be a difficult goal to achieve, because the usual practice in social programs is to let people decide for themselves whether they want to enter a program and also to establish eligibility criteria that ensure that people who enter the program are different from those who do not (Boruch, 1997). In either case, a selection bias is introduced.
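The mechanics of random assignment can be made concrete with a short sketch. The following Python fragment is illustrative only, not code from D’Amico and Fromme’s study; the participant IDs, group labels, group sizes, and seed are all hypothetical.

    import random

    def randomly_assign(participants, groups, seed=None):
        """Shuffle the participant pool, then deal members into groups round-robin."""
        rng = random.Random(seed)
        pool = list(participants)
        rng.shuffle(pool)  # random order breaks any link between traits and group
        assignment = {g: [] for g in groups}
        for i, person in enumerate(pool):
            assignment[groups[i % len(groups)]].append(person)
        return assignment

    # Hypothetical use: 300 student IDs dealt into three conditions of 100 each
    conditions = ("RSTP", "DARE-A", "control")
    result = randomly_assign(range(300), conditions, seed=42)
    print({g: len(members) for g, members in result.items()})

Because group membership depends only on the shuffle, any pre-existing differences among participants are spread across conditions by chance alone, which is what justifies attributing posttest differences to the programs rather than to selection.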

Impact analysis is an important undertaking that fully deserves the attention it has been given in government program funding requirements. However, you should realize that more rigorous evaluation designs are less likely to conclude that a program has the desired effect; as the standard of proof goes up, success is harder to demonstrate.

LOC: Evaluation Alternatives

TIP: Case Study: The Risk Skills Training Program (RSTP) Compared With Drug Abuse Resistance Education-Abbreviated (DARE-A)

[LO 6]

COG [Analysis]

DIF [Medium]

2. Define and explain each of the types of evaluation research. When is each appropriate?

Evaluation research may be undertaken for a variety of reasons: for management and administrative purposes, to test hypotheses derived from theory, to identify ways to improve the delivery of services, or to decide whether to continue, cut, or modify a particular program. The goal of evaluation research, however, is primarily the same as the goal for all social science research: to design and implement a study that is objective and grounded in the rules of scientific methodology. These methods run the gamut of the methods we have discussed in this text. They can range from the strictly quantitative experimental and quasi-experimental designs to the qualitative methodologies of observation and intensive interviewing.

Evaluation projects can focus on several questions related to the operation of social programs and the impact they have:

  • Is the program needed? (evaluation of need)
  • Can the program be evaluated? (evaluability assessment)
  • How does the program operate? (evaluation of process)
  • What is the program’s impact? (evaluation of impact)
  • How efficient is the program? (evaluation of efficiency)

The specific methods used in an evaluation research project depend, in part, on which of these questions is being addressed.

Do We Need the Program? Is a new program needed or is an old one still required? Is there a need at all? A needs assessment attempts to answer these questions with systematic, credible evidence. The initial impetus for implementing programs to alleviate social problems and other societal ailments typically comes from a variety of sources, including advocacy groups, moral leaders, community advocates, and political figures. Before a program is designed and implemented, however, it is essential to obtain reliable information on the nature and the scope of the problem as well as the target population in need of the intervention. Evaluation researchers often contribute to these efforts by applying research tools to answer such questions as “What is the magnitude of this problem in this community?” “How many people in this community are in need of this program?” “What are the demographic characteristics of these people (e.g., age, gender, and race or ethnicity)?” and “Is the proposed program or intervention appropriate for this population?”

Needs assessment is a type of evaluation research that attempts to determine the needs of some population that might be met with a social program.

Needs assessment is not as easy as it sounds (Posavac & Carey, 1997). Whose definitions or perceptions should be used to shape our description of the level of need? How will we deal with ignorance of need? How can we understand the level of need without understanding the social context from which that level of need emerges? (Short answer to that one: We can’t!) What, after all, does need mean in the abstract? We won’t really understand what the level of need is until we develop plans for implementing a program in response to the identified needs.

Can the Program Be Evaluated? Evaluation research will be pointless if the program itself cannot be evaluated. Yes, some type of study is always possible, but a study conducted specifically to identify the effects of a particular program may not be possible within the available time and resources. So researchers may carry out an evaluability assessment to learn this in advance rather than expend time and effort on a fruitless project.

Evaluability assessment is a type of evaluation research conducted to determine whether it is feasible to evaluate a program’s effects within the available time and resources.

Knowledge about the program gleaned through the evaluability assessment can be used to refine evaluation plans. Because they are preliminary studies to check things out, evaluability assessments often rely on qualitative methods. Program managers and key staff may be interviewed in depth, or program sponsors may be asked about the importance they attach to different goals. These assessments also may have an “action research” aspect, because the researcher presents the findings to program managers and encourages changes in program operations.

Is the Program Working as Planned? What actually happens in a program? Once a program has been started, evaluators are often called on to document the extent to which implementation has taken place, whether the program is reaching the target individuals or groups, whether the program is actually operating as expected, and what resources are being expended in the conduct of the program. This is often called process evaluation or program monitoring. Rossi and Freeman (1989) define program monitoring as the systematic attempt by evaluation researchers to examine program coverage and delivery. Assessing program coverage consists of estimating the extent to which a program is reaching its intended target population; evaluating program delivery consists of measuring the degree of congruence between the plan for providing services and treatments and the ways they are actually provided.

Process evaluation (program monitoring) is evaluation research that investigates the process of service delivery.

Process evaluations are extremely important, primarily because there is no way to reliably determine whether the intended outcomes have occurred without being certain the program is working according to plan. For example, imagine you are responsible for determining whether an anti-bullying curriculum implemented in a school has been successful in decreasing the amount of bullying behavior by the students. You conduct a survey of the students both before and after the curriculum began and determine that rates of bullying have not significantly changed in the school since the curriculum started. After you write your report, however, you find out that, instead of being given in a five-day series of one-hour sessions as intended, the curriculum was actually crammed into a two-hour format delivered on a Friday afternoon. A process evaluation would have revealed this implementation problem. If a program has not been implemented as intended, there is obviously no need to ask whether it had the intended outcomes.

A process evaluation can take many forms. Because most government and private organizations inherently monitor their activities through such things as application forms, receipts, and stock inventories, it should be relatively easy to obtain quantitative data for monitoring the delivery of services. This information can be summarized to describe things such as the clients served and the services provided. In addition to this quantitative information, a process evaluation will also likely benefit from qualitative methodologies such as unstructured interviews with people using the service or program. Interviews can also be conducted with staff to illuminate what they perceive to be obstacles to their delivery of services.

Process evaluation can employ a wide range of indicators. Program coverage can be monitored through program records, participant surveys, community surveys, or number of utilizers versus dropouts and ineligibles. Service delivery can be monitored through service records completed by program staff, a management information system maintained by program administrators, or reports by program recipients (Rossi & Freeman, 1989).

Qualitative methods are often a key component of process evaluation studies because they can be used to understand internal program dynamics, even those that were not anticipated (Patton, 2002; Posavac & Carey, 1997). Qualitative researchers may develop detailed descriptions of how program participants engage with each other, how the program experience varies for different people, and how the program changes and evolves over time.

LOC: Evaluation Basics

TIP: Evaluation Alternatives

[LO 1]

COG [Knowledge]

DIF [Medium]

3. What is the history of evaluation research? What is its current status?

For each project, an evaluation researcher must select a research design and a method of data collection that are useful for answering the particular research questions posed and appropriate for the particular program investigated.

You can see why we placed this chapter after most of the others in the text: When you review or plan evaluation research, you have to think about the research process as a whole and how different parts of that process can best be combined.

Although scientific research methods had been used prior to the 1950s (in fact, as early as the 1700s) to evaluate outcomes of particular social experiments and programs, it was not until the end of the 1950s that social research became immersed in the workings of government with the common goal of improving society. During the 1960s, the practice of evaluation research increased dramatically, not only in the United States but also around the world. One of the main initiatives that spawned this growth of evaluation research in the United States was the so-called War on Poverty that was part of the Great Society legislation of the 1960s. When the federal government began to take a major role in alleviating poverty and the social problems associated with it, such as delinquency and crime, the public wanted accountability for the tax dollars spent on such programs. Were these programs actually having their intended effects? Did the benefits outweigh the costs? During this time, the methods of social science were used like never before to evaluate this proliferation of new programs.

By the mid-1970s, evaluators were called on not only to assess the overall effectiveness of programs but also to determine whether programs were being implemented as intended and to provide feedback to help solve programming problems as well. As an indication of the growth of evaluation research during this time, several professional organizations emerged to assist in the dissemination of ideas from the burgeoning number of social scientists engaged in this type of research in the United States, along with similar organizations in other countries (Patton, 1997).

By the 1990s, the public wanted even more accountability. Unfortunately, clear answers were not readily available. Few social programs could provide hard data on results achieved and outcomes obtained. Of course, government bureaucrats had produced a wealth of data on other things, including exactly how funds in particular programs were spent, for whom this money was spent, and for how many. However, these data primarily measured whether government staff were following the rules and regulations, not whether the desired results were being achieved. Instead of being rewarded for making their programs produce the intended outcomes (e.g., more jobs, fewer delinquents), the bureaucracy of government had made it enough simply to do the required paperwork of program monitoring.

Today, professional evaluation researchers have realized that it is not enough simply to perform rigorous experiments to determine program efficacy; they must also be responsible for making sure their results can be understood and utilized by practitioners (e.g., government officials, corporations, and nonprofit agencies) to make decisions about scrapping or modifying existing programs. For example, Patton (1997) coined the term utilization-focused evaluation to emphasize that evaluations should be judged by their utility and actual use: that is, by how people in the real world, especially those who are not researchers, apply evaluation findings and experience the evaluation process. In addition, there has been increased concern in the field regarding fiscal accountability, documenting the worth of social program expenditures in relation to their costs. Let’s get started with some evaluation basics.

Outcomes: The impact of the program process on the cases processed

LOC: Evaluation and Policy Analysis

TIP: A Brief History of Evaluation Research

[LO 2]

COG [Comprehension]

DIF [Medium]

4. Diagram the evaluation research process as a feedback system. (10-3)

Exhibit 10.1, A Model of Evaluation (diagram): INPUTS flow into the PROGRAM PROCESS, which produces OUTPUTS and OUTCOMES; FEEDBACK on outputs, outcomes, and operations returns through the program stakeholders to shape the program’s inputs.

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 3]

COG [Knowledge]

DIF [Easy]

5. Explain the concept of “black box” evaluation. What is the value of opening the black box? (10-11)

Black box evaluation: This type of evaluation occurs when an evaluation of program outcomes ignores, and does not identify, the process by which the program produced the effect.

LOC: Design Decisions

TIP: Black Box Evaluation or Program Theory

[LO 4]

COG [Comprehension]

DIF [Easy]

6. What is the role of program theory and its value in evaluation research? (10-11)

Program theory is a descriptive or prescriptive model of how a program operates and produces its effects.

Theory-driven evaluation is a program evaluation that is guided by a theory that specifies the process by which the program has an effect.

A program theory specifies how the program is expected to operate and identifies which program elements are operational (Chen, 1990). In addition, a program theory specifies how a program is to produce its effects and so improves understanding of the relationship between the independent variable (the program) and the dependent variable (the outcome or outcomes).

LOC: Design Decisions

TIP: Black Box Evaluation or Program Theory

[LO 4]

COG [Comprehension]

DIF [Easy]

7. Why is a true experimental design the strongest method for conducting impact evaluations? (10-14)

To determine which places would receive the problem-oriented strategies and which places would not, 56 neighborhoods were matched into 28 pairs with equal levels of crime; one place in each pair was then randomly assigned to receive the problem-oriented policing treatment (experimental places). Remember that a key feature of true experimental designs is this random assignment. The places in each pair not selected by the coin flip did not receive the new policing strategies (control places). The design of this experimental evaluation is illustrated in Exhibit 10.4.

Random assignment is a procedure by which subjects are placed into experimental and control groups randomly.
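A minimal Python sketch of this matched-pair randomization follows. It is a generic illustration of the technique, not the Jersey City study’s actual procedure or data; the place names and crime counts are hypothetical.

    import random

    def assign_matched_pairs(places, crime_score, seed=None):
        """Rank places by crime level, pair adjacent places, flip a coin per pair."""
        rng = random.Random(seed)
        ranked = sorted(places, key=crime_score)      # adjacent places have similar crime levels
        experimental, control = [], []
        for a, b in zip(ranked[0::2], ranked[1::2]):  # each adjacent pair is a matched pair
            treated = rng.choice([a, b])              # the coin flip within the pair
            experimental.append(treated)
            control.append(b if treated is a else a)
        return experimental, control

    # Hypothetical use: six places with violent-crime counts
    scores = {"Place1": 40, "Place2": 42, "Place3": 75, "Place4": 71, "Place5": 12, "Place6": 15}
    experimental, control = assign_matched_pairs(scores, crime_score=scores.get, seed=7)
    print("experimental:", experimental)
    print("control:", control)

Matching before the flip guarantees each experimental place a comparison place with a similar baseline crime level, while the flip itself keeps the assignment unbiased.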

In each of the experimental places, police officers from the Violent Crime Unit (VCU) of the Jersey City Police Department established networks consistent with problem-oriented policing. For example, community members were used as information sources to discuss the nature of the problems the community faced, the possible effectiveness of proposed responses, and the assessment of implemented responses. In most places, the VCU officers believed that the violence that distinguished these places from other areas of the city was closely related to the disorder of the place. Although specific tactics varied from place to place, most attempts to control violence in these places were actually targeted at the social disorder problems. For example, some tactics included cleaning up the environment of the place through aggressive order maintenance and making physical improvements such as securing vacant lots or removing trash from the street. The independent variable or treatment, then, was the use of problem-oriented policing, which comprised a number of specific tactics implemented by police officers to control the physical and social disorder at experimental violent places. In contrast, control places did not receive these problem-solving efforts; they received traditional policing strategies such as arbitrary patrol interventions and routine follow-up investigations by detectives. No problem-oriented strategies were employed.

LOC: Evaluation in Action

TIP: Case Study: Problem-Oriented Policing in Violent Crime Areas—A Randomized Controlled Experiment

[LO 6]

COG [Comprehension]

DIF [Medium]

8. What is the difference between applied and basic research? (10-1)

Applied research is research that has an impact on policy and can be immediately utilized and applied.

In this chapter, you will read about a variety of social program evaluations as we introduce the evaluation research process, illustrate the different types of evaluation research, highlight alternative approaches, and review ethical concerns. You will learn in this chapter about attempts to determine the effectiveness of several programs and policies, including whether law enforcement agencies (LEAs) that utilize Conducted Energy Devices (such as Tasers®) decrease the risk of injuries to both officers and suspects. We will first provide you with a brief history of evaluation research. Then, after describing the different types of evaluation research, we will provide you with case studies that illustrate the various methodologies used to assess these different evaluation questions. We will conclude with a discussion of the differences between basic science and applied research and highlight the emerging demand for evidence-based policy.

LOC: Evaluation and Policy Analysis

TIP: Evaluation and Policy Analysis

[LO 7]

COG [Comprehension]

DIF [Easy]

9. What is a needs assessment? When is it used? (10-5)

Needs assessment is a type of evaluation research that attempts to determine the needs of some population that might be met with a social program.

Needs assessment is not as easy as it sounds (Posavac & Carey, 1997). Whose definitions or perceptions should be used to shape our description of the level of need? How will we deal with ignorance of need? How can we understand the level of need without understanding the social context from which that level of need emerges? (Short answer to that one: We can’t!) What, after all, does need mean in the abstract? We won’t really understand what the level of need is until we develop plans for implementing a program in response to the identified needs.

LOC: Evaluation Alternatives

TIP: Do We Need the Program?

[LO 1]

COG [Comprehension]

DIF [Easy]

10. How is an evaluability assessment different from a needs assessment? (10-6)

Evaluability assessment is a type of evaluation research conducted to determine whether it is feasible to evaluate a program’s effects within the available time and resources.

Knowledge about the program gleaned through the evaluability assessment can be used to refine evaluation plans. Because they are preliminary studies to check things out, evaluability assessments often rely on qualitative methods. Program managers and key staff may be interviewed in depth, or program sponsors may be asked about the importance they attach to different goals. These assessments also may have an “action research” aspect, because the researcher presents the findings to program managers and encourages changes in program operations.

A needs assessment attempts to answer questions about whether a program is needed with systematic, credible evidence. The initial impetus for implementing programs to alleviate social problems and other societal ailments typically comes from a variety of sources, including advocacy groups, moral leaders, community advocates, and political figures. Before a program is designed and implemented, however, it is essential to obtain reliable information on the nature and the scope of the problem as well as the target population in need of the intervention. Evaluation researchers often contribute to these efforts by applying research tools to answer such questions as “What is the magnitude of this problem in this community?” “How many people in this community are in need of this program?” “What are the demographic characteristics of these people (e.g., age, gender, and race or ethnicity)?” and “Is the proposed program or intervention appropriate for this population?”

Needs assessment is a type of evaluation research that attempts to determine the needs of some population that might be met with a social program.

LOC: Evaluation Alternatives

TIP: Can the Program Be Evaluated?

[LO 1]

COG [Analysis]

DIF [Medium]

11. What are process evaluations? Why are process evaluations important?

Process evaluation (program monitoring) is evaluation research that investigates the process of service delivery.

Process evaluations are extremely important, primarily because there is no way to reliably determine whether the intended outcomes have occurred without being certain the program is working according to plan. For example, imagine you are responsible for determining whether an anti-bullying curriculum implemented in a school has been successful in decreasing the amount of bullying behavior by the students. You conduct a survey of the students both before and after the curriculum began and determine that rates of bullying have not significantly changed in the school since the curriculum started. After you write your report, however, you find out that, instead of being given in a five-day series of one-hour sessions as intended, the curriculum was actually crammed into a two-hour format delivered on a Friday afternoon. A process evaluation would have revealed this implementation problem. If a program has not been implemented as intended, there is obviously no need to ask whether it had the intended outcomes.

A process evaluation can take many forms. Because most government and private organizations inherently monitor their activities through such things as application forms, receipts, and stock inventories, it should be relatively easy to obtain quantitative data for monitoring the delivery of services. This information can be summarized to describe things such as the clients served and the services provided. In addition to this quantitative information, a process evaluation will also likely benefit from qualitative methodologies such as unstructured interviews with people using the service or program. Interviews can also be conducted with staff to illuminate what they perceive to be obstacles to their delivery of services.

Process evaluation can employ a wide range of indicators. Program coverage can be monitored through program records, participant surveys, community surveys, or number of utilizers versus dropouts and ineligibles. Service delivery can be monitored through service records completed by program staff, a management information system maintained by program administrators, or reports by program recipients (Rossi & Freeman, 1989).

Qualitative methods are often a key component of process evaluation studies because they can be used to understand internal program dynamics, even those that were not anticipated (Patton, 2002; Posavac & Carey, 1997). Qualitative researchers may develop detailed descriptions of how program participants engage with each other, how the program experience varies for different people, and how the program changes and evolves over time.

LOC: Evaluation Alternatives

TIP: Is the Program Working as Planned?

[LO 1]

COG [Comprehension]

DIF [Medium]

12. How are stakeholder approaches different from social science approaches?

Stakeholders are individuals and groups who have some basis of concern with the program.

Can you see the difference between evaluation research and traditional social science research? Unlike explanatory social science research, evaluation research is not designed to test the implications of a social theory; the basic issue is often just “What is the program’s impact?” Process evaluation often uses qualitative methods, but unlike traditional exploratory research, the goal is not to create a broad theoretical explanation for what is discovered; instead, the question is “How does the program do what it does?” Unlike in other social science research, researchers cannot design evaluation studies simply in accord with the highest scientific standards and the most important research questions; instead, it is program stakeholders who set the agenda. There is no sharp boundary between the two, however: in their attempt to explain how and why the program has an impact and whether the program is needed, evaluation researchers often bring social theories into their projects.

LOC: A Brief History of Evaluation Research

TIP: Evaluation Basics

[LO 1]

COG [Comprehension]

DIF [Easy]

13. What is an integrated approach? What is the goal?

An integrated approach is an orientation to evaluation research that expects researchers to be responsive to the concerns of program stakeholders as well as to the standards and goals of the social scientific community.

Ultimately, evaluation research takes place in a political context, in which program stakeholders may be competing or collaborating to increase program funding or to emphasize particular program goals. It is a political process that creates social programs, and it is a political process that determines whether these programs are evaluated and what is done with evaluation findings (Weiss, 1993). Developing supportive relations with stakeholder groups will increase the odds that political processes will not undermine evaluation practice.

LOC: Design Decisions

TIP: Researcher or Stakeholder Orientation

[LO 2]

COG [Comprehension]

DIF [Hard]

14. What is policy research? What is its goal? (10-19)

The goal of policy research is to inform those who make policy about the possible alternative courses of action in response to some identified problem, their strengths and weaknesses, and their likely positive and negative effects. Reviewing the available evidence may lead the policy researcher to conclude that enough is known about the issues to develop recommendations without further research, but it is more likely that additional research will be needed using primary or secondary sources. Policies that have been evaluated with a methodologically rigorous design and have been proven effective are sometimes called evidence-based policies.

Policy research is a process in which research results are used to provide policy actors with recommendations for action that are based on empirical evidence and careful reasoning.

Evidence-based policy is a policy that has been evaluated with a methodologically rigorous design and has been proven to be effective.

LOC: Qualitative and Quantitative Methods

TIP: Increasing Demand for Evidence-Based Policy

[LO 1]

COG [Comprehension]

DIF [Medium]

15. In what ways can evaluation research make a difference in people’s lives? (10-21)

Assessing needs, determining evaluability, and examining the process of treatment delivery have few special ethical dimensions. Cost-benefit analyses in themselves also raise few ethical concerns. It is when the focus is program impact that human subjects considerations multiply. What about assigning persons randomly to receive some social program or benefit? One justification given by evaluation researchers has to do with the scarcity of these resources. If not everyone in the population who is eligible for a program can receive it (due to resource limitations), what could be a fairer way to distribute the program benefits than through a lottery? Random assignment also seems like a reasonable way to allocate potential program benefits when a new program is being tested with only some members of the target recipient population. However, when an ongoing entitlement program is being evaluated and experimental subjects would normally be eligible for program participation, it may not be ethical simply to bar some potential participants from the programs. Instead, evaluation researchers may test alternative treatments or provide some alternative benefit while the treatment is being denied.

It is important to realize that it is costly to society and potentially harmful to participants to maintain ineffective programs. In the long run, at least, it may be more ethical to conduct an evaluation study than to let the status quo remain in place.

LOC: Qualitative and Quantitative Methods

TIP: Ethics in Evaluation

[LO 2]

COG [Evaluation]

DIF [Medium]
