The Evidence Generation team works with John Jay College faculty and staff to train students for careers in applied research; the students, in turn, help organizations in the public safety sector prepare for rigorous evaluation. Before working with affiliated agencies, students are trained in applied and translatable evaluation skills. Working in teams with faculty and staff, they then apply these skills to build the research capacities of participating agencies.
The process begins with discussions and meetings involving team members who observe agency operations, interview staff, and collect documents and other materials to assist in formulating an approach. The process is organized in seven steps. Steps 1 and 2 represent the “diagnostic phase,” while Steps 3 through 6 make up the “implementation phase.” Agencies completing the first six steps have the option of continuing to a seventh step, an extended opportunity for affiliated agencies to propose one or more special projects.
Step 1. PEvGen
The Evidence Generation initiative helps affiliated agencies develop the skills and resources to create a better evidence base. The first step in the process is to determine whether an agency already possesses key skills and resources. The team conducts this assessment with a tool developed by JohnJayREC, called the PEvGen, or “Protocol for Evidence Generation.”
PEvGen was inspired by the well-known program evaluation tool from Vanderbilt University called the Standardized Program Evaluation Protocol, or SPEP. The SPEP was developed as a protocol for scoring youth-serving agencies on how well they use practices that have been shown, through direct youth services, to be effective in reducing recidivism and other problem behaviors. The Evidence Generation initiative developed the PEvGen as a systematic checklist for assessing evaluation readiness: the extent to which an agency has the tools and resources needed to generate high-quality evaluation results.
The PEvGen tool reviews the key elements necessary for an agency to participate in an evaluation, and it assigns values for the agency’s performance on these dimensions, resulting in an overall score. Working with agency staff, the Graduate Research Fellows of EvGen review agency documentation and conduct staff interviews to assess a program’s stated goals, its guiding principles, theory of change, routine operations such as intake and follow-up protocols, and its capacity to implement rigorous data collection. Each program or intervention strategy is scored on twelve major dimensions, and the EvGen team revisits these scores to assess the progress of their efforts.
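The scoring logic described above can be sketched as a small function. This is only an illustration: the source does not enumerate the twelve dimensions or specify the rating scale, so the dimension labels, the 0-to-4 scale, and the use of a simple average are all assumptions.

```python
# Hypothetical sketch of a PEvGen-style overall score: twelve dimension
# ratings averaged into one number. Dimension names and the 0-4 scale
# are placeholders, not the actual protocol.

def pevgen_overall_score(dimension_scores):
    """Average the twelve per-dimension ratings into an overall score."""
    if len(dimension_scores) != 12:
        raise ValueError("PEvGen scores a program on twelve major dimensions")
    return sum(dimension_scores.values()) / len(dimension_scores)

# Example ratings: eleven dimensions at 2, one (say, a well-articulated
# theory of change) at 4.
scores = {f"dimension_{i}": 2 for i in range(1, 13)}
scores["dimension_1"] = 4

print(round(pevgen_overall_score(scores), 2))  # ≈ 2.17
```

Because the team revisits these scores over time, keeping each assessment as a dated snapshot of the same twelve ratings makes progress easy to track.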
Step 2. Documentation of Routine Practices
The second step is to document an agency’s routine practices. The team begins by visiting an affiliated program to examine documents, interview staff, and compile other information needed to document routine practices. The effort serves a variety of purposes:
- Forming a clear understanding of how the program or practice model works. In other words, what happens on a day-to-day basis?
- Encouraging the agency to articulate its practices and procedures in a detailed way.
- Allowing agency staff members to describe their experience working for the program — information not often found in brochures, websites, or agency mission statements.
- Helping staff discover whether actual agency practices differ from stated routines, and grounding the engagement in more practical knowledge.
The documentation of routine practices plays an important role in guiding subsequent activities. Documenting routine practices can reveal the sort of problems that would undermine an actual collaboration between the agency and a team of outside researchers.
Step 3. Theory of Change
A theory of change, sometimes called a program theory, is a set of testable propositions about how a program is supposed to affect a set of conditions or behaviors. A theory of change describes the process by which change is produced, articulating how and why a set of activities is expected to affect participants. A properly developed theory of change should guide a program’s daily activities and provide a clear framework for evaluation. A useful theory includes the development of data collection routines and suggests measures that may demonstrate the effectiveness of an intervention. With an appropriate theory of change, an evaluation project is more likely to measure program components in a way that leads to strong conclusions about cause and effect.
Effective theories of change are based on knowledge of the research literature and best practices, with strategies designed to address a specific problem in a specific context. Theories of change should be developed by systematically organizing what is known about a particular problem and how a program is designed to solve the problem.
Four basic steps can be used to develop a theory of change:
- State the problem that needs to be addressed;
- Identify the program’s goals and objectives and how they address the problem;
- Specify what actions will be taken to achieve those goals and objectives; and
- Clarify the rationale for taking those actions.
Many organizations report that they have already developed a theory of change or something like it. In many cases, however, researchers find that an existing theory is more a statement of aspirations than a detailed articulation of cause and effect. The EvGen initiative helps affiliated agencies develop theories of change compatible with evaluation research.
Step 4. Logic Model
A logic model is a detailed, visual depiction of a program’s underlying theory of change, illustrating how the activities of a specific program lead to certain outcomes. Logic models are tools for determining which components of a program are designed to achieve specific outcomes, and how each component fits into an overall program strategy. The content of a logic model depends on the purpose, context, and intended objectives of a program.
At a minimum, however, a logic model should include:
- Inputs (resources);
- Activities (services offered or strategies pursued);
- Outputs (immediate, measurable results from each activity);
- Outcomes (intermediate and long-term results of each activity); and
- Impact (the long-term and/or systemic change achieved).
When designing logic models, it is often helpful to begin with a series of “if/then” statements as a means of clarifying how the goals of the program can be achieved through its proposed activities. This can also serve as a means of designing activities that lead to outcomes the program can realistically achieve within the designated time. Logic models are most useful when they can be linked to specific evaluation measures.
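The if/then approach above can be sketched as a simple chain over the five minimum components. The example content (a hypothetical mentoring program) is illustrative only; it is not drawn from any affiliated agency.

```python
# Sketch of a logic model as an ordered chain of components, rendered as
# the kind of "if/then" statements described above. The mentoring-program
# content is a hypothetical example.

logic_model = [
    ("inputs",     "trained mentors and meeting space are available"),
    ("activities", "youth attend weekly one-on-one mentoring sessions"),
    ("outputs",    "each youth completes a measurable number of sessions"),
    ("outcomes",   "school attendance improves over six months"),
    ("impact",     "justice-system contact declines over the long term"),
]

def if_then_statements(model):
    """Render each adjacent pair of components as an if/then statement."""
    return [
        f"IF {a_desc} ({a_cat}), THEN {b_desc} ({b_cat})"
        for (a_cat, a_desc), (b_cat, b_desc) in zip(model, model[1:])
    ]

for statement in if_then_statements(logic_model):
    print(statement)
```

Writing the chain this way makes weak links visible: any "then" clause that cannot be tied to a concrete measure is a candidate for redesign before evaluation begins.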
Step 5. Measurement Matrix
Since the principal goal of the Evidence Generation initiative is to assist organizations in developing evidence to support their program activities, a review of existing measures is an important element of the process. Specific measures are linked to each agency’s logic model by a measurement matrix summarizing the measures and the data necessary to create them. In many cases, affiliated agencies already collect a large quantity of information, and much of it can be used to produce essential measures. The measurement matrix identifies these existing sources of information. Few agencies, however, are already able to collect the type and amount of data necessary to create appropriate measures for all of their key program components. An effective measurement matrix identifies these gaps and provides a basis for discussing recommended data-enhancement strategies.
Each matrix includes several ratings that assess data reliability and importance. “Reliability” estimates whether an agency is capable of collecting accurate and consistent measures based on existing routines. “Importance” balances what can be learned from individual measures against how difficult it would be for an organization to collect reliable data. The measurement matrix also helps agencies to distinguish between process and outcome measures. Both types of measurement are important for evaluation research.
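A measurement matrix row, with the reliability and importance ratings described above, might be sketched as follows. The measure names, data sources, and high/low rating values are hypothetical placeholders; an actual matrix would use the agency's own measures and whatever rating scale the team adopts.

```python
# Sketch of measurement matrix rows with the ratings described above.
# All row contents are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class MatrixRow:
    measure: str        # the measure linked to a logic-model component
    data_source: str    # where the underlying data come from
    measure_type: str   # "process" or "outcome"
    reliability: str    # can existing routines yield consistent data?
    importance: str     # value of the measure vs. burden of collecting it

matrix = [
    MatrixRow("sessions attended", "attendance logs",
              "process", "high", "high"),
    MatrixRow("school attendance", "school records (not yet collected)",
              "outcome", "low", "high"),
]

# Rows rated high in importance but low in reliability flag the data
# gaps that recommendations for data enhancement would target.
gaps = [row.measure for row in matrix
        if row.importance == "high" and row.reliability == "low"]
print(gaps)  # ['school attendance']
```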
Step 6. Recommendations
After completing Steps 1 through 5, the Evidence Generation team crafts a specific set of recommendations. The recommendations reflect what has been learned about the agency’s needs, together with suggestions for how the agency can proceed with building its evaluation capacity. Recommendations are crafted with full knowledge of the constraints facing an agency and the most pressing items of interest to the agency. If an agency is in serious need of basic data collection, the recommendations focus on that task. If an agency has sound data resources but needs to begin implementing a strategy for formal evaluation, the recommendations point that out. Recommendations draw on material from all previous steps, including the theory of change, logic model, routine practices document, and the measurement matrix.
Step 7. Special Projects
Once an affiliated agency has worked through the first six steps of the Evidence Generation process, the agency may be invited to propose special projects. Special projects have several benefits: 1) they allow the Evidence Generation initiative to work with each agency for longer periods of time at a natural pace, according to each agency’s expectations and needs for assistance; 2) they allow efforts to be spread among affiliated agencies more evenly and more strategically; and 3) they support assistance with tasks that are important for future evaluation efforts but that may not be directly related to the first six steps in the process.
Implementation is Key
The purpose of the Evidence Generation process is to facilitate each affiliated agency’s pursuit of evidence-informed practice, and ultimately, to empower their consistent implementation of evidence-informed practices.








