To conduct a more rigorous test of the effects of CBVP, the OJJDP evaluation grant required researchers to measure general community attitudes about crime and public safety. The John Jay College study selected two cities in which to measure public opinions and attitudes: Brooklyn and Denver. The evaluation team worked with the Institute for Survey Research (ISR) at Temple University to measure attitudes and perceptions of violence among a probability sample of neighborhood residents in both cities. Researchers from ISR conducted two rounds of face-to-face surveys to ask respondents about their awareness of violence reduction efforts and their perceptions and concerns about crime and violence.
In each of the two cities, the ISR team conducted identical interview-based surveys in the CBVP program target area and a matched comparison area. Surveys were conducted in 2012 and 2014. Exploratory factor analyses extracted four factors that measured residents’ concerns about violent crime and another set of items was compiled into a cumulative index of the respondents’ awareness of violence reduction efforts in their neighborhoods and cities. A difference-in-difference analysis was used to test the main research question—did communities implementing CBVP strategies show more improvement on key measures when compared with similar communities not implementing CBVP?
The survey project began by mailing letters to the homes of all potential survey respondents identified in the initial household sampling frame. Households had to meet three criteria to be included in the study: (1) the household had to be located within a target or comparison area; (2) an adult resident of the household (age 18 years or older) was required to be present to complete the survey; and (3) the adult resident completing the survey had to be cognitively capable of understanding and responding to the survey questions. The letters explained these inclusion criteria and the purposes of the survey and the larger CBVP evaluation project. Recipients were instructed how to complete the survey online if they preferred not to be contacted by the research team. All households not completing the survey online were visited by a pair of ISR staff members who offered to screen the residents for eligibility. Households that agreed to participate and that met all three requirements were surveyed by the ISR staff immediately.
The ISR survey instrument was pre-tested for reliability and validity to create three to five latent indicators of a respondent’s perceptions and attitudes toward violence. All questions were designed to be closed-ended and measured using Likert-type scales (agree, strongly agree, etc.). With a balanced design (one treatment site and one comparison site in each of the two cities), power analysis was used to design a sample that would be likely to detect a 10 percent change in attitudes/perceptions over time with 80 percent power. Preliminary analyses assumed that the two paired sites were independent of each other and that no pooled estimates would be calculated. The most conservative sample size (N = 1,600 for the entire study) required 200 surveys to be administered in each survey site, in each city, at two different times (2012 and 2014).
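The sample-size logic above can be sketched with a standard two-proportion power calculation. The baseline rate, effect size, and alpha level below are illustrative assumptions, not the study’s actual inputs:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group to detect a change from p1 to p2
    with a two-sided, two-proportion z-test (unpooled variance)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative only: detect a 10-point shift (50% -> 60%) at 80% power
print(n_per_group(0.50, 0.60))  # -> 385
```

The study’s 200-surveys-per-site target reflects its own assumptions about baseline rates and variance; larger assumed effects or smaller variances shrink the required n.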
The research team purchased address data from a commercial provider (Marketing Systems Group) to construct neighborhood samples of 200 households in each neighborhood using an address-based sampling frame. Comparison neighborhoods were matched to CBVP neighborhoods according to recent crime data and demographics. Each comparison community was selected to be demographically similar to, but geographically distant from, the treatment neighborhoods to prevent “spillover” effects from the CBVP interventions.
The study was not based on a panel design—i.e. researchers did not survey the same set of residents in the first and second survey waves. The activities of the agencies implementing CBVP programs in each city were hypothesized to affect the entire community over the course of implementation. Thus, two independent resident samples separated by at least 24 months should be sufficient to detect changes in neighborhood concerns about crime and violence if the programs worked according to theory.
The first wave of surveys was completed early in 2012 in both Denver and New York, approximately 12 months after each site began to receive CBVP funds. Follow-up surveys were scheduled to occur 24 months later in both cities. The second survey wave was completed on schedule in New York, but post-test surveys were postponed seven months in Denver, in part because Denver was slower to implement its model than anticipated. The research team wanted to ensure that at least 24 months had passed with the program at full implementation before attempting to measure change in attitudes. Thus, second wave surveys in Denver were completed late in 2014.
Collecting only two waves of survey data with both waves following the implementation of CBVP activities was not the ideal method for measuring community-level change. The evaluation project, however, had little choice in the timing of the surveys. Funding for the evaluation coincided with funding for the demonstration sites and prevented the evaluation team from gathering baseline (pre-implementation) data. Collecting the first wave of survey data during the first year of the demonstration was the best option available given the realities of federal funding cycles.
The survey team from ISR collected all data using tablet computers and “CASES,” ISR’s computer-assisted, in-person interviewing software. Researchers worked in pairs to visit all sampled households in each neighborhood. Each survey began with the research team offering the household another version of the study’s information letter. The letter provided background information about the study as well as the procedures to be used in the survey. A screening and consent form was then read aloud to potential respondents to ensure their comprehension and consent before the survey began. All recruitment and data-collection procedures were reviewed and approved for human subject protections by the Institutional Review Boards (IRB) of the City University of New York as well as the IRB from Temple University.
Respondents in the first wave of the survey received a five dollar cash incentive to participate in the survey. The ISR field researchers, however, reported that this amount did not seem to be enough to incentivize survey participation, especially in Brooklyn, where the cost of living is higher. The cash incentive was increased to $10 for the second wave of the survey, and this resulted in improved recruitment rates.
All identifiable data were stored on handheld devices assigned to individual survey team members during the data collection period. To ensure the anonymity and confidentiality of participants, each device was password protected and included encryption software that protected all information about the sampled households as well as completed survey data. Only the survey team member assigned to work on a particular case had access to specific information once the survey was completed. All records were locked and unavailable to anyone other than the survey team. Research personnel informed all study participants of these confidentiality assurances during the consent process. When survey data were transferred from the field to the data collection team at Temple University, all personally identifiable information was maintained on encrypted, password protected files and stored on a secure server. Original data files were destroyed once the de-identified surveys results were validated and forwarded to the evaluation offices at John Jay College.
The intervention areas in Brooklyn and Denver were established by OJJDP and the CBVP grantees before the evaluation project began. Both cities selected areas known for their high rates of youth violence. The research team made every effort to ensure that the comparison areas in both cities were as closely matched to the CBVP intervention areas as possible.
The intervention area in Brooklyn was located in Crown Heights. During the study’s survey interviews, it was clear that many residents of the public housing community knew about the CBVP grantee (Save Our Streets), but some residents were confused about the goals and purposes of the program. Some even believed the program acronym (“SOS”) was affiliated with area gangs. Residents reported a strong police presence in the neighborhood, especially near the public housing buildings.
The comparison area in Brooklyn was part of the Bedford-Stuyvesant neighborhood (known locally as “Bed-Stuy”) and included the Marcy and Lafayette Towers public housing communities. Police presence was heavy. Officers from NYPD regularly walked the neighborhood blocks and the Marcy Projects. There was a mobile police station (a large, visible trailer) located next to the Lafayette Towers.
The neighborhood also included part of Brooklyn’s large Hasidic community. Data from the U.S. Census indicated that 106 Hasidic households resided within the boundaries of the comparison area. Because of their stark economic and social differences from the predominantly African-American population, and their relative insularity, Hasidic households were excluded from the study. The research sample was drawn from the remaining 73 percent of households within the sampled area.
The CBVP intervention area in Denver included a large number of homes that were vacant or being remodeled. “Neighborhood Watch” signs were scattered throughout the intervention area when the research team visited the site. In both the intervention and comparison areas of Denver, few residents expressed fear of gunfire and/or gangs. During the evaluation’s initial inquiries, a number of potential respondents asked, “Is there a lot of violence in this area?” Respondents tended to identify only isolated sections of the community as unsafe. Few respondents were able to recognize or comment on the Gang Reduction Initiative of Denver (GRID), the CBVP-funded violence intervention program in Denver.
The comparison area in Denver included DHA (Denver Housing Authority) communities that were difficult to navigate, particularly in the evenings, because there was little to no lighting. The sampled housing units were spread across three separate neighborhoods that differed from one another: Sun Valley had lower incomes than the other two neighborhoods, which housed more college students and families.
For the first wave of the survey, all data collection was completed during January 2012 (intervention and comparison areas in both cities). The survey team collected data from 428 respondents in Denver and 402 in Brooklyn. With the exception of just five surveys completed online, all surveys were conducted in-person by the team from Temple University. More than 80 percent of the surveys in both cities were conducted in English. A larger number of Spanish surveys were conducted in the Brooklyn comparison area (24%) than in the intervention area (8%). In Denver, the percentage of Spanish surveys was similar in the comparison (18%) and intervention area (15%).
The overall response rate is calculated as the percentage of completed surveys among the total number of sampled addresses minus those deemed to be ineligible (e.g., vacant). The study’s 830 completed surveys in 2012 represented an 81 percent response rate for the Brooklyn intervention neighborhood, a 69 percent response rate for the Brooklyn comparison neighborhood, a 69 percent response rate for the Denver intervention area and a 79 percent response rate for Denver’s comparison neighborhood.
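The response-rate definition above can be expressed directly in code. The counts below are hypothetical and chosen only to illustrate the arithmetic:

```python
def response_rate(completed: int, sampled: int, ineligible: int) -> float:
    """Completed surveys as a percentage of eligible sampled addresses."""
    eligible = sampled - ineligible  # drop vacant or otherwise ineligible addresses
    return 100.0 * completed / eligible

# Hypothetical counts: 200 completes from 270 sampled addresses, 23 ineligible
print(round(response_rate(200, 270, 23), 1))  # -> 81.0
```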
In 2014, the study completed 415 surveys in Brooklyn and 422 in Denver. Data collection in Brooklyn was completed in 29 days spanning the months of January and February. Data collection in Denver was completed in 35 days during August and September. Again, very few surveys were completed online (approximately 1%). In both cities, about 93 percent of the surveys were conducted in English. The surveys completed in 2014 represented response rates of 73 percent in the Brooklyn intervention neighborhood, 80 percent in the Brooklyn comparison neighborhood, 74 percent in the Denver intervention neighborhood, and 77 percent in the Denver comparison neighborhood.
The research team performed an exploratory factor analysis on the 2012 survey data to determine which items could be grouped together as single statistical constructs. Before conducting the factor analysis, all survey items were coded to be in the same direction so that higher scores indicated more pro-social responses and lower scores indicated more negative responses. Analyses of 25 attitude questions identified four multi-variable factors. Each factor incorporated several survey questions and all factors were internally consistent enough to represent single concepts (α = 0.65 to 0.80).
The research team determined that survey items had successfully “loaded” on a particular factor when loading scores were 0.30 or greater. Of the original 25 items included in the factor analysis, 19 items were retained. The number of question items in each factor varied, ranging from three to eight items. The remaining 6 items were set aside for separate analyses.
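The internal-consistency statistic reported for these factors (Cronbach’s alpha) can be computed directly from a response matrix. The toy responses below are invented for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy 4-point Likert responses from five respondents on three related items
responses = np.array([
    [1, 2, 1],
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 4],
    [2, 3, 2],
], dtype=float)
print(round(cronbach_alpha(responses), 2))  # -> 0.94
```

Items that move together inflate the variance of the summed scale relative to the item variances, which is what pushes alpha toward 1.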
The final factors described four distinct concepts:
1. Disinclination towards Gun Violence
2. Disinclination towards General Violence
3. Experience of Neighborhood Safety
4. Experience of Neighborhood Efficacy — i.e. the respondent’s experience of pro-social/helpful actions in the neighborhood.
In order to create comparable and interpretable scores for each individual, the research team calculated a mean response score for each resident on each factor. Only valid item scores were used in the calculation of each mean factor score, and a respondent must have completed at least 60 percent of the items in a given factor to receive a factor score. In other words, if a resident responded to only six of eight items on a particular factor, his or her mean score for that factor was based on those six responses. If a resident responded to just four of eight items on a particular factor, he or she would not receive a score for that factor. Each factor score can be interpreted on a scale of one to four, with one being the least pro-social response and four being the most pro-social response to a given factor.
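The scoring rule described above can be sketched as a small function. The function name and the use of None for missing responses are illustrative choices, not the study’s actual code:

```python
from typing import Optional, Sequence

def factor_score(responses: Sequence[Optional[int]],
                 min_complete: float = 0.6) -> Optional[float]:
    """Mean of valid responses, or None if fewer than 60% of items were answered."""
    valid = [r for r in responses if r is not None]
    if len(valid) < min_complete * len(responses):
        return None  # too many missing items: no factor score assigned
    return sum(valid) / len(valid)

# Six of eight items answered (75%): score is the mean of the six valid items
print(factor_score([3, 4, 2, None, 3, 4, None, 2]))  # -> 3.0
# Four of eight items answered (50%): below the 60% threshold, no score
print(factor_score([3, 4, None, None, 3, None, None, 2]))  # -> None
```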
In addition to these factor scores, the study created an index measuring each resident’s exposure and knowledge of the CBVP-funded program activities in his/her city. Four questions in the survey measured program exposure and these items were added together to create the total program exposure score. Reliability was found to be moderate for this index (a = 0.53). A resident must have completed all of the exposure items to receive an index score. The index score can be interpreted on a scale of one to four, with four indicating the highest level of exposure to the CBVP program.
Finally, in order to examine potential differences in resident group factor scores, the research team calculated and compared a series of group means, including overall mean factor and index scores for all residents in each city’s treatment and comparison areas. Researchers then conducted a series of difference-in-difference (DiD) regressions to compare changes in factor scores across the surveyed areas. DiD allows researchers to compare two units (respondents, groups, areas, etc.) at two points in time when those units are known to have experienced different treatment conditions. The strength of DiD regression lies in its ability to separate the effects of time from the effects of treatment. By subtracting the change over time observed in untreated (comparison) units from the change observed in treated (program) units, any treatment effect should become apparent, under the assumption that time effects are the same for both types of units and that the units are reasonably well matched on other characteristics.
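A DiD estimate of this kind is simply the coefficient on a treatment-by-time interaction term in a regression. The sketch below fits ordinary least squares to synthetic, noise-free data; the group means are invented for illustration and are not the study’s results:

```python
import numpy as np

# Synthetic cell means: comparison 2.0 -> 2.2, treatment 1.8 -> 2.5,
# so the true DiD effect is (2.5 - 1.8) - (2.2 - 2.0) = 0.5
cells = {(0, 0): 2.0, (0, 1): 2.2, (1, 0): 1.8, (1, 1): 2.5}

rows = []
for (treat_flag, post_flag), mean in cells.items():
    for _ in range(50):  # 50 simulated respondents per cell
        rows.append((treat_flag, post_flag, mean))
data = np.array(rows, dtype=float)

treat, post, y = data[:, 0], data[:, 1], data[:, 2]
# Design matrix: intercept, treatment dummy, time dummy, DiD interaction
X = np.column_stack([np.ones_like(y), treat, post, treat * post])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(coefs[3], 3))  # DiD coefficient -> 0.5
```

Because the comparison-area trend (+0.2) is subtracted out by the time dummy, the interaction coefficient isolates the extra change in the treated area.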
The research team hypothesized a significant, positive effect for the factor and index scores in the DiD regression outcome for the treatment group. The analysis accounted for respondent age and length of time living in the neighborhood as part of the regression. Researchers believed that long-term residents were more likely than newer residents to be familiar with problems of neighborhood violence and to be more knowledgeable about CBVP-related efforts in the area.
Respondent characteristics varied slightly between survey areas and survey year. For example, 42 percent of respondents in the 2012 sample in the Brooklyn intervention area were between ages 19 and 29, but that number dropped to 33 percent in the 2014 sample. Across all sites and both waves of the survey, 29 percent of respondents were between 19 and 29 years old, 26 percent were between 30 and 39 years old, 17 percent were between 40 and 49 years old, and 39 percent were 50 or older. Female respondents generally outnumbered male respondents, ranging from 50 to 62 percent of the samples in both cities and both years. In all of the study areas except the Denver comparison neighborhood, Black/non-Hispanic respondents made up the majority of the sample. In the Denver comparison area, the sample was majority Latino/Hispanic. Most of the survey respondents had either finished high school or achieved a GED degree (ranging from 24% to 42%) or had attended at least some college (from 20% to 38%).
Many of the respondents had lived in their neighborhoods at least 10 years (44% overall), but this varied across sites and across survey waves so these were examined individually. Brooklyn respondents tended to have lived in their neighborhoods the longest (48% to 60% for more than 10 years in Brooklyn; 28% to 39% for more than 10 years in Denver). The same holds true for the length of time respondents had lived at their current address (32% to 51% for more than 10 years in Brooklyn; 17% to 27% for more than 10 years in Denver).
Almost half the respondents lived in a home with at least one person under the age of 18 (39% to 51%). Respondents were evenly distributed across income categories (less than $20,000 per year, $20,000 to $39,999 per year, and $40,000 or more) and household income varied only slightly from site to site and survey wave to survey wave. Few respondents owned their own homes, especially in Brooklyn where renting is the norm (4% to 7% owned in Brooklyn; 24% to 41% in Denver). Finally, the head of the household was equally likely to be married or in a domestic partnership versus never married, with some variation across survey site and wave.
The possible effects of CBVP were tested by comparing the factor and index scores for each survey area in 2012 and 2014. An increase in mean score between 2012 and 2014 would suggest a prosocial change in respondent opinions and perceptions. The results of the analysis, however, revealed little movement in scores over the course of the study period.
In the Brooklyn intervention area, no positive or pro-social changes were observed in any of the factor or index scores. Perceptions of neighborhood safety actually deteriorated significantly between the 2012 and 2014 surveys, suggesting that respondents were experiencing more fear of being out in their neighborhood in 2014 than in 2012.
In the Brooklyn comparison area, scores on respondents’ disinclinations toward general violence and gun violence both improved significantly. In other words, respondents in the Brooklyn comparison area (without CBVP) were significantly more prosocial in their attitudes toward violence in 2014 than in 2012.
These findings were contrary to the study hypotheses. Of course, the increase in respondent fear of violence in the intervention area may be an unanticipated byproduct of the CBVP efforts—perhaps the program’s efforts at community mobilization drew more attention to neighborhood violence.
The improvement in attitudes in the comparison area is more difficult to explain. The research team did not find evidence of any new efforts to address violence in the comparison area that could explain the positive shift in resident attitudes. (New York City launched a new violence reduction program affiliated with the Cure Violence model in an area just south of the study’s comparison area, but that program opened in mid-2014, just after the study’s second round of surveys.)
In the Denver intervention area, on the other hand, the disinclination towards general violence factor improved significantly from 2012 to 2014. In other words, residents in the 2014 sample were more prosocial (more anti-violence) than those in the 2012 sample.
Respondent scores on the exposure to anti-violence efforts index also increased significantly in the Denver intervention neighborhood, which could suggest that respondents noted more CBVP-related efforts over time. The scores in the Denver comparison area did not change significantly in either direction between 2012 and 2014.
The study’s DiD regressions showed that the relative changes in Brooklyn on both disinclination toward gun violence and general violence were statistically significant, but in an unwanted direction. That is, the change in the Brooklyn intervention area was less prosocial than in the comparison area. Significant positive changes in the Brooklyn comparison area may have swayed the results of this analysis. Contrary to expectations, residents in the area without CBVP demonstrated positive and statistically significant changes in their opinions over time, while residents of the area with CBVP did not demonstrate significant changes.
The DiD regressions in Denver were also not strong. The analysis failed to confirm the positive results suggested by the more straightforward comparison of survey responses over time. The coefficient for relative change in respondent exposure to anti-violence messaging did reach statistical significance and the change was positive, suggesting greater resident knowledge of the CBVP program in the affected neighborhood. But, there were no corresponding improvements in the relative change of attitudes and perceptions toward violence and neighborhood safety.
The CBVP evaluation’s pre-post resident surveys provided a more rigorous test of the program’s community effects than any of the outcome measures described in other chapters of the report. Surveys were conducted in two cities to detect neighborhood-level changes in residents’ perceptions of safety in the CBVP areas and to compare those changes with non-CBVP areas of each community as a means of assessing whether any effects might be associated with the CBVP demonstration. There were no consistent and significant changes in the survey data, however, which means the study failed to detect measurable effects of CBVP-sponsored activities at the community level.
Admittedly, the decision to survey probability samples of households about their perceptions of community violence set a very high bar for the CBVP demonstration. Changing broad community norms toward violence through public advocacy campaigns and interventions targeted at specific subsets of a population is a very ambitious strategy. The CBVP-related programs in Brooklyn and Denver were attempting to move mountains, and the CBVP evaluation gave them just 24 months to show success.
As evidenced by their selection for the CBVP program, the neighborhoods involved in this study had a long-standing history of violence and these new programs were not the first to attempt to change community conditions. It was exceedingly optimistic to expect fundamental attitudes about neighborhood safety to change in just two years, no matter how strong the programs were. As an individual working for one of these programs said, “It takes a lot longer to unlearn violence than it takes to learn violence.” The same holds true for expectations. It takes longer for people to begin to expect non-violence in their community after years of experiencing violence as “normal.”
It was encouraging to see a significant uptick in the program exposure index in Denver’s program area. This suggests that residents may have become more familiar with the program’s work over time. This may have occurred in Denver and not Brooklyn for a number of reasons. Perhaps the social and economic conditions in Denver presented a less profound challenge to the CBVP program than did conditions in Brooklyn. It is also possible that the extra seven months required to administer the second round of surveys in Denver may have been partly responsible for the gains in program awareness detected in that city. The Denver program had more time to get its name and message out to the wider community than did the program in Brooklyn.
An even more basic question relates to the use of public perceptions to judge the efficacy of violence-reduction programs. Perhaps a city’s effort to address violence actually highlights the existence of neighborhood violence. Some residents may not have been aware of the extent of the problem, or they may have avoided learning about it. The introduction of an effective, new program to stop violence with a strong public messaging component may lead some residents to become suddenly more aware of violence, independently of the actual levels of violence in the neighborhood. If so, when researchers returned to the neighborhood to administer a follow-up survey, people may have been more rather than less concerned about their own safety—even if the actual incidence of violence had not increased, or had even declined.
The John Jay College evaluation team worked with Temple University’s Institute for Survey Research to conduct two waves of household surveys that measured the perceptions and opinions of residents in two CBVP communities. The research team asked probability samples of residents about their awareness of violence reduction programs, their fear of crime, and their attitudes about the uses of violence. In both Brooklyn and Denver, surveys were conducted in a neighborhood served by the CBVP grantee program and in a matched comparison neighborhood not served by a CBVP-funded program.
The evaluation failed to find consistent and significant improvements following implementation of the CBVP demonstration program. This study did not find positive changes in respondent perceptions of violence and public safety in either Brooklyn or Denver. On the other hand, the survey results did indicate statistically significant improvements in community awareness of violence reduction efforts in Denver. This could be considered at least promising.
The results suggest either that the CBVP programs did not have their desired effects on public safety, or that it takes more than two years to change basic perceptions of community violence. Certainly, it takes time for a community with a long history of violence to begin to trust the appearance of positive changes. Using household surveys to detect meaningful changes within two years of any policy or practice innovation is a difficult standard to meet.
Tomberg, Kathleen A. and Jeffrey A. Butts (2016). Street by Street: Cross-Site Evaluation of the OJJDP Community-Based Violence Prevention Demonstration Program. New York, NY: Research and Evaluation Center, John Jay College of Criminal Justice, City University of New York.