Over the past several years, the United States has spent hundreds of millions of dollars on crime and violence prevention programs in Central America, with few evaluations of the impact of this investment. But one hotly debated study highlights the challenges of measuring security assistance outcomes, as well as the need for a greater body of analysis about the efficacy of such initiatives.
Since 2008, the US Congress has appropriated more than $1 billion for the flagship US security assistance program in Central America: the Central America Regional Security Initiative (CARSI).
After nearly a decade, more than half a billion dollars' worth of assistance has been delivered to Central America through CARSI. And in late 2015, Congress authorized a multi-year $750 million aid package for the region, of which $126 million is allocated to CARSI.
But there is little clarity about both the ultimate destination of CARSI funds as well as the impacts of the large number of programs the initiative supports in one of the world’s most violent regions.
In 2014, Vanderbilt University’s Latin American Public Opinion Project (LAPOP) published what became one of the most widely cited examinations of the outcomes of CARSI-funded anti-crime programs in Central America. The LAPOP study focused on evaluating the impact of a subset of CARSI programs implemented in the so-called “Northern Triangle” countries of Guatemala, El Salvador and Honduras, as well as Panama.
The researchers examined “community-based crime prevention” programs implemented by the US Agency for International Development (USAID), the same agency that commissioned LAPOP’s report.
Their “main finding” was that “in several key respects,” the CARSI-funded USAID programs “have been a success.”
The LAPOP study has been referenced by several well-respected non-governmental organizations, including InSight Crime. And it is featured on USAID’s website, where it is described as a “scientifically rigorous impact evaluation” that “is part of a broader effort to determine the effectiveness of community-based crime prevention, in contrast to the traditionally more common law enforcement, or mano dura (‘iron fist’), approach to addressing the widespread crime and violence permeating Central America.”
Late last year, however, the Washington, DC-based Center for Economic and Policy Research (CEPR) issued a rebuttal of the LAPOP study’s findings, arguing that LAPOP’s data “cannot support the conclusion that the areas subject to treatment in the CARSI programs showed better results than those areas that were not.”
The ensuing public debate between the two organizations over how best to measure the impact of US security assistance programs in Central America highlights several themes, including the lack of publicly available information about these programs, the difficulty of accurately assessing the impact of security aid, and the broad agreement that these issues deserve greater public scrutiny.
What Are USAID’s CARSI Programs?
For such a major security assistance package, there are surprisingly few details available to the public about CARSI and the programs it supports.
InSight Crime asked USAID to comment for this story, but the press office declined requests for an interview. In response to a request for details about the programs evaluated by LAPOP, the agency provided a written statement that reads, in part:
“USAID supports an integrated approach to violence prevention that targets youth who are at particular risk of being victims and perpetrators of violence in Central America … Through CARSI, we are working to foster strong and transparent justice systems, accountable, professional and community-oriented police forces and to build individual and community resilience to insecurity by support to targeted services to high-risk youth, youth outreach centers, municipal violence prevention committees, workforce development, and targeted small infrastructure improvements to rehabilitate public spaces.”
(The Congressional Research Service produced an overview of CARSI in late 2015 that describes USAID crime and violence prevention programs in similar terms.)
The design and implementation of CARSI programs involves an alphabet soup of offices from various US government agencies, including the Departments of Defense, Homeland Security, Treasury and Justice. But most of the program’s funding is managed by the State Department and USAID.
Mark Lopes, USAID’s deputy assistant administrator for Latin America and the Caribbean, testified before Congress in 2013 that crime and violence prevention programs like those examined in the LAPOP report constitute “the heart of USAID’s work” in Central America.
Yet despite these programs’ centrality in USAID’s anti-crime and violence strategy, virtually no systematic evaluations of their efficacy had been carried out prior to USAID commissioning LAPOP’s 2014 study.
The Woodrow Wilson Center for International Scholars issued a separate report on CARSI shortly after LAPOP, which concluded that “many CARSI-funded programs suffer from significant weaknesses.”
However, the Wilson Center did cite the LAPOP study as evidence of “the demonstrable impact [of] USAID supported community-based crime and violence prevention programs,” noting that this was “the only [CARSI] program area where a robust and comprehensive impact evaluation has been conducted with baselines established before program implementation.”
Since those reports were published more than two years ago, no further publicly available, comprehensive impact studies of USAID’s CARSI programs have been carried out by either government agencies or independent organizations.
What LAPOP Found
LAPOP surveyed both the communities where USAID programs were implemented (the “treatment” communities) as well as the communities where they were not (the “control” communities) before, during and after implementation in order to determine whether the programs had an effect on citizens’ perceptions of security.
Researchers collected and analyzed information from thousands of quantitative surveys, hundreds of qualitative interviews, and dozens of focus groups in more than 120 neighborhoods over several years in order to arrive at their conclusions.
As stated above, LAPOP’s statistical analysis combined with an assessment of the qualitative evidence led the researchers to conclude that “in several key respects,” CARSI-funded USAID violence prevention programs in El Salvador, Honduras, Guatemala and Panama “have been a success.”
The report found that the USAID programs appeared to have had a positive impact on citizens’ perceptions of security in areas where they were implemented, compared to areas where they were not. Based on comparisons with evidence from the control areas, LAPOP concluded that residents of the treatment areas generally reported that security conditions either improved or deteriorated less than would be expected without the programs.
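The logic of comparing changes in treatment communities against changes in control communities is the standard difference-in-differences approach. A toy calculation with invented numbers (these are illustrative figures, not LAPOP's data or its actual model) shows the idea:

```python
# Toy difference-in-differences calculation with invented numbers.
# "Insecurity" is a hypothetical survey-based index; higher = worse.
# These figures are illustrative only -- they are not LAPOP's data.

# Mean insecurity index before and after the programs ran
treatment_before, treatment_after = 62.0, 58.0   # program communities
control_before, control_after = 55.0, 57.0       # comparison communities

# Change within each group over the study period
treatment_change = treatment_after - treatment_before   # -4.0
control_change = control_after - control_before         # +2.0

# The difference-in-differences estimate: how much better (or worse)
# the treated communities fared relative to the trend in the controls
did_estimate = treatment_change - control_change        # -6.0
print(f"Estimated effect on insecurity index: {did_estimate:+.1f}")
```

The key assumption, and the crux of the CEPR-LAPOP dispute, is that the control communities reveal what would have happened to the treatment communities absent the programs.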
CEPR’s Main Critiques
It is important to stress that CEPR does not maintain that the USAID programs are necessarily ineffective. Rather, the authors of CEPR’s original critique argue that LAPOP’s interpretation of the data “cannot support the conclusion that the areas subject to treatment in the CARSI programs showed better results than those areas that were not.”
Based on their own analysis of LAPOP’s data and methodology, CEPR found “major problems” with the original study, “namely, the nonrandomness of the selection of treatment versus control areas and how the differences in initial conditions, as well as differences in results between treatment and control areas, have been interpreted.”
According to CEPR, treatment communities where the USAID programs were implemented were “considerably different” from the control communities in that the former generally had higher crime rates than the latter prior to the treatment.
The CEPR authors write that this “suggests that the areas selected for treatment may not be sufficiently similar to the control areas to safely conclude that the interventions helped.”
The CEPR report posits that the seemingly positive effects of the USAID programs observed by LAPOP could instead be attributable to a “regression to the mean” among communities selected for treatment that were experiencing temporary spikes in crime.
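Regression to the mean can be demonstrated with a toy simulation (purely illustrative; this is not CEPR's analysis). If communities are selected for a program precisely because their crime rates spiked, they will tend to "improve" afterward even with no intervention at all, because part of the spike was random noise:

```python
import random

random.seed(0)

# Each community has a stable underlying crime level; each year's
# observed rate is that level plus random year-to-year noise.
true_level = [random.gauss(50, 5) for _ in range(1000)]
year1 = [t + random.gauss(0, 10) for t in true_level]
year2 = [t + random.gauss(0, 10) for t in true_level]

# Select for "treatment" the 100 communities that looked worst in
# year 1 -- exactly as a program targeting high-crime areas might.
selected = sorted(range(1000), key=lambda i: year1[i], reverse=True)[:100]

avg_before = sum(year1[i] for i in selected) / len(selected)
avg_after = sum(year2[i] for i in selected) / len(selected)

# With no intervention whatsoever, the selected communities "improve",
# because their year-1 readings were partly inflated by noise.
print(f"before: {avg_before:.1f}, after: {avg_after:.1f}")
```

This is why the non-random selection of treatment areas matters: an apparent post-program decline in crime is ambiguous when the areas were chosen for being unusually violent at baseline.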
The response from Nashville was swift. Within days, LAPOP published a reply to CEPR’s critique arguing that both organizations “came up with essentially the same results,” and that the main thrust of CEPR’s critique was “wrong” and “without substance.”
LAPOP said that “the treatment and control communities were indeed selected at random,” except in certain cases where the researchers employed statistical tools to compensate for the lack of randomness.
“In the case of Honduras, where our study was delayed by the 2009 coup, USAID had already selected the treatment communities by the time we were ready to begin, so random selection was no longer possible,” the LAPOP authors wrote, noting that they used a statistical technique “to select control communities that were as closely matched to the treatment communities as we could make them.”
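The article does not name the exact technique LAPOP used, but selecting the most similar available comparison units is commonly done with nearest-neighbor matching on baseline characteristics. A minimal sketch, with invented community names and features:

```python
# Sketch of nearest-neighbor matching on baseline characteristics.
# This is an assumption about the general approach, not LAPOP's actual
# method; all community names and feature values are invented.
# Features: (baseline crime index, population in thousands).
treatment = {"barrio_a": (72.0, 15.0), "barrio_b": (65.0, 9.0)}
candidates = {"col_1": (70.0, 14.0), "col_2": (40.0, 30.0),
              "col_3": (66.0, 10.0), "col_4": (55.0, 8.0)}

def distance(x, y):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

# For each treated community, pick the most similar unused candidate.
matches = {}
used = set()
for name, features in treatment.items():
    best = min((c for c in candidates if c not in used),
               key=lambda c: distance(features, candidates[c]))
    matches[name] = best
    used.add(best)

print(matches)  # each treated community paired with its closest control
```

Matching of this kind can make treatment and control areas comparable on observed characteristics, but, as CEPR's critique emphasizes, it cannot guarantee comparability on unobserved ones the way randomization does.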
The LAPOP response also notes that CEPR’s critique does not dispute the basic statistical model used in LAPOP’s original report. It does, however, state that the organizations “diverge on two issues: 1) the level of analysis; and 2) the standard by which to judge the improvements in the treatment area.”
Essentially, LAPOP argues that their own statistical model was aimed at evaluating the impact of the programs at the regional level, whereas CEPR’s model was aimed at determining the impact of USAID programs at the municipal level.
This accounts for part of the disparity in the two organizations’ assessments of the CARSI programs’ impact. But LAPOP also took issue with how CEPR measured the programs’ effects at the municipal level.
LAPOP argues that CEPR’s model made inaccurate assumptions about “normal” levels of crime and violence in certain communities that skewed how the programs’ effects were measured.
CEPR Doubles Down
Shortly after LAPOP issued its response in September 2016, one of the authors of CEPR’s critique, Laura Jung, wrote a blog post for CEPR about her experience visiting a USAID-supported youth outreach center in Honduras.
Jung reported that a USAID staff member had told her that the center “do[es] not have monitoring and evaluation,” but that they “hope to have some in the next couple of years or so.”
“Put simply, there is no data that supports the claims of State Department officials or USAID that the interventions being implemented in Honduras, or in the Northern Triangle in general, are having a positive (or any) effect,” Jung concluded.
CEPR doubled down several months later, in January of this year, in response to LAPOP’s defense of its original report.
“The problems with the LAPOP study that we identified still stand, as does the validity of our conclusion,” author David Rosnick wrote. “LAPOP’s study cannot support the conclusion that intervention caused the areas subject to treatment in the CARSI programs to improve relative to those areas where no intervention took place.”
Rosnick reiterated CEPR’s earlier critiques of the study, and explained in further detail that his analysis had concluded that “treated neighborhoods are observed to be made worse off in municipalities where treatment neighborhoods were particularly healthy pretreatment — relative to corresponding control neighborhoods. Where treatment and control neighborhoods were initially most similar, treatment appears to have no discernible effect whatsoever.”
InSight Crime asked USAID to comment on whether the agency agreed or disagreed with any elements of CEPR’s critique. In its written response, USAID stated that it is “aware” of the public debate between CEPR and LAPOP, but it did not directly address CEPR’s arguments.
The agency wrote that the LAPOP study “concluded that USAID’s Central America Regional Security Initiative (CARSI)-funded community-based violence prevention programs resulted in statistically significant reductions in crime victimization and increases in public perception of security in USAID treatment communities across the Northern Triangle countries and Panama, and we are using the information to improve upon and expand our programs.”
In addition, USAID noted that LAPOP’s “evaluation also has been critical in securing investments by Central American governments, private businesses, and local organizations to buy into, scale up, or replicate USAID’s community-based prevention activities.”
Why Does This Debate Matter?
CARSI is one of the most important security assistance packages that the United States provides to Central America. It is therefore crucial for “governments, private businesses, and local organizations” to understand the impacts of these programs when deciding whether or not to “buy into, scale up, or replicate USAID’s community-based prevention activities.”
USAID said in its statement to InSight Crime that the agency “takes its responsibility to the United States taxpayer seriously” and is “committed to accountability, transparency, and oversight of our programs.”
The agency noted that it uses “a full range of monitoring and evaluation tools, including survey data, performance indicators, analyses, studies, and external evaluations” that allow it “to establish baselines and track the pace and status of implementation, ensure that programs are meeting goals and delivering high-impact results, and provide the flexibility needed to accommodate new needs and realities.”
However, LAPOP’s study is virtually the only publicly accessible, comprehensive assessment of USAID crime and violence prevention programs in Central America.
Setting aside the question of whether or not the LAPOP report’s conclusions are accurate, what remains clear is that there is a paucity of information available for public debate about the impacts of CARSI-funded USAID initiatives.
InSight Crime spoke with authors of both the LAPOP and CEPR reports, who expressed similar sentiments with regard to the necessity of increased monitoring and evaluation of the kinds of programs at issue here.
“I think we agree that it is important to conduct regular studies of programs that are designed to address problems, in this case crime and violence,” said LAPOP director Liz Zechmeister. “Crime and violence are particularly pernicious problems in Central America. But they’re also global problems. So determining ways to bolster the resilience of communities and create social capital is important.”
Alex Main, one of the authors of CEPR’s original critique, described a frustration that the only comprehensive impact assessment of CARSI programs was commissioned by the very agency overseeing implementation of the initiatives.
“The ideal thing would be for there to be completely independent watchdogs … looking at aid programs,” he said. “Ideally, if there was a great deal more transparency around these aid programs … there are certainly groups — ours among others — that would have the possibility of looking more closely at these issues.”
In other words, the complex disputes about statistical methodology should not distract from the broad consensus that thorough monitoring and evaluation leads to better, more efficient implementation of security assistance programs.
The bottom line is that, for a variety of reasons, it is difficult to accurately evaluate these kinds of initiatives. But it is nonetheless vital that such evaluations be scrupulously carried out and subject to public debate.
Differing studies with differing approaches will obviously sometimes come to differing conclusions. But designing effective programs for countering crime and violence in the Americas is crucial. And public debates like the one between LAPOP and CEPR are essential to properly evaluating these efforts.