
Michael Kremer, Esther Duflo and Abhijit Banerjee, laureates of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, react after their Nobel lectures at Stockholm University in Stockholm, Sweden, December 8, 2019. Photo by TT News Agency/Christine Olsson via Reuters

How the economics Nobel laureates’ methods could help fight poverty in the U.S.

Editor’s note: This story was first published on Econofact.

The Nobel Memorial Prize in Economic Sciences awarded this year to Abhijit Banerjee, Esther Duflo, and Michael Kremer has focused attention on how the experiment-based approach they popularized in the field of development economics is helping to fight poverty in developing countries. But the use of randomized controlled trials (RCTs) is not limited to the developing world.

There is an ever-expanding list of applications in the United States, where these experimental methods are being used to tackle poverty and other social issues. Researchers are relying on RCTs to address questions such as how to improve the effectiveness of education, how to increase access to healthcare, and how to reduce encounters with the criminal justice system, among many others.


Here are the facts:

Randomized evaluations can rigorously measure causal impact.

Randomized evaluations seek to determine the impact of programs and policies by comparing the outcomes of a group that receives the program against a group that does not. The evaluations use random assignment to determine who is in each group, so that the groups do not differ systematically at the start of the program. This allows researchers to attribute any differences in outcomes between the groups to the program or intervention rather than to another cause (see diagram). When feasible and ethical, randomized evaluations can be the most rigorous way to determine whether a program caused an impact.

Despite the perceived challenges of using this methodology, there are many naturally arising opportunities for randomization: for example, when a program is oversubscribed, when a program is rolling out in phases, or when a program already administers a lottery to determine participation. Random assignment can greatly shift our understanding of a program's impact and challenge conventional wisdom.
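The logic of random assignment can be illustrated with a short simulation. This is a minimal sketch using made-up numbers, not any specific study: it assumes a hypothetical program whose true effect is +2 points on some outcome, flips a coin to assign each person to treatment or control, and recovers the effect from the difference in group means.

```python
import random
import statistics

random.seed(42)

# Synthetic population: each person has a baseline outcome drawn at
# random; the hypothetical program adds a true effect of +2 points.
TRUE_EFFECT = 2.0
people = [random.gauss(50, 10) for _ in range(10_000)]

# Random assignment: a coin flip decides who receives the program,
# so the two groups do not differ systematically at the start.
treatment, control = [], []
for baseline in people:
    if random.random() < 0.5:
        treatment.append(baseline + TRUE_EFFECT)  # outcome with the program
    else:
        control.append(baseline)                  # outcome without it

# Because only chance separated the groups, the difference in mean
# outcomes estimates the program's causal impact.
estimate = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated impact: {estimate:.2f}")  # close to the true effect of 2.0
```

With ten thousand simulated participants, the estimate lands near the true +2 effect; the same comparison made without random assignment could be thrown off by whatever determined who joined.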

Take company wellness programs. Companies collectively spend billions of dollars on workplace wellness programs with the goal of improving employee health and reducing health care costs. Prior observational (non-randomized) studies of workplace wellness programs tended to find large, positive returns on investment.

However, in two recent randomized evaluations researchers found that wellness programs had no impact on clinical measures of health, healthcare spending or utilization, or employment. The evidence from both of these randomized evaluations suggests that prior observational studies may have suffered from selection bias; for example, people who chose to participate in the wellness programs may have already been healthier than those who did not (and hence measures of better health relative to non-participants were not due to participation in the wellness program per se).
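Selection bias of this kind can be demonstrated with another small sketch. The numbers and the opt-in rule below are purely hypothetical: the simulated program has no true effect at all, yet an observational comparison still shows a gap because healthier employees are assumed to be more likely to join, while a randomized comparison does not.

```python
import random
import statistics

random.seed(7)

# Synthetic workforce: 'health' is a latent trait on a roughly 0-100
# scale. In this sketch the wellness program has NO true effect,
# mirroring the null results of the randomized evaluations.
employees = [random.gauss(50, 10) for _ in range(20_000)]

# Observational comparison: healthier employees are more likely to opt
# in (a hypothetical selection rule chosen purely for illustration).
joiners, non_joiners = [], []
for health in employees:
    if random.random() < min(max(health / 100, 0.0), 1.0):
        joiners.append(health)
    else:
        non_joiners.append(health)

# Joiners look healthier than non-joiners even though the program did
# nothing: this gap is pure selection bias.
obs_gap = statistics.mean(joiners) - statistics.mean(non_joiners)

# Randomized comparison: a coin flip ignores health, so the gap vanishes.
treated, untreated = [], []
for health in employees:
    (treated if random.random() < 0.5 else untreated).append(health)
rct_gap = statistics.mean(treated) - statistics.mean(untreated)

print(f"observational gap: {obs_gap:.2f}, randomized gap: {rct_gap:.2f}")
```

The observational gap is substantial while the randomized gap sits near zero, which is exactly how a naive before-the-fact comparison can credit a program with health gains it never produced.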

Randomized evaluations are being used to study problems across policy areas in the United States.

The Abdul Latif Jameel Poverty Action Lab (J-PAL), co-founded by Banerjee and Duflo, launched a North America regional office in 2013 and joined the movement spearheaded by organizations like MDRC, Abt Associates, and Mathematica to use this approach in the United States. Applications cover a wide range of areas. In the health sector, researchers have used randomized evaluations to generate convincing evidence on the effects of Medicaid expansion in Oregon.

In the field of education, the methodology has been used to identify a variety of effective (and ineffective) approaches to helping students transition to college. Randomized evaluations have also been used to produce compelling evidence related to labor markets, crime, housing and homelessness, the environment and energy, household finance, and more (see here).

Randomized evaluations can guide investment decisions towards programs that are most effective and can ultimately result in the scale-up of these programs.

For instance, the methodology provided robust evidence that SAGA Education’s individualized and intensive math tutoring program helped male students from disadvantaged backgrounds in Chicago perform much better in high school.

Researchers found that students who were randomly selected to receive one hour of tutoring daily as part of their regular school schedule learned almost two extra years of math in a single year. In addition, participants’ average national percentile rank on 9th and 10th grade math exams increased by more than 20 percent and GPAs increased by 0.58 points on a 4.0 scale.

Since the publication of these results, the program has scaled up to serve thousands of students in Chicago, New York City, and Washington DC.

Alternatively, randomized evaluations can also provide rigorous evidence on which programs are not effective, enabling policymakers to divert resources to more effective programs.

Many schools across the U.S. are distributing technology to their students in hopes of achieving better educational outcomes. One study, focused on students in grades 6-10 in California, assessed the impact on academic achievement of a program that provided students with home computers. The results showed that computer use and ownership increased, but there was no impact, positive or negative, on grades, standardized test scores, or attendance, among other outcomes.

While the program did increase student access to computers (which may be valuable on its own), the results from the randomized evaluation align with the global body of evidence that suggests technology alone is not a panacea for problems in education.

Researchers have also used randomized evaluations to improve the take-up of existing programs that benefit low-income families.

In the U.S., many individuals who are eligible for social and economic benefits do not claim them. In partnership with the Internal Revenue Service (IRS), researchers tested different ways to encourage take-up of the Earned Income Tax Credit (EITC) in California and found that repeated notifications containing simple, highly relevant information increased take-up of this benefit. The study involved more than 35,000 EITC-eligible taxpayers who all received a standard reminder notice from the IRS in the mail.

During the study, individuals were randomized into subgroups that received different types of follow-up mailings: for instance, some received information about the amount of benefits they could expect; others, about the cost of applying in terms of how long the forms would take to fill out; and some received mailings with messages intended to reduce any stigma associated with claiming the benefit. Simply receiving a reminder had an effect: among those who received a follow-up mailing, 22 percent claimed their EITC benefit, amounting to a total of $4 million.

Receiving both the standard IRS notification and the follow-up mailing resulted in a 32 percent increase in take-up compared to receiving just the standard IRS notification alone. The highest response rate came from those who received less complex forms and information about the potential benefit.

Based on this study, the IRS revised the EITC reminder notice for all eligible applicants in the U.S., including the estimated seven million who fail to claim their benefit each year, by simplifying the content and including potential benefit amounts.

Randomized evaluations can help us understand why a certain program or policy is effective.

Regardless of criminal offense severity, failure to appear (FTA) in court automatically results in an arrest warrant in many jurisdictions. In an effort to reduce FTA, researchers partnered with New York City to test whether a redesigned court summons form and text message reminders reduced FTA.

During the study, individuals were randomly assigned to receive the redesigned summons form and different types of text messages (e.g., consequence messages, court date reminders). By testing various message types, researchers determined that the most salient and effective messages were those outlining the consequences of failing to appear in court and prompting individuals to plan for the court date. The researchers estimated that the redesigned summons form, together with the most effective text messages, prevented about 20,800 arrest warrants in one year.

Do the results of one study apply in a different context?

Policy and program funders and developers often face the question of whether the results of a specific program will generalize, scale up, or apply to other contexts. Do different needs and demographics across jurisdictions prevent a single evidence-based program from effectively serving a range of constituencies?

However, by using a generalizability framework, policymakers can determine whether an evidence-based program is appropriate to implement in a new context. Such a framework helps identify causal linkages and context-specific factors responsible for the impact observed in a program or policy. Through this assessment, policymakers can determine whether the right preconditions exist to apply a similar program in another context.

This case study illustrates one effort to use such a generalizability framework, assessing whether a hypothetical community health worker intervention in Philadelphia could also be effective at an outpatient primary care center in rural Indiana.

What this means

Poverty is a pressing and complex problem in the United States. While many programs and policies seek to address poverty, we often do not know which are effective. Randomized evaluations, a particularly rigorous type of impact evaluation, can show us which programs and policies work and help shed light on the barriers faced by people experiencing poverty.

This article was co-written with J-PAL staff members Yijin Yang and Erin Graeber.

Support for Making Sen$e Provided By:

