A recent issue of Wired contained an article that hit extremely close to home. It discussed how program outcomes can be statistically validated through simple A/B tests or randomized controlled trials. One of the experiments sought to determine whether providing free textbooks to schoolchildren in rural Kenya made a difference (a short code sketch of the study design follows the excerpt):
In 1993, after five years of grad school and low-wage postdoctoral research, Michael Kremer got a job as a professor of economics at MIT. With his new salary, he finally had enough money to fund a long-held desire: to return to Kenya’s Western Province, where he had lived for a year after college, teaching in a rural farming community. He wanted to see the place again, reconnect with his host family and other friends he’d made there.
When he arrived the next summer, he found out that one of those friends had begun working for an education nonprofit called ICS Africa. At the time, there was a campaign, spearheaded by the World Bank, to provide free textbooks throughout sub-Saharan Africa, on the assumption that this would boost test scores and keep children in school longer. ICS had tasked Kremer’s friend with identifying target schools for such a giveaway.
While chatting with his friend about this, Kremer began to wonder: How did ICS know the campaign would work? It made sense in theory—free textbooks should mean more kids read them, so more kids learn from them—but they had no evidence to back that up. On the spot, Kremer suggested a rigorous way to evaluate the program: Identify twice the number of qualifying schools as it had the money to support. Then randomly pick half of those schools to receive the textbooks, while the rest got none. By comparing outcomes between the two cohorts, they could gauge whether the textbooks were making a difference.
What Kremer was suggesting is a scientific technique that has long been considered the gold standard in medical research: the randomized controlled trial. At the time, though, such trials were used almost exclusively in medicine—and were conducted by large, well-funded institutions with the necessary infrastructure and staff to manage such an operation. A randomized controlled trial was certainly not the domain of a recent PhD, partnering with a tiny NGO, out in the chaos of the developing world.
But soon after Kremer returned to the US, he was startled to get a call from his friend. ICS was interested in pursuing his idea. Sensing a rare research opportunity, Kremer flew back to Kenya and set to work. By any measure it was a quixotic project. The farmers of western Kenya lived in poverty, exposed to drought, flood, famine, and disease. Lack of paved roads hampered travel; lack of phones impeded communication; lack of government records stymied data collection; lack of literate workers slowed student testing. For that matter, a lack of funds limited the scope. It was hardly an ideal laboratory for a multiyear controlled trial, and not exactly a prudent undertaking for a young professor with a publishing track record to build.
The study wound up taking four years, but eventually Kremer had a result: The free textbooks didn’t work. Standardized tests given to all students in the study showed no evidence of improvement on average. The disappointing conclusion launched ICS and Kremer on a quest to discover why the giveaway wasn’t helping students learn, and what programs might be a better investment.
As Kremer was realizing, the campaign for free textbooks was just one of countless development initiatives that spend money in a near-total absence of real-world data. Over the past 50 years, developed countries have spent something like $6.5 trillion on assistance to the developing world, most of those outlays guided by little more than macroeconomic theories, anecdotal evidence, and good intentions. But if it were possible to measure the effects of initiatives, governments and nonprofits could determine which programs actually made the biggest difference. Kremer began collaborating with other economists and NGOs in Kenya and India to test more strategies for bolstering health and education.
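For readers who want to see the mechanics, here is a minimal sketch in Python of the design Kremer proposed: randomly assign half of the qualifying schools to receive textbooks, then compare average test scores between the two cohorts. The school names, scores, and the Welch t-statistic below are my own illustrative choices – this is not Kremer's data or his actual analysis.

```python
import random
from math import sqrt
from statistics import mean, stdev

# Hypothetical post-program test scores for 10 candidate schools --
# illustrative numbers only, not data from the actual Kenya study.
school_scores = {f"school_{i}": s for i, s in enumerate(
    [61.2, 58.9, 63.4, 60.1, 59.7, 60.8, 59.5, 62.9, 60.4, 59.1])}

# Step 1: randomly assign half the qualifying schools to receive textbooks.
# (In a real trial, scores would be measured after the program ran.)
rng = random.Random(42)                      # fixed seed for reproducibility
schools = sorted(school_scores)
rng.shuffle(schools)
treatment, control = schools[:5], schools[5:]

# Step 2: compare average outcomes between the two cohorts.
t_scores = [school_scores[s] for s in treatment]
c_scores = [school_scores[s] for s in control]
diff = mean(t_scores) - mean(c_scores)

# Welch's t-statistic gives a rough sense of whether the gap could be noise.
se = sqrt(stdev(t_scores) ** 2 / len(t_scores)
          + stdev(c_scores) ** 2 / len(c_scores))
print(f"Difference in mean scores: {diff:+.2f}")
print(f"Welch t-statistic: {diff / se:.2f}")
```

A t-statistic near zero, as in this toy data, is the kind of null result Kremer's four-year study produced: no detectable average improvement from the textbooks.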
Were the results of the free textbook study as startling to you as they were to me? I wonder how many nonprofit missions – and the techniques and practices behind them – have been proven to produce statistically valid results. Are we sometimes just doing what feels good, or what we think will work, instead of what can truly make a long-term difference?
Measurable Outcomes Directly Influence Donor Retention
I personally believe the savvy major donor of today and tomorrow will appreciate, if not demand, such measurable outcomes. Just imagine the power of being able to approximate how much real impact each individual dollar has. Not only could we report that impact back to donors, but we could also show them what would be lost if they did not renew in the coming year. Talk about influencing donor retention!
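To make that concrete, here is a rough back-of-the-envelope sketch (hypothetical figures throughout – substitute your own organization's measured outcomes and audited program costs) of what per-dollar impact reporting might look like:

```python
# Hypothetical figures for illustration only -- plug in your own
# organization's measured outcomes and program costs.
program_cost = 50_000      # dollars spent on the program this year
students_reached = 400     # count from program records
avg_score_gain = 4.5       # measured points gained vs. a comparison group

cost_per_student = program_cost / students_reached
points_per_dollar = (students_reached * avg_score_gain) / program_cost

print(f"Cost per student served: ${cost_per_student:,.2f}")
print(f"Score points per dollar: {points_per_dollar:.3f}")

# Donor-retention framing: what a lapsed gift would cost the mission.
lapsed_gift = 1_000
print(f"A lapsed ${lapsed_gift:,} gift means roughly "
      f"{lapsed_gift / cost_per_student:.0f} fewer students served next year.")
```

Even this simple arithmetic turns a renewal ask from "please give again" into, in this hypothetical, "your gift kept eight students in the program last year."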
A Key Strategic Topic for Your Next Board Meeting
Now that your interest in truly valid and measurable results has been piqued, think about bringing your nonprofit board into the discussion. Most of them deal with hard data and measurable results every single day in their own vocations. Share with the board how your nonprofit measures its missional results, and invite them to poke holes in those measurements, just as they would for their own organizations. It might be the best discussion you ever have with your board! Several of your board members might just step up with major gifts once they realize what real impact they can make.
Just think what a difference those same board members will make in the next major gift prospect meeting if they are knowledgeable about outcomes and perhaps have made such a gift themselves!