Is digital feedback useful in impact evaluations? It depends.


Article by Lois Aryee and Sara Flanagan: “Rigorous impact evaluations are essential for determining program effectiveness. Yet they are often time-intensive and costly, and they may fail to provide the rapid feedback needed to inform real-time decision-making and the course corrections along the way that maximize programmatic impact. Capturing feedback that’s both quick and valuable can be a delicate balance.

With support from the Bill & Melinda Gates Foundation, we have been conducting an ongoing impact evaluation in Ghana, a country where smoking rates among adolescent girls are rising with alarming health implications, to assess a social marketing campaign’s effectiveness at changing girls’ behavior and reducing smoking prevalence. Although we’ve been taking a traditional approach to this impact evaluation, using a year-long, in-person panel survey, we were interested in digital feedback as a means of collecting more timely data on the program’s reach and impact. To do this, we explored several rapid digital feedback approaches, including social media, text message, and Interactive Voice Response (IVR) surveys, to determine their ability to provide quicker, more actionable insights into the girls’ awareness of, engagement with, and feelings about the campaign.

Digital channels seemed promising given our young, urban population of interest; however, collecting feedback this way comes with considerable trade-offs. Digital feedback poses risks to both equity and quality, potentially reducing the population we’re able to reach and the value of the information we’re able to gather. The truth is that context matters, and tailored approaches are critical when collecting feedback, just as they are when designing programs. Below are three lessons to consider when adopting digital feedback mechanisms into your impact evaluation design. 

Lesson 1: A high number of mobile connections does not mean the target population has access to mobile phones. ..

Lesson 2: High literacy rates and “official” languages do not mean most people are able to read and write easily in a particular language...

Lesson 3: Gathering data on taboo topics may benefit from a personal touch. …(More)”.

Why Funders Should Go Meta


Paper by Stuart Buck & Anna Harvey: “We don’t mean the former Facebook. Rather, philanthropies should prefer to fund meta-issues—i.e., research and evaluation, along with efforts to improve research quality. In many cases, doing so would be far more impactful than what they are doing now.

This is true at two levels.

First, suppose you want to support a certain cause, such as economic development in Africa or criminal justice reform in the US. You could spend millions or even billions on that cause.

But let’s go meta: a force multiplier would be funding high-quality research on what works on those issues. If you invest significantly in social and behavioral science research, you might find innumerable ways to improve on the status quo of donations.

Instead of only helping the existing nonprofits who seek to address economic development or criminal justice reform, you’d be helping to figure out what works and what doesn’t. The result could be a much better set of investments for all donors.

Perhaps some of your initial ideas end up not working, when exhaustively researched. At worst, that’s a temporary embarrassment, but it’s actually all for the better—now you and others know to avoid wasting more money on those ideas. Perhaps some of your favored policies are indeed good ideas (e.g., vaccination), but don’t have anywhere near enough take-up by the affected populations. Social and behavioral science research (as in the Social Science Research Council’s Mercury Project) could help find cost-effective ways to solve that problem…(More)”.

Unlocking the Potential of Open 990 Data


Article by Cinthia Schuman Ottinger & Jeff Williams: “As the movement to expand public use of nonprofit data collected by the Internal Revenue Service advances, it’s a good time to review how far the social sector has come and how much work remains to reach the full potential of this treasure trove… Organizations have employed open Form 990 data in numerous ways, including to:

  • Create new tools for donors. For instance, the Nonprofit Aid Visualizer, a partnership between Candid and Vanguard Charitable, uses open 990 data to find communities vulnerable to COVID-19 and to help address both their immediate needs and long-term recovery. Another tool, the COVID-19 Urgent Service Provider Support Tool, developed by the consulting firm BCT Partners, uses 990 data to direct donors to service providers that are close to communities most affected by COVID-19.
  • More efficiently prosecute charitable fraud. This includes a campaign by the New York Attorney General’s Office that recovered $1.7 million from sham charities and redirected funds to legitimate groups.
  • Generate groundbreaking findings on fundraising, volunteers, equity, and management. A researcher at Texas Tech University, for example, explored more than a million e-filed 990s to overturn long-held assumptions about the role of cash in fundraising. He found that when nonprofits encourage noncash gifts as opposed to only cash contributions, financial contributions to those organizations increase over time.
  • Shed light on harmful practices that hurt the poor. A large-scale investigative analysis of nonprofit hospitals’ tax forms revealed that 45 percent of them sent a total of $2.7 billion in medical bills to patients whose incomes were likely low enough to qualify for free or discounted care. When this practice was publicly exposed, some hospitals reevaluated their practices and erased unpaid bills for qualifying patients. The expense of mining data like this previously made such research next to impossible.
  • Help donors make more informed giving decisions. In hopes of maximizing contributions to Ukrainian relief efforts, a record number of donors are turning to resources like Charity Navigator, which can now use open Form 990 data to evaluate and rate a large number of charities based on finances, governance, and other factors. At the same time, donors informed by open 990 data can seek more accountability from the organizations they support. For example, anti-corruption researchers scouring open 990 data and other records uncovered donations by Russian oligarchs aligned with President Putin. This pressured US nonprofits that accepted money from the oligarchs to disavow this funding…(More)”.

The New Moral Mathematics


Book Review by Kieran Setiya: “Space is big,” wrote Douglas Adams in The Hitchhiker’s Guide to the Galaxy (1979). “You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”

Time is big, too—even if we just think on the timescale of a species. We’ve been around for approximately 300,000 years. There are now about 8 billion of us, roughly 7 percent of all humans who have ever lived. You may think that’s a lot, but it’s just peanuts to the future. If we survive for another million years—the longevity of a typical mammalian species—at even a tenth of our current population, there will be 8 trillion more of us. We’ll be outnumbered by future people on the scale of a thousand to one.
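The back-of-envelope arithmetic behind that 8 trillion figure is easy to reproduce. Here is a minimal sketch; the lifespan assumption is illustrative, not taken from the review or the book:

```python
# Rough reproduction of the "8 trillion future people" figure.
# Assumptions (illustrative): a stable future population of 0.8 billion
# (a tenth of today's 8 billion), an average lifespan of about 100 years,
# and survival for another 1,000,000 years.
current_population = 8e9
future_population = current_population / 10      # people alive at any one time
lifespan_years = 100
duration_years = 1_000_000

lifetimes = duration_years / lifespan_years      # ~10,000 successive lifetimes
future_people = future_population * lifetimes    # ~8e12, i.e., 8 trillion
ratio = future_people / current_population       # ~1,000 future people per person today
print(f"{future_people:.0e} future people; ratio to present: {ratio:.0f}:1")
```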

What we do now affects those future people in dramatic ways: whether they will exist at all and in what numbers; what values they embrace; what sort of planet they inherit; what sorts of lives they lead. It’s as if we’re trapped on a tiny island while our actions determine the habitability of a vast continent and the life prospects of the many who may, or may not, inhabit it. What an awful responsibility.

This is the perspective of the “longtermist,” for whom the history of human life so far stands to the future of humanity as a trip to the chemist’s stands to a mission to Mars.

Oxford philosophers William MacAskill and Toby Ord, both affiliated with the university’s Future of Humanity Institute, coined the word “longtermism” five years ago. Their outlook draws on utilitarian thinking about morality. According to utilitarianism—a moral theory developed by Jeremy Bentham and John Stuart Mill in the nineteenth century—we are morally required to maximize expected aggregate well-being, adding points for every moment of happiness, subtracting points for suffering, and discounting for probability. When you do this, you find that tiny chances of extinction swamp the moral mathematics. If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction—a one in a million chance of saving at least 8 trillion lives—you should do the latter, allowing a million people to die.
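The trade-off in that example comes down to a one-line expected-value calculation, sketched below using the numbers the review gives (0.0001 percent is a one-in-a-million chance, and 8 trillion is the future-population figure above):

```python
# Expected-value comparison behind the extinction-risk example.
p_reduction = 0.0001 / 100           # 0.0001 percent = one-in-a-million probability
future_lives_at_stake = 8e12         # at least 8 trillion future lives
lives_saved_today = 1e6              # the alternative: save a million lives now

expected_future_lives_saved = p_reduction * future_lives_at_stake   # 8 million
print(expected_future_lives_saved > lives_saved_today)              # True: the sum favors the latter
```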

Now, as many have noted since its origin, utilitarianism is a radically counterintuitive moral view. It tells us that we cannot give more weight to our own interests or the interests of those we love than the interests of perfect strangers. We must sacrifice everything for the greater good. Worse, it tells us that we should do so by any effective means: if we can shave 0.0001 percent off the probability of human extinction by killing a million people, we should—so long as there are no other adverse effects.

But even if you think we are allowed to prioritize ourselves and those we love, and not allowed to violate the rights of some in order to help others, shouldn’t you still care about the fate of strangers, even those who do not yet exist? The moral mathematics of aggregate well-being may not be the whole of ethics, but isn’t it a vital part? It belongs to the domain of morality we call “altruism” or “charity.” When we ask what we should do to benefit others, we can’t ignore the disquieting fact that the others who occupy the future may vastly outnumber those who occupy the present, and that their very existence depends on us.

From this point of view, it’s an urgent question how what we do today will affect the further future—urgent especially when it comes to what Nick Bostrom, the philosopher who directs the Future of Humanity Institute, calls the “existential risk” of human extinction. This is the question MacAskill takes up in his new book, What We Owe the Future, a densely researched but surprisingly light read that ranges from omnicidal pandemics to our new AI overlords without ever becoming bleak…(More)”.

What Works? Developing a global evidence base for public engagement


Report by Reema Patel and Stephen Yeo: “…the Wellcome Trust commissioned OTT Consulting to recommend the best approach for enabling public engagement communities to share and gather evidence on public engagement practice globally, and in particular to assess the suitability of an approach adapted from the UK ‘What Works Centres’. This report is the output from that commission. It draws from a desk-based literature review, workshops in India, Peru and the UK, and a series of stakeholder interviews with international organisations.

The key themes that emerged from stakeholder interviews and workshops were that, in order for evidence about public engagement to help inform and shape public engagement practice, and for public engagement to be used and deployed effectively, there has to be an approach that can: understand the audiences, broaden out how ‘evidence’ is understood and generated, think strategically about how evidence affects and informs practice, and understand the complexity of the system dynamics within which public engagement (and evidence about public engagement) operates….(More)”.

Data in Collective Impact: Focusing on What Matters


Article by Justin Piff: “One of the five conditions of collective impact, “shared measurement systems,” calls upon initiatives to identify and share key metrics of success that align partners toward a common vision. While the premise that data should guide shared decision-making is not unique to collective impact, its articulation 10 years ago as a necessary condition for collective impact catalyzed a focus on data use across the social sector. In the original article on collective impact in Stanford Social Innovation Review, the authors describe the benefits of using consistent metrics to identify patterns, make comparisons, promote learning, and hold actors accountable for success. While this vision for data collection remains relevant today, the field has developed a more nuanced understanding of how to make it a reality….

Here are four lessons from our work to help collective impact initiatives and their funders use data more effectively for social change.

1. Prioritize the Learning, Not the Data System

Those of us who are “data people” have espoused the benefits of shared data systems and common metrics too many times to recount. But a shared measurement system is only a means to an end, not an end in itself. Too often, new collective impact initiatives focus on creating the mythical, all-knowing data system—spending weeks, months, and even years researching or developing the perfect software that captures, aggregates, and computes data from multiple sectors. They let the perfect become the enemy of the good, as the pursuit of perfect data and technical precision inhibits meaningful action. And communities pay the price.

Using data to solve complex social problems requires more than a technical solution. Many communities in the US have more data than they know what to do with, yet they rarely spend time thinking about the data they actually need. Before building a data system, partners must focus on how they hope to use data in their work and identify the sources and types of data that can help them achieve their goals. Once those data are identified and collected, partners, residents, students, and others can work together to develop a shared understanding of what the data mean and move forward. In Connecticut, the Hartford Data Collaborative helps community agencies and leaders do just this. For example, it has matched programmatic data against Hartford Public Schools data and National Student Clearinghouse data to get a clear picture of postsecondary enrollment patterns across the community. The data also capture services provided to residents across multiple agencies and can be disaggregated by gender, race, and ethnicity to identify and address service gaps….(More)”.
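As a purely hypothetical illustration of the linkage-and-disaggregation pattern the Hartford example describes (the column names and values below are invented for illustration, not the collaborative’s actual data or pipeline), a sketch in pandas might look like this:

```python
# Hypothetical sketch: link program records to enrollment records on a shared
# ID, then disaggregate outcomes by demographic group to surface service gaps.
import pandas as pd

program_services = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "race_ethnicity": ["Black", "Latino", "White", "Black"],
    "gender": ["F", "M", "F", "M"],
    "received_college_counseling": [True, True, False, False],
})
clearinghouse = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "enrolled_postsecondary": [True, False, True, False],
})

# Match the two sources on the shared identifier.
linked = program_services.merge(clearinghouse, on="student_id", how="left")

# Enrollment rates by demographic group and by whether a service was received.
gaps = (
    linked.groupby(["race_ethnicity", "received_college_counseling"])
          ["enrolled_postsecondary"].mean()
)
print(gaps)
```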

Arts Data in the Public Sector: Strategies for local arts agencies


Report by Bloomberg Associates: “Cities are increasingly using data to help shape policy and identify service gaps, but data about arts and culture is often met with skepticism. Local arts agencies, the city and county entities at the forefront of understanding and serving their local creative communities, often face difficulties in identifying meaningful metrics that capture quality as well as quantity in this unique field. With the Covid-19 pandemic and intensifying demand for equity, the desire for reliable, longitudinal information will only increase in the coming years as municipalities with severely limited resources face critical decisions in their effort toward recovery.

So how can arts-minded cities leverage data to better serve grantees, promote equity in service delivery, and demonstrate the impact of arts and culture across a range of significant policy priorities, among other ambitions?

Produced by our Cultural Assets Management team, Arts Data in the Public Sector highlights the data practices of fifteen local arts agencies across the U.S. to capture a meaningful cross-section of constituencies, resources, and strategies. Through best practices and case studies, the Guide offers useful insights and practical resources that can assist and inspire local government arts funders and advocates as they work to establish more equitable and inclusive practices and to affirm the importance of arts and culture as a public service well into the future…(More)”.

Evidence-Based Policymaking: What Human Service Agencies Can Learn from Implementation Science and Integrated Data Systems


Paper by Sharon Zanti & M. Lori Thomas: “The evidence-based policymaking movement compels government leaders and agencies to rely on the best available research evidence to inform policy and program decisions, yet how to do this effectively remains a challenge. This paper demonstrates how the core concepts from two emerging fields—Implementation Science (IS) and Integrated Data Systems (IDS)—can help human service agencies and their partners realize the aims of the evidence-based policymaking movement. An IS lens can help agencies address the role of context when implementing evidence-based practices, complement other quality and process improvement efforts, simultaneously study implementation and effectiveness outcomes, and guide de-implementation of ineffective policies. The IDS approach offers governance frameworks to support ethical and legal data use, provides high-quality administrative data for in-house analyses, and allows for more time-sensitive analyses of pressing agency needs. Ultimately, IS and IDS can support human service agencies in more efficiently using government resources to deliver the best available programs and policies to the communities they serve. Although this paper focuses on examples within the United States context, key concepts and guidance are intended to be broadly applicable across geographies, given that IS, IDS, and the evidence-based policymaking movement are globally relevant….(More)”.

Roadmap to social impact: Your step-by-step guide to planning, measuring and communicating social impact


Roadmap developed by Ioana Ramia, Abigail Powell, Katrina Stratton, Claire Stokes, Ariella Meltzer, and Kristy Muir: “…is a step-by-step guide to support you and your organisation through the process of outcomes measurement and evaluation.

While it’s not the silver bullet for outcomes measurement or impact assessment, the Roadmap provides you with eight steps to understand the context in which you operate, who you engage with and the social issue you are addressing, how you address this social issue, what the intended changes are, how and when to measure those changes, and how to communicate and use your findings to further improve your work and social impact.

It introduces some established techniques for data collection and analysis, but it is not a guide to research methods. A list of resources is also provided at the end of the guide, including tools for stakeholder engagement, survey development, interview questionnaires, and data analysis.

The Roadmap is for everyone working towards the creation of positive social impact in Australia who wants to measure the change they are making for individuals, organisations and communities….(More)”.

Philanthropy Can Help Communities Weed Out Inequity in Automated Decision Making Tools


Article by Chris Kingsley and Stephen Plank: “Two very different stories illustrate the impact of sophisticated decision-making tools on individuals and communities. In one, the Los Angeles Police Department publicly abandoned a program that used data to target violent offenders after residents in some neighborhoods were stopped by police as many as 30 times per week. In the other, New York City deployed data to root out landlords who discriminated against tenants using housing vouchers.

The second story shows the potential of automated data tools to promote social good — even as the first illustrates their potential for great harm.

Tools like these — typically described broadly as artificial intelligence or somewhat more narrowly as predictive analytics, which incorporates more human decision making in the data collection process — increasingly influence and automate decisions that affect people’s lives. This includes which families are investigated by child protective services, where police deploy, whether loan officers extend credit, and which job applications a hiring manager receives.

How these tools are built, used, and governed will help shape the opportunities of everyday citizens, for good or ill.

Civil-rights advocates are right to worry about the harm such technology can do by hardwiring bias into decision making. At the Annie E. Casey Foundation, where we fund and support data-focused efforts, we consulted with civil-rights groups, data scientists, government leaders, and family advocates to learn more about what needs to be done to weed out bias and inequities in automated decision-making tools — and recently produced a report about how to harness their potential to promote equity and social good.

Foundations and nonprofit organizations can play vital roles in ensuring equitable use of A.I. and other data technology. Here are four areas in which philanthropy can make a difference:

Support the development and use of transparent data tools. The public has a right to know how A.I. is being used to influence policy decisions, including whether those tools were independently validated and who is responsible for addressing concerns about how they work. Grant makers should avoid supporting private algorithms whose design and performance are shielded by trade-secrecy claims. Despite calls from advocates, some companies have declined to disclose details that would allow the public to assess their fairness….(More)”