What Governments Can Learn From Airbnb And the Sharing Economy


 in Fortune: “…Despite some regulators’ fears, the sharing economy may not result in the decline of regulation but rather in its opposite, providing a basis upon which society can develop more rational, ethical, and participatory models of regulation. But what regulation looks like, as well as who actually creates and enforces it, is also bound to change.

There are three emerging models – peer regulation, self-regulatory organizations, and data-driven delegation – that promise a regulatory future for the sharing economy best aligned with society’s interests. In the adapted book excerpt that follows, I explain how the third of these approaches, of delegating enforcement of regulations to companies that store critical data on consumers, can help mitigate some of the biases Airbnb guests may face, and why this is a superior alternative to the “open data” approach of transferring consumer information to cities and state regulators.

Consider a different problem — of collecting hotel occupancy taxes from hundreds of thousands of Airbnb hosts rather than from a handful of corporate hotel chains. The delegation of tax collection to Airbnb, something a growing number of cities are experimenting with, has a number of advantages. It is likely to yield higher tax revenues and greater compliance than a system where hosts are required to register directly with the government, something occasional hosts seem reluctant to do. It also sidesteps privacy concerns resulting from mandates that digital platforms like Airbnb turn over detailed user data to the government. There is also a significant opportunity for the platform to build credibility as it starts to take on quasi-governmental roles like this.

There is yet another advantage, and the one I believe will be the most significant in the long run. It asks a platform to leverage its data to ensure compliance with a set of laws, shifting enforcement responsibility to the party that actually holds the relevant data. You might say that the task in question here — computing the tax owed, collecting it, and remitting it — is technologically trivial. True. But I like this structure because of the potential it represents. It could be a precursor for much more exciting delegated possibilities.
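The computation really is trivial, which is the point. A hypothetical sketch of platform-side collection follows; the 14.75% rate and the booking amounts are invented for illustration, not any city's actual terms.

```python
# A hypothetical sketch of platform-side occupancy-tax collection. The
# 14.75% rate and the booking amounts are invented for illustration.
TAX_RATE = 0.1475

def tax_owed(bookings):
    """Total occupancy tax owed on a list of booking amounts, in dollars."""
    return round(sum(amount * TAX_RATE for amount in bookings), 2)

bookings = [120.00, 85.50, 300.00]   # one host's bookings for the period
print(tax_owed(bookings))            # the platform remits this to the city
```

The platform runs this over every host's bookings and remits one aggregate payment, which is exactly what makes delegation attractive: the hosts never have to register individually.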

For a couple of decades now, companies of different kinds have been mining the large sets of “data trails” customers leave through their digital interactions, generating insights of business and social importance. One such effort we are all familiar with is credit card fraud detection. When an unusual pattern of activity is detected, you get a call from your bank’s security team, and sometimes your card is blocked temporarily. The enthusiasm of these digital security systems is sometimes a nuisance, but it stems from your credit card company using sophisticated machine learning techniques to identify patterns that prior experience has told it are associated with a stolen card. Detecting and blocking fraudulent activity swiftly saves billions of dollars in consumer and corporate funds.
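The core idea can be illustrated without any machine learning at all. In this toy sketch, the charge history and the three-standard-deviation threshold are invented; real systems learn far richer patterns from many more features.

```python
import statistics

# A toy illustration (history and threshold invented) of the core idea:
# flag activity that deviates sharply from a cardholder's usual pattern.
# Real systems learn far richer patterns from many more features.

def is_suspicious(history, new_amount, threshold=3.0):
    """Flag a charge whose amount is far outside the historical pattern."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_amount != mean
    return abs(new_amount - mean) / stdev > threshold

usual = [12.50, 9.99, 15.00, 11.25, 14.75]   # typical small daily charges
print(is_suspicious(usual, 13.00))    # consistent with past behavior
print(is_suspicious(usual, 950.00))   # pattern break: worth a review call
```

The production versions replace the z-score with learned models, but the shape of the decision is the same: compare new behavior against an individual's history and intervene on outliers.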

A more recent, visible example of the power of mining large data sets of customer interactions came in 2008, when Google engineers announced that they could predict flu outbreaks using data collected from Google searches, tracking the spread of flu in real time and providing information well ahead of that available through the Centers for Disease Control and Prevention’s (CDC) own tracking systems. The Google system’s performance deteriorated after a couple of years, but its impact on public perception of what might be possible using “big data” was immense.

It seems highly unlikely that such a system would have emerged if Google had been asked to hand over anonymized search data to the CDC. In fact, there would probably have been widespread public backlash on privacy grounds. Besides, this capability emerged organically within Google partly because the company has one of the highest concentrations of computer science and machine learning talent in the world.

Similar approaches hold great promise as a regulatory approach for sharing economy platforms. Consider the issue of discriminatory practices. There has long been anecdotal evidence that some yellow cabs in New York discriminate against some nonwhite passengers. There have been similar concerns that such behavior may start to manifest on ridesharing platforms and in other peer-to-peer markets for accommodation and labor services.

For example, a 2014 study by Benjamin Edelman and Michael Luca of Harvard suggested that African American hosts might have lower pricing power than white hosts on Airbnb. While the study did not conclusively establish that the difference is due to guests discriminating against African American hosts, a follow-up study suggested that guests with “distinctively African American names” were less likely to receive favorable responses for their requests to Airbnb hosts. This research raises a red flag about the need for vigilance as the lines between personal and professional blur.

One solution would be to apply machine-learning techniques to identify patterns associated with discriminatory behavior. No doubt, many platforms are already using such systems…(More)”
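A first step toward such a system need not be exotic. The sketch below is a minimal statistical test, not any platform's actual system, and every count in it is invented: it asks whether a gap in booking-acceptance rates between two guest groups is larger than chance would explain.

```python
from math import sqrt

# A minimal statistical sketch (all counts invented), not any platform's
# actual system: a two-proportion z-test for whether a gap in booking
# acceptance rates between two guest groups is larger than chance.

def acceptance_gap_zscore(accepted_a, total_a, accepted_b, total_b):
    """z-statistic for the difference between two acceptance rates."""
    p_a, p_b = accepted_a / total_a, accepted_b / total_b
    pooled = (accepted_a + accepted_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Group A: 470 of 1,000 requests accepted; group B: 420 of 1,000.
z = acceptance_gap_zscore(470, 1000, 420, 1000)
print(round(z, 2))   # |z| > 1.96 means the gap is unlikely to be chance
```

Machine-learning systems would go further, controlling for listing quality, price, and location, but this is the kind of aggregate pattern only the platform, with its data, is positioned to monitor.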

Power to the people: how cities can use digital technology to engage and empower citizens


Tom Saunders at NESTA: “You’re sitting in city hall one day and you decide it would be a good idea to engage residents in whatever it is you’re working on – next year’s budget, for example, or the redevelopment of a run-down shopping mall. How do you go about it?

In the past, you might have held resident meetings and exhibitions where people could view proposed designs or talk to city government employees. You can still do that today, but now there’s digital: apps, websites and social media. So you decide on a digital engagement strategy: you build a website or you run a social media campaign inviting feedback on your proposals. What happens next?

Two scenarios: 1) You get 50 responses, mostly from campaign groups and local political activists; or 2) you receive such a huge number of responses that you don’t know what to do with them. Besides which, you don’t have the power or budget to implement 90 per cent of the suggestions, nor do you have the time to tell people why their proposals will be ignored. The main outcome of your citizen engagement exercise seems to be that you have annoyed the very people you were trying to get buy-in from. What went wrong?

Four tips for digital engagement

With all the apps and platforms out there, it’s hard to make sense of what is going on in the world of digital tools for citizen engagement. It seems there are three distinct activities that digital tools enable: delivering council services online – say, applying for a parking permit; using citizen-generated data to optimise city government processes; and engaging citizens in democratic exercises. In Connected Councils, Nesta sets out what future models of online service delivery could look like. Here I want to focus on the ways that engaging citizens with digital technology can help city governments deliver services more efficiently and improve engagement in democratic processes.

  1. Resist the temptation to build an app…

  2. Think about what you want to engage citizens for…

Sometimes engagement is statutory: communities have to be shown new plans for their area. Beyond this, there are a number of activities that citizen engagement is useful for. When designing a citizen engagement exercise it may help to think which of the following you are trying to achieve (note: they aren’t mutually exclusive):

  • Better understanding of the facts

If you want to use digital technologies to collect more data about what is happening in your city, you can buy a large number of sensors and install them across the city to track everything from people movements to how full bins are. A cheaper and possibly more efficient way for cities to do this is to work with people to collect the data – making use of the smartphones that an increasing number of residents carry around with them. Prominent examples of this include flood mapping in Jakarta using geolocated tweets and pothole mapping in Boston using a mobile app.
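The mechanics of such crowd-sourced mapping are straightforward. A hypothetical sketch (the coordinates are invented) of bucketing geolocated citizen reports into coarse grid cells so that clusters become visible:

```python
from collections import Counter

# A hypothetical sketch (coordinates invented): bucket geolocated citizen
# reports, e.g. flood sightings or potholes, into coarse 0.1-degree cells
# so that officials can see where problems cluster.

def grid_cell(lat, lon):
    """Truncate a coordinate pair to a 0.1-degree grid bucket."""
    return (int(lat * 10), int(lon * 10))

# Invented smartphone reports near central Jakarta.
reports = [(-6.21, 106.85), (-6.22, 106.84), (-6.19, 106.82), (-6.21, 106.84)]
counts = Counter(grid_cell(lat, lon) for lat, lon in reports)
for cell, n in counts.most_common():
    print(cell, n)
```

The hard parts in practice are verification and coverage, not the aggregation itself, which is why residents' smartphones can substitute for a sensor network at a fraction of the cost.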

For developed world cities, the thought of outsourcing flood mapping to citizens might fill government employees with horror. But for cities in developing countries, these technologies present an opportunity, potentially, for them to leapfrog their peers – to reach a level of coverage now that would normally require decades of investment in infrastructure to achieve. This is currently a hypothetical situation: cities around the world are only just starting to pilot these ideas and technologies and it will take a number of years before we know how useful they are to city governments.

  • Generating better ideas and options

The examples above involve passive data collection. Moving beyond this to more active contributions, city governments can engage citizens to generate better ideas and options. There are numerous examples of this in urban planning – the use of Minecraft by the UN in Nairobi to collect and visualise ideas for the future development of the community, or the Carticipe platform in France, which residents can use to indicate changes they would like to see in their city on a map.

It’s all very well to create a digital suggestion box, but there is a lot of evidence that deliberation and debate lead to much better ideas. Platforms like Better Reykjavik include a debate function for any idea that is proposed. Based on the feedback, the person who submitted the idea can edit it before putting it to a public vote – and only if the proposal gets the required number of votes is it sent to the city council for debate.

  • Better decision making

As well as enabling better decision making by giving city government employees better data and better ideas, digital technologies can give the power to make decisions directly to citizens. This is best encapsulated by participatory budgeting, which involves allowing citizens to decide how a percentage of the city budget is spent. Participatory budgeting emerged in Brazil in the 1980s, but digital technologies help city governments reach a much larger audience. ‘Madame Mayor, I have an idea’ is a participatory budgeting process that lets citizens propose and vote on ideas for projects in Paris. Over 20,000 people have registered on the platform, and the pilot phase of the project received over 5,000 submissions.

  3. Remember that there’s a world beyond the internet…

  4. Pick the right question for the right crowd…

When we talk to city governments and local authorities, they express a number of fears about citizen engagement: fear of relying on the public for the delivery of critical services, fear of being drowned in feedback, and fear of not being inclusive – of only engaging with those who are online and motivated. Hopefully, thinking through the issues discussed above will help alleviate some of these fears and make city governments more enthusiastic about digital engagement…(More)

How Twitter gives scientists a window into human happiness and health


 at The Conversation: “Since its public launch 10 years ago, Twitter has been used as a social networking platform among friends, an instant messaging service for smartphone users and a promotional tool for corporations and politicians.

But it’s also been an invaluable source of data for researchers and scientists – like myself – who want to study how humans feel and function within complex social systems.

By analyzing tweets, we’ve been able to observe and collect data on the social interactions of millions of people “in the wild,” outside of controlled laboratory experiments.

It’s enabled us to develop tools for monitoring the collective emotions of large populations, find the happiest places in the United States and much more.

So how, exactly, did Twitter become such a unique resource for computational social scientists? And what has it allowed us to discover?

Twitter’s biggest gift to researchers

On July 15, 2006, Twittr (as it was then known) publicly launched as a “mobile service that helps groups of friends bounce random thoughts around with SMS.” The ability to send free 140-character group texts drove many early adopters (myself included) to use the platform.

With time, the number of users exploded: from 20 million in 2009 to 200 million in 2012 and 310 million today. Rather than communicating directly with friends, users would simply tell their followers how they felt, respond to news positively or negatively, or crack jokes.

For researchers, Twitter’s biggest gift has been the provision of large quantities of open data. Twitter was one of the first major social networks to provide data samples through something called Application Programming Interfaces (APIs), which enable researchers to query Twitter for specific types of tweets (e.g., tweets that contain certain words), as well as information on users.

This led to an explosion of research projects exploiting this data. Today, a Google Scholar search for “Twitter” produces six million hits, compared with five million for “Facebook.” The difference is especially striking given that Facebook has roughly five times as many users as Twitter (and is two years older).

Twitter’s generous data policy undoubtedly led to some excellent free publicity for the company, as interesting scientific studies got picked up by the mainstream media.

Studying happiness and health

With traditional census data slow and expensive to collect, open data feeds like Twitter have the potential to provide a real-time window to see changes in large populations.

The University of Vermont’s Computational Story Lab was founded in 2006 and studies problems across applied mathematics, sociology and physics. Since 2008, the Story Lab has collected billions of tweets through Twitter’s “Gardenhose” feed, an API that streams a random sample of 10 percent of all public tweets in real time.

I spent three years at the Computational Story Lab and was lucky to be a part of many interesting studies using this data. For example, we developed a hedonometer that measures the happiness of the Twittersphere in real time. By focusing on geolocated tweets sent from smartphones, we were able to map the happiest places in the United States. Perhaps unsurprisingly, we found Hawaii to be the happiest state and wine-growing Napa the happiest city for 2013.
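The hedonometer's core idea is simple to sketch: score a text by averaging per-word happiness ratings. The tiny lexicon below is invented for illustration; the real instrument uses thousands of human-rated words.

```python
# A minimal sketch of the hedonometer idea: score a text by averaging
# per-word happiness ratings. This tiny lexicon is invented; the real
# instrument uses thousands of human-rated words.
happiness = {
    "happy": 8.3, "love": 8.4, "beach": 7.0,
    "traffic": 3.1, "sad": 2.4, "rain": 4.0,
}

def tweet_happiness(text):
    """Average the happiness ratings of the rated words in a tweet."""
    scores = [happiness[w] for w in text.lower().split() if w in happiness]
    return sum(scores) / len(scores) if scores else None

print(tweet_happiness("love the beach"))      # (8.4 + 7.0) / 2 = 7.7
print(tweet_happiness("sad about traffic"))   # (2.4 + 3.1) / 2 = 2.75
```

Averaged over millions of geolocated tweets per day, even a measure this simple begins to track the collective mood of a city or state in real time.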

A map of 13 million geolocated U.S. tweets from 2013, colored by happiness, with red indicating happiness and blue indicating sadness. PLOS ONE, Author provided

These studies had deeper applications: Correlating Twitter word usage with demographics helped us understand underlying socioeconomic patterns in cities. For example, we could link word usage with health factors like obesity, so we built a lexicocalorimeter to measure the “caloric content” of social media posts. Tweets from a particular region that mentioned high-calorie foods increased the “caloric content” of that region, while tweets that mentioned exercise activities decreased our metric. We found that this simple measure correlates with other health and well-being metrics. In other words, tweets were able to give us a snapshot, at a specific moment in time, of the overall health of a city or region.

Using the richness of Twitter data, we’ve also been able to see people’s daily movement patterns in unprecedented detail. Understanding human mobility patterns, in turn, has the capacity to transform disease modeling, opening up the new field of digital epidemiology….(More)”

How technology disrupted the truth


 in The Guardian: “Social media has swallowed the news – threatening the funding of public-interest reporting and ushering in an era when everyone has their own facts. But the consequences go far beyond journalism…

When a fact begins to resemble whatever you feel is true, it becomes very difficult for anyone to tell the difference between facts that are true and “facts” that are not. The leave campaign was well aware of this – and took full advantage, safe in the knowledge that the Advertising Standards Authority has no power to police political claims. A few days after the vote, Arron Banks, Ukip’s largest donor and the main funder of the Leave.EU campaign, told the Guardian that his side knew all along that facts would not win the day. “It was taking an American-style media approach,” said Banks. “What they said early on was ‘Facts don’t work’, and that’s it. The remain campaign featured fact, fact, fact, fact, fact. It just doesn’t work. You have got to connect with people emotionally. It’s the Trump success.”

It was little surprise that some people were shocked after the result to discover that Brexit might have serious consequences and few of the promised benefits. When “facts don’t work” and voters don’t trust the media, everyone believes in their own “truth” – and the results, as we have just seen, can be devastating.

How did we end up here? And how do we fix it?

Twenty-five years after the first website went online, it is clear that we are living through a period of dizzying transition. For 500 years after Gutenberg, the dominant form of information was the printed page: knowledge was primarily delivered in a fixed format, one that encouraged readers to believe in stable and settled truths.

Now, we are caught in a series of confusing battles between opposing forces: between truth and falsehood, fact and rumour, kindness and cruelty; between the few and the many, the connected and the alienated; between the open platform of the web as its architects envisioned it and the gated enclosures of Facebook and other social networks; between an informed public and a misguided mob.

What is common to these struggles – and what makes their resolution an urgent matter – is that they all involve the diminishing status of truth. This does not mean that there are no truths. It simply means, as this year has made very clear, that we cannot agree on what those truths are, and when there is no consensus about the truth and no way to achieve it, chaos soon follows.

Increasingly, what counts as a fact is merely a view that someone feels to be true – and technology has made it very easy for these “facts” to circulate with a speed and reach that was unimaginable in the Gutenberg era (or even a decade ago). A dubious story about Cameron and a pig appears in a tabloid one morning, and by noon, it has flown around the world on social media and turned up in trusted news sources everywhere. This may seem like a small matter, but its consequences are enormous.

In the digital age, it is easier than ever to publish false information, which is quickly shared and taken to be true. “The Truth”, as Peter Chippindale and Chris Horrie wrote in Stick It Up Your Punter!, their history of the Sun newspaper, is a “bald statement which every newspaper prints at its peril”. There are usually several conflicting truths on any given subject, but in the era of the printing press, words on a page nailed things down, whether they turned out to be true or not. The information felt like the truth, at least until the next day brought another update or a correction, and we all shared a common set of facts.

This settled “truth” was usually handed down from above: an established truth, often fixed in place by an establishment. This arrangement was not without flaws: too much of the press often exhibited a bias towards the status quo and a deference to authority, and it was prohibitively difficult for ordinary people to challenge the power of the press. Now, people distrust much of what is presented as fact – particularly if the facts in question are uncomfortable, or out of sync with their own views – and while some of that distrust is misplaced, some of it is not.

False information spreads fastest in emergency situations, when news is breaking in real time. To pick one example among many, during the November 2015 Paris terror attacks, rumours quickly spread on social media that the Louvre and Pompidou Centre had been hit, and that François Hollande had suffered a stroke. Trusted news organisations are needed to debunk such tall tales.

Sometimes rumours like these spread out of panic, sometimes out of malice, and sometimes out of deliberate manipulation, in which a corporation or regime pays people to convey its message. Whatever the motive, falsehoods and facts now spread the same way, through what academics call an “information cascade”. As the legal scholar and online-harassment expert Danielle Citron describes it, “people forward on what others think, even if the information is false, misleading or incomplete, because they think they have learned something valuable.” This cycle repeats itself, and before you know it, the cascade has unstoppable momentum. You share a friend’s post on Facebook, perhaps to show kinship or agreement or that you’re “in the know”, and thus you increase the visibility of their post to others.
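A threshold model makes the cascade concrete. In this toy sketch (the network and threshold are invented), each person shares a story once enough of the people they follow have shared it, whether or not it is true.

```python
# A toy threshold model (network and threshold invented) of an
# information cascade: each person shares a story once enough of the
# people they follow have shared it, whether or not it is true.

def run_cascade(follows, seeds, threshold=2):
    """Spread a story until no new user crosses the sharing threshold."""
    shared = set(seeds)
    changed = True
    while changed:
        changed = False
        for user, followees in follows.items():
            if user not in shared and len(shared & set(followees)) >= threshold:
                shared.add(user)
                changed = True
    return shared

# Who follows whom (invented). "alice" and "bob" post the story first.
follows = {
    "carol": ["alice", "bob"],
    "dave": ["carol", "alice"],
    "erin": ["dave", "carol"],
    "frank": ["erin"],
}
print(sorted(run_cascade(follows, seeds={"alice", "bob"})))
```

Note that no one in the model ever checks whether the story is true: each share is justified only by the shares before it, which is exactly Citron's point.
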

Algorithms such as the one that powers Facebook’s news feed are designed to give us more of what they think we want – which means that the version of the world we encounter every day in our own personal stream has been invisibly curated to reinforce our pre-existing beliefs. When Eli Pariser, the co-founder of Upworthy, coined the term “filter bubble” in 2011, he was describing how the personalised web – in particular Google’s personalised search function, which means that no two people’s Google searches are the same – makes us less likely to be exposed to information that challenges us or broadens our worldview, and less likely to encounter facts that disprove false information that others have shared.

Pariser’s plea, at the time, was that those running social media platforms should ensure that “their algorithms prioritise countervailing views and news that’s important, not just the stuff that’s most popular or most self-validating”. But in less than five years, thanks to the incredible power of a few social platforms, the filter bubble that Pariser described has become much more extreme.

On the day after the EU referendum, in a Facebook post, the British internet activist and mySociety founder, Tom Steinberg, provided a vivid illustration of the power of the filter bubble – and the serious civic consequences for a world where information flows largely through social networks:

I am actively searching through Facebook for people celebrating the Brexit leave victory, but the filter bubble is SO strong, and extends SO far into things like Facebook’s custom search that I can’t find anyone who is happy *despite the fact that over half the country is clearly jubilant today* and despite the fact that I’m *actively* looking to hear what they are saying.

This echo-chamber problem is now SO severe and SO chronic that I can only beg any friends I have who actually work for Facebook and other major social media and technology to urgently tell their leaders that to not act on this problem now is tantamount to actively supporting and funding the tearing apart of the fabric of our societies … We’re getting countries where one half just doesn’t know anything at all about the other.

But asking technology companies to “do something” about the filter bubble presumes that this is a problem that can be easily fixed – rather than one baked into the very idea of social networks that are designed to give you what you and your friends want to see….(More)”

There aren’t any rules on how social scientists use private data. Here’s why we need them.


 at SSRC: “The politics of social science access to data are shifting rapidly in the United States as in other developed countries. It used to be that states were the most important source of data on their citizens, economy, and society. States needed to collect and aggregate large amounts of information for their own purposes. They gathered this directly—e.g., through censuses of individuals and firms—and also constructed relevant indicators. Sometimes state agencies helped to fund social science projects in data gathering, such as the National Science Foundation’s funding of the American National Election Survey over decades. While scholars such as James Scott and John Brewer disagreed about the benefits of state data gathering, they recognized the state’s primary role.

In this world, the politics of access to data were often the politics of engaging with the state. Sometimes the state was reluctant to provide information, either for ethical reasons (e.g. the privacy of its citizens) or self-interest. However, democratic states did typically provide access to standard statistical series and the like, and where they did not, scholars could bring pressure to bear on them. This led to well-understood rules about the common availability of standard data for many research questions and built the foundations for standard academic practices. It was relatively easy for scholars to criticize each other’s work when they were drawing on common sources. This had costs—scholars tended to ask the kinds of questions that readily available data allowed them to ask—but also significant benefits. In particular, it made research more easily reproducible.

We are now moving to a very different world. On the one hand, open data initiatives in government are making more data available than in the past (albeit often without much in the way of background resources or documentation). On the other, for many research purposes, large firms such as Google or Facebook (or even Apple) have much better data than the government. This new universe of private data is reshaping social science research in ways that are still poorly understood. Here are some of the issues that we need to think about:…(More)”

Democracy Does Not Cause Growth: The Importance of Endogeneity Arguments


IADB Working Paper: “This article challenges recent findings that democracy has sizable effects on economic growth. As extensive political science research indicates that economic turmoil is responsible for causing or facilitating many democratic transitions, the paper focuses on this endogeneity concern. Using a worldwide survey of 165 country-specific democracy experts conducted for this study, the paper separates democratic transitions into those occurring for reasons related to economic turmoil, here called endogenous, and those grounded in reasons more exogenous to economic growth. The behavior of economic growth following these more exogenous democratizations strongly indicates that democracy does not cause growth. Consequently, the common positive association between democracy and economic growth is driven by endogenous democratization episodes (i.e., it is due to faulty identification)…(More)”

Designing an Active, Healthier City


Meera Senthilingam in the New York Times: “Despite a firm reputation for being walkers, New Yorkers have an obesity epidemic on their hands. Lee Altman, a former employee of New York City’s Department of Design and Construction, explains it this way: “We did a very good job at designing physical activity out of our daily lives.”

According to the city’s health department, more than half of the city’s adult population is either overweight (34 percent) or obese (22 percent), and the convenience of their environment has contributed to this. “Everything is dependent on a car, elevator; you sit in front of a computer,” said Altman, “not moving around a lot.”

This is not just a New York phenomenon. Mass urbanization has caused populations the world over to reduce the amount of time they spend moving their bodies. But the root of the problem runs deep in a city’s infrastructure.

Safety, graffiti, proximity to a park, and even the appeal of stairwells all play roles in whether someone chooses to be active or not. But only recently have urban developers begun giving enough priority to these factors.

Planners in New York have now begun employing a method known as “active design” to solve the problem. The approach is part of a global movement to get urbanites onto their streets and enjoying their surroundings on foot, bike or public transport.

“We can impact public health and improve health outcomes through the way that we design,” said Altman, a former active design coordinator for New York City. She now lectures as an adjunct assistant professor in Columbia University’s urban design program.

“The communities that have the least access to well-maintained sidewalks and parks have the highest risk of obesity and chronic disease,” said Joanna Frank, executive director of the nonprofit Center for Active Design. Her work focuses on creating guidelines and reports so that developers and planners are aware, for example, that people have been “less likely to walk down streets, less likely to bike, if they didn’t feel safe, or if the infrastructure wasn’t complete, so you couldn’t get to your destination.”

Even adding items as straightforward as benches and lighting to a streetscape can greatly increase the likelihood of someone’s choosing to walk, she said.

This may seem obvious, but without evidence its importance could be overlooked. “We’ve now established that’s actually the case,” said Frank.

How can things change? According to Frank, four areas are critical: transportation, recreation, buildings and access to food….(More)”

Data as a Means, Not an End: A Brief Case Study


Tracie Neuhaus & Jarasa Kanok in the Stanford Social Innovation Review: “In 2014, City Year—the well-known national education nonprofit that leverages young adults in national service to help students and schools succeed—was outgrowing the methods it used for collecting, managing, and using performance data. As the organization established its strategy for long-term impact, leaders identified a business problem: The current system for data collection and use would need to evolve to address the more complex challenges the organization was undertaking. Staff throughout the organization were citing pain points one might expect, including onerous manual data collection and long lag times to get much-needed data and reports on student attendance, grades, and academic and social-emotional assessments. After digging deeper, leaders realized they couldn’t fix the organization’s challenges with technology or improved methods without first addressing more fundamental issues. They saw that City Year lacked a common “language” for the data it collected and used. Staff varied widely in their levels of data literacy, as did the scope of data-sharing agreements with the 27 urban school districts where City Year was working at the time. What’s more, its evaluation group had gradually become a default clearinghouse for a wide variety of service requests from across the organization that the group was neither designed nor staffed to address. The situation was much more complex than it appeared.

With significant technology roadmap decisions looming, City Year engaged with us to help it develop its data strategy. Together we came to realize that these symptoms were reflective of a single issue, one that exists in many organizations: City Year’s focus on data wasn’t targeted to address the very different kinds of decisions that each staff member—from the front office to the front lines—needed to make. …

Many of us in the social sector have probably seen elements of this dynamic. Many organizations create impact reports designed to satisfy external demands from donors, but these reports have little relevance to the operational or strategic choices the organizations face every day, much less address harder-to-measure, system-level outcomes. As a result, over time and in the face of constrained resources, measurement is relegated to a compliance activity, disconnected from identifying and collecting the information that directly enables individuals within the organization to drive impact. Gathering data becomes an end in itself, rather than a means of enabling ground-level work and learning how to improve the organization’s impact.

Overcoming this all-too-common “measurement drift” requires that we challenge the underlying orthodoxies that drive it and reorient measurement activities around one simple premise: Data should support better decision-making. This enables organizations to not only shed a significant burden of unproductive activity, but also drive themselves to new heights of performance.

In the case of City Year, leaders realized that to really take advantage of existing technology platforms, they needed a broader mindset shift….(More)”

Kids learn about anti-discrimination via online soccer game


Springwise: “As Euro 2016 captures the attention of soccer fanatics around the world, a new app is tapping into the popularity of the event and using it to bring about positive education. EduKicks is a new game for kids that teaches anti-discrimination through gaming and soccer.

Launched earlier this week, the multiplayer game focuses on personal, social, and health education for children aged 9 to 13. After downloading the app on their smartphone or tablet, users take turns spinning a wheel and drawing either a movement card or an education card. A movement card asks players to complete a soccer-related activity, such as tick-tocking the ball with the insides of their feet. An education card requires them to answer a question. For example, the app might ask “How many women working in the football industry have experienced sexism?” and users choose between 22 percent, 66 percent, or 51 percent. Topics cover racism, religious discrimination, sexism, homophobia, disability, and more. The aim is to use the momentum and popularity of football to make learning more engaging and enjoyable….(More)”

Data at the Speed of Life


Marc Gunther at The Chronicle of Philanthropy: “Can pregnant women in Zambia be persuaded to deliver their babies in hospitals or clinics rather than at home? How much are villagers in Cambodia willing to pay for a simple latrine? What qualities predict success for a small-scale entrepreneur who advises farmers?

Governments, foundations, and nonprofits that want to help the world’s poor regularly face questions like these. Answers are elusive. While an estimated $135 billion in government aid and another $15 billion in charitable giving flow annually to developing countries, surprisingly few projects benefit from rigorous evaluations. Those that do get scrutinized in academic studies often don’t see the results for years, long after the projects have ended.

IDinsight puts data-driven research on speed. Its goal is to produce useful, low-cost research results fast enough that nonprofits can use the findings to make midcourse corrections to their programs….

IDinsight calls this kind of research “decision-focused evaluation,” which sets it apart from traditional monitoring and evaluation (M&E) and academic research. M&E, experts say, is mostly about accountability and outputs — how many training sessions were held, how much food was distributed, and so on. Usually, it occurs after a program is complete. Academic studies are typically shaped by researchers’ desire to break new ground and publish on topics of broad interest. The IDinsight approach aims instead “for contemporaneous decision-making rather than for publication in the American Economic Review,” says Ruth Levine, who directs the global development program at the William and Flora Hewlett Foundation.

A decade ago, Ms. Levine and William Savedoff, a senior fellow at the Center for Global Development, wrote an influential paper entitled “When Will We Ever Learn? Improving Lives Through Impact Evaluation.” They lamented that an “absence of evidence” for the effectiveness of global development programs “not only wastes money but denies poor people crucial support to improve their lives.”

Since then, impact evaluation has come a “huge distance,” Ms. Levine says….

Actually, others are. Innovations for Poverty Action recently created the Goldilocks Initiative to do what it calls “right fit” evaluations leading to better policy and programs, according to Thoai Ngo, who leads the effort. Its first clients include GiveDirectly, which facilitates cash transfers to the extreme poor, and Splash, a water charity….

All this focus on data has generated pushback. Many nonprofits don’t have the resources to do rigorous research, according to Debra Allcock Tyler, chief executive at Directory of Social Change, a British charity that provides training, data, and other resources for social enterprises.

“A great deal of the time, data is pointless,” Allcock Tyler said last year at a London seminar on data and nonprofits. “Very often it is dangerous and can be used against us, and sometimes it takes away precious resources from other things that we might more usefully do.”

A bigger problem may be that the accumulation of knowledge does not necessarily lead to better policies or practices.

“People often trust their experience more than a systematic review,” says Ms. Levine of the Hewlett Foundation. IDinsight’s Esther Wang agrees. “A lot of our frustration is looking at the development world and asking why are we not accountable for the money that we are spending,” she says. “That’s a waste that none of us really feels is justifiable.”…(More)”