Why This Company Is Crowdsourcing, Gamifying The World's Most Difficult Problems


FastCompany: “The biggest consultancy firms–the McKinseys and Janeses of the world–make many millions of dollars predicting the future and writing what-if reports for clients. This model is built on the idea that those companies know best–and that information and ideas should be handed down from on high.
But one consulting house, Wikistrat, is upending the model: Instead of using a stable of in-house analysts, the company crowdsources content and pays the crowd for its time. Wikistrat’s hundreds of analysts–primarily consultants, academics, journalists, and retired military personnel–are compensated for participating in what they call “crowdsourced simulations.” In other words, make money for brainstorming.

According to Joel Zamel, Wikistrat’s founder, approximately 850 experts in various fields rotate in and out of different simulations and project exercises for the company. While participating in a crowdsourced simulation, consultants are paid a flat fee plus performance bonuses based on a gamification engine in which experts compete to win extra cash. The company declined to reveal its fee scale, but as of 2011 bonus money appears to be in the $10,000 range.
Zamel characterizes the company’s clients as a mix of government agencies worldwide and multinational corporations. The simulations are semi-anonymous for players; consultants don’t know who their paper is being written for or who the end consumer is, but clients know which of Wikistrat’s consultants are participating in the brainstorming exercise. Once an exercise is over, full-time employees at Wikistrat take the discussions from the exercise and convert them into proper reports for clients.
“We’ve developed a quite significant crowd network and a lot of functionality into the platform,” Zamel tells Fast Company. “It uses a gamification engine we created that incentivizes analysts by ranking them at different levels for the work they do on the platform. They are immediately rewarded through the engine, and we also track granular changes made in real time. This allows us to track analyst activity and encourages them to put time and energy into Wiki analysis.” Zamel says projects typically run between three and four weeks, with between 50 and 100 analysts working on a project for generally between five and 12 hours per week. Most of the analysts, he says, view this as a side income on top of their regular work at day jobs but some do much more: Zamel cited one PhD candidate in Australia working 70 hours a week on one project instead of 10 to 15 hours.
Much of Wikistrat’s output is related to current events. Although Zamel says the bulk of their reports are written for clients and not available for public consumption, Wikistrat does run frequent public simulations as a way of attracting publicity and recruiting talent for the organization. Their most recent crowdsourced project is called Myanmar Moving Forward and runs from November 25 to December 9. According to Wikistrat, they are asking their “Strategic community to map out Myanmar’s current political risk factor and possible futures (positive, negative, or mixed) for the new democracy in 2015. The simulation is designed to explore the current social, political, economic, and geopolitical threats to stability–i.e. its political risk–and to determine where the country is heading in terms of its social, political, economic, and geopolitical future.”…

Phone Apps Help Government, Others Counter Violence Against Women


NextGov: “Smart and mobile phones have helped authorities solve crimes, from beatings that occurred during the London riots to the Boston Marathon bombing. A panel of experts gathered on Monday said the devices can also help reduce and combat rape and other gender-based violence.
Smartphone apps and text messaging services proliferated in India following a sharp rise in reported gang rapes, including the brutal 2012 rape and murder of a 23-year-old medical student in Delhi, according to panelists at the Wilson Center event on gender-based violence and innovative technologies.
The apps fall into four main categories, said Alex Dehgan, chief data scientist at the United States Agency for International Development: apps that aid sexual assault and domestic violence victims, apps that empower women to fight back against gender-based violence, apps focused on advocacy and apps that crowdsource and map cases of sexual assault.
The final category of apps is largely built on the Ushahidi platform, a crowdmapping tool originally developed in Kenya in 2008 and later used to track reports of missing people following the 2010 Haiti earthquake.
One of the apps, Safecity, offers real-time alerts about sexual assaults across India to help women identify unsafe areas.
Similar apps have been launched in Egypt and Syria, Dehgan said. In lower-tech countries the systems often operate using text messages rather than smartphone apps so they’re more widely accessible.
One of the greatest impediments to using mobile technology to reduce gender violence, said Christopher Burns, USAID’s team leader for mobile access, is that in many developing nations women often don’t have access to their own mobile or smartphones, and that rural areas in the U.S. and abroad often have limited service or broadband.
Burns suggested international policymakers should align plans for expanding broadband and mobile service with crowdsourced reports of gender violence.
“One suggestion for policy makers to focus on is to take a look at the crowd maps we’ve talked about today and see where there are greater incidences of gender-based violence and violence against women,” he said. “In all likelihood, those pockets probably don’t have the connectivity, don’t have the infrastructure [and] don’t have the capacity in place for survivors to benefit from those tools.”
One tool that’s been used in the U.S. is Circle of 6, an app for women on college campuses to automatically draw on friends when they think they’re in danger. The app allows women to pick six friends they can automatically text if they think they’re in a dangerous situation, asking them to call with an excuse for them to leave.
The app is designed to look like a game so it isn’t clear women are using their phones to seek help, said Nancy Schwartzman, executive director of Tech 4 Good, which developed the app.
Schwartzman has heard reports of gay men on college campuses using the app as well, she said. The military has been in contact with Tech 4 Good about developing a version of the app to combat sexual assault on military bases, she said.”

The Age of Democracy


Xavier Marquez at Abandoned Footnotes: “This is the age of democracy, ideologically speaking. As I noted in an earlier post, almost every state in the world mentions the word “democracy” or “democratic” in its constitutional documents today. But the public acknowledgment of the idea of democracy is not something that began just a few years ago; in fact, it goes back much further, all the way back to the nineteenth century in a surprising number of cases.
Here is a figure I’ve been wanting to make for a while that makes this point nicely (based on data graciously made available by the Comparative Constitutions Project). The figure shows all countries that have ever had some kind of identifiable constitutional document (broadly defined) that mentions the word “democracy” or “democratic” (in any context – new constitution, amendment, interim constitution, bill of rights, etc.), arranged from earliest to latest mention. Each symbol represents a “constitutional event” – a new constitution adopted, an amendment passed, a constitution suspended, etc. – and colored symbols indicate that the text associated with the constitutional event in question mentions the word “democracy” or “democratic”…
The earliest mentions of the word “democracy” or “democratic” in a constitutional document occurred in Switzerland and France in 1848, as far as I can tell.[1] Participatory Switzerland and revolutionary France look like obvious candidates for being the first countries to embrace the “democratic” self-description; yet the next set of countries to embrace this self-description (until the outbreak of WWI) might seem more surprising: they are all Latin American or Caribbean (Haiti), followed by countries in Eastern Europe (various bits and pieces of the Austro-Hungarian empire), Southern Europe (Portugal, Spain), Russia, and Cuba. Indeed, most “core” countries in the global system did not mention democracy in their constitutions until much later, if at all, despite many of them having long constitutional histories; even French constitutions after the fall of the Second Republic in 1851 did not mention “democracy” until after WWII. In other words, the idea of democracy as a value to be publicly affirmed seems to have caught on first not in the metropolis but in the periphery. Democracy is the post-imperial and post-revolutionary public value par excellence, asserted after national liberation (as in most of the countries that became independent after WWII) or revolutions against hated monarchs (e.g., Egypt 1956, Iran 1979, both of them the first mentions of democracy in these countries but not their first constitutions).

Today only 16 countries have never mentioned their “democratic” character in their constitutional documents (Australia, Brunei, Denmark, Japan, Jordan, Malaysia, Monaco, Nauru, Oman, Samoa, Saudi Arabia, Singapore, Tonga, the United Kingdom, the USA, and Vatican City).[2] And no country that has ever mentioned “democracy” in an earlier constitutional document fails to mention it in its current constitutional documents (though some countries in the 19th and early 20th centuries went back and forth – mentioning democracy in one constitution, not mentioning it in the next). Indeed, after WWII the first mention of democracy in constitutions tended to be contemporaneous with the first post-independence constitution of the country; and with time, even countries with old and settled constitutional traditions seem to be more and more likely to mention “democracy” or “democratic” in some form as amendments or bills of rights accumulate (e.g., Belgium in 2013, New Zealand in 1990, Canada in 1982, Finland in 1995). The probability of a new constitution mentioning “democracy” appears to be asymptotically approaching 1. To use the language of biology, the democratic “meme” has nearly achieved “fixation” in the population, despite short-term fluctuations, and despite the fact that there appears to be no particular correlation between a state calling itself democratic and actually being democratic, either today or in the past.[3]
Though the actual measured level of democracy around the world has trended upwards (with some ups and downs) over the last two centuries, I don’t think this is the reason why the idea of democracy has achieved near-universal recognition in public documents. Countries do not first become democratic and then call themselves democracies; if anything, most mentions of democracy seem to be rather aspirational, if not entirely cynical. (Though many constitutions that mention democracy were also produced by people who seem to have been genuinely committed to some such ideal, even if the regimes that eventually developed under these constitutions were not particularly democratic). What we see, instead, is a broad process in which earlier normative claims about the basis of authority – monarchical, imperial, etc. – get almost completely replaced, regardless of the country’s cultural context, by democratic claims, regardless of the latter’s effectiveness as an actual basis for authority or the existence of working mechanisms for participation or vertical accountability. (These democratic claims to authority also sometimes coexist in uneasy tension with other claims to authority based on divine revelation, ideological knowledge, or tradition, invented or otherwise; consider the Chinese constitution‘s claims about the “people’s democratic dictatorship” led by the CCP).
I thus suspect the conquest of ideological space by “democratic” language did not happen just because democratic claims to authority (especially in the absence of actual democracy) have proved more persuasive than other claims to authority. Rather, I think the same processes that resulted in the emergence of modern national communities – e.g. the rituals associated with nationalism, which tended to “sacralize” a particular kind of imagined community – led to the symbolic production of the nation not only as the proper object of government but also as its proper active agent (the people, actively ruling itself), regardless of whether or not “the people” had any ability to rule or even to exercise minimal control over the rulers.[4] There thus seems to have been a kind of co-evolution of symbols of nationality and symbols of democracy, helped along by the practice/ritual of drafting constitutions and approving them through plebiscites or other forms of mass politics, a ritual that already makes democratic assumptions about “social contracts.” The question is whether the symbolic politics of democracy eventually has any sort of impact on actual institutions. But more on this later….”

Government Digital Service: the best startup in Europe we can't invest in


Saul Klein in the Guardian: “Everyone is rightly excited about the wall of amazing tech-enabled startups being born in Europe and Israel, disrupting massive industries including media, marketing, fashion, retail, travel, finance and transportation. However, there’s one incredibly disruptive startup based in London that is going after one of the biggest markets of all, and is so opaque it is largely unknown in the world of business – and, much to my chagrin, it’s also impossible to invest in.
It’s not a private company, it wasn’t started by “conventional” tech entrepreneurs and the market (though huge) is decidedly unsexy.
Its name is the Government Digital Service (GDS) and it is disrupting the British public sector in an energetic, creative and effective way. In less than two years GDS has hired over 200 staff (including some of the UK’s top digital talent), shipped an award-winning service, and begun the long and arduous journey of completely revolutionising the way that 62 million citizens interact with more than 700 services from 24 government departments and their 331 agencies.
It’s a strange world we live in when the government is pioneering the way that large complex corporations reinvent themselves to not just massively reduce cost and complexity, but to deliver better and more responsive services to their customers and suppliers.
So what is it that GDS knows that every chairman and chief executive of a FTSE100 should know? Open innovation.
1. Open data
• Leads to radical and remarkable transparency like the amazing Transactions Explorer designed by Richard Sargeant and his team. I challenge any FTSE100 to deliver the same by December 2014, or even start to show basic public performance data – if not to the internet, at least to their shareholders and analysts.
• Leads to incredible and unpredictable innovation where public data is shared and brought together in new ways. In fact, the Data.gov.uk project is one of the world’s largest sources of public data, with over 9,000 data sets for anyone to use.
2. Open standards
• Deliver interoperability across devices and suppliers
• Provide freedom from lock-in to any one vendor
• Enable innovation from a level playing field of many companies, including cutting-edge startups
• The Standards Hub from the Cabinet Office is an example of how the government aims to achieve open standards
3. Cloud and open source software and services
• Use of open source, cloud and software-as-a-service solutions radically reduces cost, improves delivery and enables innovation
4. Open procurement
• In March 2011, the UK government set a target to award 25% of spend with third-party suppliers to SMEs by March 2015.”

Data Mining Reveals the Secret to Getting Good Answers


arXiv: “If you want a good answer, ask a decent question. That’s the startling conclusion to a study of online Q&As.

If you spend any time programming, you’ll probably have come across the question and answer site Stack Overflow. The site allows anybody to post a question related to programming and receive answers from the community. And it has been hugely successful. According to Alexa, the site is the 3rd most popular Q&A site in the world and 79th most popular website overall.
But this success has naturally led to a problem–the sheer number of questions and answers the site has to deal with. To help filter this information, users can rank both the questions and the answers, gaining a reputation for themselves as they contribute.
Nevertheless, Stack Overflow still struggles to weed out off-topic and irrelevant questions and answers. This requires considerable input from experienced moderators. So an interesting question is whether it is possible to automate the process of weeding out the less useful questions and answers as they are posted.
Today we get an answer of sorts thanks to the work of Yuan Yao at the State Key Laboratory for Novel Software Technology in China and a team of buddies who say they’ve developed an algorithm that does the job.
And they say their work reveals an interesting insight: if you want good answers, ask a decent question. That may sound like a truism, but these guys point out that there has been no evidence to support this insight, until now.
…But Yuan and co dug a little deeper. They looked at the relationship between well-received questions and their answers, and discovered that the two are strongly correlated.
A number of factors turn out to be important. These include the reputation of the person asking the question or answering it, the number of previous questions or answers they have posted, the popularity of their input in the recent past along with measurements like the length of the question and its title.
Put all this into a number cruncher and the system is able to predict the quality score of the question and its expected answers. That allows it to find the best questions and answers and indirectly the worst ones.
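As a rough illustration of the kind of number crunching involved, here is a minimal scoring sketch over the features the researchers name (reputation, posting history, recent popularity, length). The weights and the log-scaling below are purely hypothetical choices for the sake of the example; the actual model in the paper is learned from Stack Overflow data.

```python
import math

# A toy quality-score predictor over the features the paper identifies.
# The feature set mirrors the article; the weights are illustrative only.

def quality_score(reputation, prior_posts, recent_votes, body_len, title_len):
    """Combine asker/answerer signals into a single quality estimate."""
    # Log-scale the unbounded counts so no single feature dominates.
    features = [
        math.log1p(reputation),    # reputation of the asker/answerer
        math.log1p(prior_posts),   # number of previous questions/answers
        math.log1p(recent_votes),  # popularity of recent contributions
        math.log1p(body_len),      # length of the question body
        math.log1p(title_len),     # length of the title
    ]
    weights = [0.5, 0.2, 0.4, 0.1, 0.1]  # hypothetical; would be learned
    return sum(w * f for w, f in zip(weights, features))

# A high-reputation, active asker outscores a brand-new one.
good = quality_score(reputation=12000, prior_posts=300, recent_votes=45,
                     body_len=600, title_len=60)
poor = quality_score(reputation=5, prior_posts=1, recent_votes=0,
                     body_len=40, title_len=12)
assert good > poor
```

Ranking questions by such a score, rather than classifying them outright, is what lets the system surface the best questions and, indirectly, flag the worst ones.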
There are limitations to this approach, of course…In the meantime, users of Q&A sites can learn a significant lesson from this work. If you want good answers, first formulate a good question. That’s something that can take time and experience.
Perhaps the most interesting immediate application of this new work might be as a teaching tool to help with this learning process and to boost the quality of questions and answers in general.
See also: http://arxiv.org/abs/1311.6876: Want a Good Answer? Ask a Good Question First!”

Owning the city: New media and citizen engagement in urban design


Paper by Michiel de Lange and Martijn de Waal in First Monday: “In today’s cities our everyday lives are shaped by digital media technologies such as smart cards, surveillance cameras, quasi–intelligent systems, smartphones, social media, location–based services, wireless networks, and so on. These technologies are inextricably bound up with the city’s material form, social patterns, and mental experiences. As a consequence, the city has become a hybrid of the physical and the digital. This is perhaps most evident in the global north, although in emerging countries like Indonesia and China, mobile phones, wireless networks and CCTV cameras have also become a dominant feature of urban life (Castells, et al., 2004; Qiu, 2007, 2009; de Lange, 2010). What does this mean for urban life and culture? And what are the implications for urban design, a discipline that has hitherto largely been concerned with the city’s built form?
In this contribution we do three things. First we take a closer look at the notion of ‘smart cities’ often invoked in policy and design discourses about the role of new media in the city. In this vision, the city is mainly understood as a series of infrastructures that must be managed as efficiently as possible. However, critics note that these technological imaginaries of a personalized, efficient and friction–free urbanism ignore some of the basic tenets of what it means to live in cities (Crang and Graham, 2007).
Second, we want to fertilize the debates and controversies about smart cities by forwarding the notion of ‘ownership’ as a lens to zoom in on what we believe is the key question largely ignored in smart city visions: how to engage and empower citizens to act on complex collective urban problems? As is explained in more detail below, we use ‘ownership’ not to refer to an exclusive proprietorship but to an inclusive form of engagement, responsibility and stewardship. At stake is the issue how digital technologies shape the ways in which people in cities manage coexistence with strangers who are different and who often have conflicting interests, and at the same time form new collectives or publics around shared issues of concern (see, for instance, Jacobs, 1992; Graham and Marvin, 2001; Latour, 2005). ‘Ownership’ teases out a number of shifts that take place in the urban public domain characterized by tensions between individuals and collectives, between differences and similarities, and between conflict and collaboration.
Third, we discuss a number of ways in which the rise of urban media technologies affects the city’s built form. Much has been said and written about changing spatial patterns and social behaviors in the media city. Yet as the editors of this special issue note, less attention has been paid to the question how urban new media shape the built form. The notion of ownership allows us to figure the connection between technology and the city as more intricate than direct links of causality or correlation. Therefore, ownership in our view provides a starting point for urban design professionals and citizens to reconsider their own role in city making.
Questions about the role of digital media technologies in shaping the social fabric and built form of urban life are all the more urgent in the context of challenges posed by rapid urbanization, a worldwide financial crisis that hits particularly hard on the architectural sector, socio–cultural shifts in the relationship between professional and amateur, the status of expert knowledge, societies that face increasingly complex ‘wicked’ problems, and governments retreating from public services. When grounds are shifting, urban design professionals as well as citizens need to reconsider their own role in city making.”

Five Experiments in Democracy 2.0


Le Monde: “From November 23 to 27, in Strasbourg, participants in the World Forum for Democracy will examine participatory democracy initiatives at work on every continent. Here are a few examples. (See also the interview: “The Internet strengthens the power of civil society”)

  • IN FRANCE, VOTERS ENTER THE DIGITAL AGE

For the past three years, French democracy 2.0 initiatives have multiplied, with the goal of stimulating citizen participation in democratic institutions, whether local or national. Ahead of the municipal elections of March 2014, Questionnezvoselus.org invites Internet users to question the mayoral candidates in the 39 cities of metropolitan France with more than 100,000 inhabitants. The objective? To build trust between citizens and their elected officials through greater transparency, empowerment and accountability. The approach recalls that of Voxe.org: during the 2012 presidential election, this neutral and independent comparator of candidates’ platforms recorded one million visits. In addition, Laboxdesmunicipales.com offers voting-assistance tools, while Candidat-et-citoyens.fr gives candidates the chance to involve citizens in building their platforms.
For transparency advocates, the collective Democratieouverte.org proposes calling on elected officials to openly display their practices, and Regardscitoyens.org offers “simplified access to the workings of our democratic institutions based on public information”….

“Inform, debate and empower people to act” is the slogan of Puzzled by Policy (PBP), an Internet platform launched in October 2010 to help everyone better understand political decisions taken at the European level and thereby improve the quality of public debate….

  • IN PORTO ALEGRE, A WIKI LINKS RESIDENTS AND CITY OFFICIALS

Mapping the territory and identifying the problems the city’s residents face using a “wiki” system (that is, a website enriched by users’ contributions): such is the vocation of PortoAlegre.cc.
Designed to give visibility to the causes residents champion, the site is part of the “wikicity” platform (Wikicidade.cc), a concept founded on the method of collective intelligence and structured around four axes: a culture of citizenship, an ethic of attention, shared responsibility and civic engagement….

  • IN FINLAND, ANYONE CAN LEGISLATE ONLINE

Since March 2012, the Finnish Constitution has allowed any adult citizen to place legislative proposals on the parliamentary agenda. These proposals are examined by elected representatives provided they receive the support of 50,000 other Finns (about 1 percent of the population).
To optimize the use and impact of this citizen-participation mechanism, the NGO Open Ministry launched a platform in October 2012 to make it easier for everyone to get involved. Online participation, open workshops and round tables are among the techniques used to this end….

  • IN THE UNITED STATES, CROWDFUNDING REACHES PUBLIC PROJECTS

Citizinvestor.com invites American citizens to help fund public infrastructure. “Government never has enough money to fund all the projects and services citizens dream of,” observe Tony De Sisto and Jordan Tyler Raynor, the project’s co-founders, aware of the difficult choices made when allocating municipal budgets and of residents’ desire to have a voice in those choices….”

The Good Judgment Project: Harnessing the Wisdom of the Crowd to Forecast World Events


The Economist: “But then comes the challenge of generating real insight into forecasting accuracy. How can one compare forecasting ability?
The only reliable method is to conduct a forecasting tournament in which independent judges ask all participants to make the same forecasts in the same timeframes. And forecasts must be expressed numerically, so there can be no hiding behind vague verbiage. Words like “may” or “possible” can mean anything from probabilities as low as 0.001% to as high as 60% or 70%. But 80% always and only means 80%.
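Numeric forecasts are what make scoring possible in the first place. One standard accuracy metric for probabilistic forecasts, and the kind of measure tournaments like this rely on, is the Brier score; a minimal sketch (the example forecasts are made up for illustration):

```python
# Scoring numeric forecasts: the Brier score is the mean squared error
# between forecast probabilities and 0/1 outcomes (lower is better, 0 is
# a perfect score).

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 80% on events that happened and 20% on one that
# didn't scores far better than someone hedging at 50/50 on everything.
confident = brier_score([0.8, 0.8, 0.2], [1, 1, 0])  # ≈ 0.04
hedged = brier_score([0.5, 0.5, 0.5], [1, 1, 0])     # 0.25
assert confident < hedged
```

This is why “80% always and only means 80%”: once a forecast is a number, it can be held to account, while “may” or “possible” can never be scored.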
In the late 1980s one of us (Philip Tetlock) launched such a tournament. It involved 284 economists, political scientists, intelligence analysts and journalists and collected almost 28,000 predictions. The results were startling. The average expert did only slightly better than random guessing. Even more disconcerting, experts with the most inflated views of their own batting averages tended to attract the most media attention. Their more self-effacing colleagues, the ones we should be heeding, often don’t get on to our radar screens.
That project proved to be a pilot for a far more ambitious tournament currently sponsored by the Intelligence Advanced Research Projects Activity (IARPA), part of the American intelligence world. Over 5,000 forecasters have made more than 1m forecasts on more than 250 questions, from euro-zone exits to the Syrian civil war. Results are pouring in and they are revealing. We can discover who has better batting averages, not take it on faith; discover which methods of training promote accuracy, not just track the latest gurus and fads; and discover methods of distilling the wisdom of the crowd.
The big surprise has been the support for the unabashedly elitist “super-forecaster” hypothesis. The top 2% of forecasters in Year 1 showed that there is more than luck at play. If it were just luck, the “supers” would regress to the mean: yesterday’s champs would be today’s chumps. But they actually got better. When we randomly assigned “supers” into elite teams, they blew the lid off IARPA’s performance goals. They beat the unweighted average (wisdom-of-overall-crowd) by 65%; beat the best algorithms of four competitor institutions by 35-60%; and beat two prediction markets by 20-35%.
Over to you
To avoid slipping back to business as usual—believing we know things that we don’t—more tournaments in more fields are needed, and more forecasters. So we invite you, our readers, to join the 2014-15 round of the IARPA tournament. Current questions include: Will America and the EU reach a trade deal? Will Turkey get a new constitution? Will talks on North Korea’s nuclear programme resume? To volunteer, go to the tournament’s website at www.goodjudgmentproject.com. We predict with 80% confidence that at least 70% of you will enjoy it—and we are 90% confident that at least 50% of you will beat our dart-throwing chimps.”
See also https://web.archive.org/web/2013/http://www.iarpa.gov/Programs/ia/ACE/ace.html
 

6 Projects That Make Data More Accessible Win $100,000 Each From Gates


Chronicle of Philanthropy: “Six nonprofit projects that aim to combine multiple sets of data to help solve social problems have each won $100,000 grants from the Bill & Melinda Gates Foundation…The winners:
• Pushpa Aman Singh, who founded GuideStar India as an effort of the Civil Society Information Services India. GuideStar India is the most comprehensive database of India’s registered charities. It has profiles of more than 4,000 organizations, and Ms. Singh plans to expand that number and the types of information included.
• Development Initiatives, an international aid organization, to support its partnership with the Ugandan nonprofit Development Research and Training. Together, they are trying to help residents of two districts in Uganda identify a key problem the communities face and use existing data sets to build both online and offline tools to help tackle that challenge…
• H.V. Jagadish, at the University of Michigan, to develop a prototype that will merge sets of incompatible geographic data to make them comparable. Mr. Jagadish, a professor of electrical engineering and computer science, points to crime precincts and school districts as an example. “We want to understand the impact of education on crime, but the districts don’t quite overlap with the precincts,” he says. “This tool will address the lack of overlap.”
• Vijay Modi, at Columbia University, to work with government agencies and charities in Nigeria on a tool similar to Foursquare, the social network that allows people to share their location with friends. Mr. Modi, a mechanical-engineering professor and faculty member of the university’s Earth Institute, envisions a tool that will help people find important resources more easily…
• Gisli Olafsson and his team at NetHope, a network of aid organizations. The group is building a tool to help humanitarian charities share their data more widely and in real time—potentially saving more lives during disasters…
• Development Gateway, a nonprofit that assists international development charities with technology, and GroundTruth Initiative, a nonprofit that helps residents of communities learn mapping and media skills. The two groups want to give people living in the slums of Nairobi, Kenya, more detailed information about local schools…”
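Mr. Jagadish’s precinct/school-district example hints at a standard technique for reconciling mismatched geographies: area-weighted interpolation, in which counts from one set of zones are reapportioned to another in proportion to the area of overlap. The sketch below is a deliberately simplified one-dimensional version (intervals standing in for polygons) and assumes counts are spread uniformly within each source zone; it is an illustration of the general idea, not his prototype.

```python
# Merging incompatible geographies via area-weighted interpolation.
# Counts from source zones (e.g. crime precincts) are reapportioned to
# target zones (e.g. school districts) in proportion to overlap length,
# assuming counts are uniform within each source zone.

def overlap(a, b):
    """Length of the intersection of two intervals (lo, hi)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def reapportion(source_zones, target_zones):
    """source_zones: {name: ((lo, hi), count)}; target_zones: {name: (lo, hi)}."""
    result = {name: 0.0 for name in target_zones}
    for span, count in source_zones.values():
        length = span[1] - span[0]
        for name, tspan in target_zones.items():
            result[name] += count * overlap(span, tspan) / length
    return result

# Two precincts and two school districts whose boundaries don't line up:
# district D1 covers all of P1 plus half of P2, so it inherits 100 + 25 counts.
precincts = {"P1": ((0.0, 10.0), 100), "P2": ((10.0, 20.0), 50)}
districts = {"D1": (0.0, 15.0), "D2": (15.0, 20.0)}
print(reapportion(precincts, districts))  # {'D1': 125.0, 'D2': 25.0}
```

Real tools of this kind work on polygons and must also confront the uniformity assumption, which is exactly where the hard research problems in merging incompatible data sets lie.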

Crisis response needs to be a science, not an art


Jimmy Whitworth in the Financial Times: “…It is an imperative to offer shelter, nutrition, sanitation and medical care to those suddenly bereft of it. Without aid, humanitarian crises would cause still greater suffering. Yet admiration for the agencies that deliver relief should not blind us to the need to ensure that it is well delivered. Humanitarian responses must be founded on good evidence.
The evidence base, unfortunately, is weak. We know that storms, earthquakes and conflicts have devastating consequences for health and wellbeing, and that not responding is not an option, but we know surprisingly little about how best to go about it. Not only is evidence-based practice rare in humanitarian relief operations, it is often impossible.
Questions about how best to deliver clean water or adequate shelter, and even about which health needs should be prioritised as the most pressing, have often been barely researched. Indeed, the evidence gap is so great that the Humanitarian Practice Network has highlighted a “dire lack of credible data to help us understand just how much populations in crisis suffer, and to what extent relief operations are able to relieve that suffering”. No wonder aid responses are often characterised as messy.
Good practice often rests on past practice rather than research. The Bible of humanitarian relief is a document called the Sphere handbook, an important initiative to set minimum standards for provision of health, nutrition, sanitation and shelter. Yet analysis of the 2004 handbook has revealed that just 13 per cent of its 346 standards were supported by good evidence of relevance to health. The handbook, for example, recommended that refugee camps should prioritise measles vaccination – a worthwhile goal, but not one that should clearly be favoured over control of other infectious diseases.

Also under-researched is the question of how best to provide types of relief that everybody agrees meet essential needs. Access to clean water is a clear priority for almost all populations in crisis but little is understood about how this is most efficiently delivered. Is it best to ship bottled water to stricken areas? Are tankers of clean water more effective? Or can water purification tablets do the job? The summer floods in northern India made it clear that there is little good evidence one way or another.

Adequate shelter, too, is a human essential in all but the most benign environments but, once again, the evidence base about how best to provide it is limited. There is a school of thought that building transitional shelter from locally available materials is better in the long run than housing people under tents, tarpaulins and plastic, which if accurate would have far-reaching consequences for standard practice. But too little research has been done…
Researchers also face significant challenges to building a better evidence base. They can struggle to secure access to disaster zones when getting relief in is the priority. The timescales involved in applying for funding and ethical approval, too, make it difficult for them to move quickly enough to set up a study in the critical post-disaster period.
It is to address this that Enhancing Learning and Research for Humanitarian Assistance, with the support of the Wellcome Trust and the UK Department for International Development, recently launched an £8m research programme that investigates these issues.”