The NGO-Academia interface: obstacles to collaboration, lessons from systems thinking and suggested ways forward


Duncan Green at LSE Impact Blog: “The case for partnership between international non-governmental organisations (INGOs) and academia to advance development knowledge is strong. INGOs bring presence on the ground – through their own operations or long-term local partnerships – and communication and advocacy skills (not always academics’ strong point). Academia contributes research skills and credibility, and a long-term reflective perspective that the more frenetic forms of operational work and activism often lack.

In practice, however, such partnerships have proven remarkably difficult, partly because INGOs and academia are too complementary – there is so little overlap between their respective worlds that it is often difficult to find ways to work together.

Obstacles to collaboration

  • Impact vs publication:…
  • Urgency vs “wait and see”: …
  • Status quo vs originality: …
  • Thinking vs talking: ….

Systems thinking approaches

Some of the problems that arise in the academic–INGO interface stem from overly linear approaches to what is, in effect, an ideas and knowledge ecosystem. In such contexts, systems thinking can help identify bottlenecks and suggest possible ways forward.

Getting beyond supply and demand to convening and brokering

Supply-driven is the norm in development research – “experts” churning out policy papers, briefings, books, blogs, etc. Being truly demand-driven is hard even to imagine – an NGO or university department submitting itself to a public poll on what should be researched? But increasingly in areas such as governance or value chains, we try to move beyond both supply and demand to a convening/brokering role, bringing together different “unusual suspects”. What would that look like in research? Action research, with an agenda that emerges from an interaction between communities and researchers? Natural science seems further ahead on this point: when the Dutch National Research Agenda ran a nationwide citizen survey of the research questions people wanted science to look at, 12,000 questions were submitted and clustered into 140 questions, under seven or eight themes. To the organisers’ surprise, many citizens asked quite deep questions.

Most studies identify a need for “knowledge brokers” not only to bridge the gap between the realms of science and policy, but also to synthesise and transform evidence into an effective and usable form for policy and practice. An essential feature of knowledge brokers is that they understand the cultures of both worlds. Often, this role is performed by third-sector organisations of various types (from lobbyists to thinktanks to respected research funders). Some academics can transcend this divide. A few universities employ specialist knowledge brokers but their long-term effectiveness is often constrained by low status, insecure contracts and lack of career pathways. Whoever plays this crucial intermediary role, it appears that it is currently under-resourced within and beyond the university system. In the development sector, the nearest thing to an embedded gateway is the Governance and Social Development Resource Centre (GSDRC), run by Birmingham University and the IDS and largely funded by the Department for International Development. It conducts literature and evidence reviews on a range of topics, drawing evidence from both academic literature and non-academic institutions….

Ways forward

Based on all of the above, a number of ideas emerge for consideration by academics, INGOs and funders of research.

Suggestions for academics

Comments on previous blog posts provided a wealth of practical advice to academics on how to work more productively with INGOs. These include the following:

  • Create research ideas and proposals collaboratively. This means talking to each other early on, rather than academics looking for NGOs to help their dissemination, or NGOs commissioning academics to undertake policy-based evidence making.
  • Don’t just criticise and point to gaps – understand the reasons for them (gaps in both NGO programmes and their research capacity) and propose solutions. Work to recognise practitioners’ strengths and knowledge.
  • Make research relevant to real people in communities. This means proper discussions and dialogue at the design, research and analysis stages, disseminating drafts, and discussing findings locally on publication.
  • Set up reflection spaces in universities where NGO practitioners can go to take time out and be supported to reflect on and write up their experiences, network with others, and gain new insights on their work.
  • Catalyse more exchange of personnel in both directions. Universities could replicate my own Professor in Practice position at the London School of Economics and Political Science, while INGOs could appoint honorary fellows, who could help guide their thinking in return for access to their work….(More)”.

Measuring results from open contracting in Ukraine


Kathrin Frauscher, Karolis Granickas and Leigh Manasco at the Open Contracting Partnership: “…Ukraine is one of our Showcase and Learning (S&L) projects, and we’ve already shared several stories about the success of Prozorro. Each S&L project tests specific theories of change and use cases. Through the Prozorro platform, Ukraine is revolutionizing procurement by digitizing the process and unlocking data to make it available to citizens, CSOs, government, and business. The theory of change for this S&L project hypothesizes that transparency and the implementation of the Open Contracting Data Standard (OCDS), combined with multi-stakeholder collaboration in the design, promotion and monitoring of the procurement system, are having an impact on value for money, fairness and integrity.

The reform introduced other innovations, including electronic reverse auctions and a centralized procurement database that integrates with private commercial platforms. We co-created a monitoring, evaluation, and learning (MEL) plan with our project partners to quantify and measure specific progress and impact indicators, while understanding that it is hard to attribute impacts to distinct aspects of the reform. The indicators featured in this blog are particularly related to our theory of change.

We are at a crucial moment in this S&L project as our first round of comprehensive MEL baseline and progress data are coming in. It’s a good time to reflect on key takeaways and challenges that arose when defining and analyzing these data, and how we are using them to inform the Prozorro reform.

Openness can result in more competition and competition saves money.

One of the benefits of open contracting appears to be improving market opportunity and efficiency. Market opportunity focuses on companies being able to compete for business on a level playing field.

From January 2015 to March 2017, the average number of bids per tender lot rose by 15%, demonstrating an increase in competition. Even more notable, the average number of unique suppliers during that same time grew by 45% for each procuring entity, meaning that agencies are now procuring from more and more diverse suppliers….

High levels of responsiveness can benefit procuring entities.

Those agencies that leverage their opportunities to interact with business and citizens throughout the contracting cycle, by actively responding to questions and complaints via the online platform, tend to conduct procurement more smoothly, without high levels of amendments or cancellations, than those that don’t. Tenders with a 100% response rate to feedback have a 66% success rate, while those with no response show a 52% success rate. The portal provides procuring entities with the resources needed to address questions and problems, saving time, effort and money throughout the contracting process.
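To make the arithmetic behind indicators like these concrete, a minimal sketch along the following lines could compute them from tender-level records. The `Tender` structure and its fields are simplified assumptions for illustration, not the actual Prozorro or OCDS schema.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Tender:
    """Hypothetical, simplified tender record (not the real Prozorro/OCDS schema)."""
    procuring_entity: str
    bidders: list            # suppliers that submitted bids on this lot
    questions_received: int
    questions_answered: int
    completed: bool          # True if the tender ended in a signed contract

def avg_bids_per_lot(tenders):
    """Average number of bids per tender lot."""
    return mean(len(t.bidders) for t in tenders)

def avg_unique_suppliers_per_entity(tenders):
    """Average number of distinct suppliers seen by each procuring entity."""
    by_entity = {}
    for t in tenders:
        by_entity.setdefault(t.procuring_entity, set()).update(t.bidders)
    return mean(len(suppliers) for suppliers in by_entity.values())

def success_rate_by_responsiveness(tenders):
    """Completion rate for tenders that answered all questions vs. none of them."""
    answered_all = [t for t in tenders
                    if t.questions_received and t.questions_answered == t.questions_received]
    answered_none = [t for t in tenders
                     if t.questions_received and t.questions_answered == 0]

    def rate(group):
        return sum(t.completed for t in group) / len(group) if group else None

    return {"100% response": rate(answered_all), "no response": rate(answered_none)}
```

Comparing completion rates between the fully responsive and non-responsive groups is what yields figures like the 66% versus 52% success rates cited above.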

People are beginning to trust the public procurement process and data more.

According to a survey of 300 entrepreneurs conducted by USAID, most respondents believed that Prozorro significantly (27%) or partially (53%) reduces corruption. Additionally, fewer respondents who participated in procurement said they faced corruption when using the new platform (29%) compared to the old system (54%). These numbers only tell a part of the story, as we do not know what those outside of the procurement system think, but they are a necessary first step towards measuring increased levels of trust for the public procurement process. We will continue looking at trust as one of the proxies for health of an open procurement process.

Citizens are actively seeking out procurement information.

Google search hits grew from 680 in January 2015 to more than 191,000 in February 2017 (tracking 43 related keywords). This means the environment is shifting to one where people recognize that this data has value, and that there is interest and demand for it. Implementing open contracting processes is just one part of what we want to see happen. We also strive to nurture an environment where open contracting data is seen as something that is worthwhile and necessary.

The newly established www.dozorro.org monitoring platform also shows promising results…..

The main one is feedback loops. We see that procuring entities’ responsiveness to general questions results in better quality procurement. We also see that only one out of three claims (a request to a procuring entity to amend, cancel or modify a tender in question) is successfully resolved. In addition, there are some good individual examples, such as the ones in Dnipro and Kiev. While we do not know if these numbers and instances are sufficient for an effective institutional response mechanism, we do know that business and citizens have to trust redress mechanisms before using them. We will continue trying to identify the ideal level of institutional response to secure trust and develop better metrics to capture that….(More)”.

What do we know about when data does/doesn’t influence policy?


Josh Powell at Oxfam Blog: “While development actors are now creating more data than ever, examples of impactful use are anecdotal and scant. Put bluntly, despite this supply-side push for more data, we are far from realizing an evidence-based utopia filled with data-driven decisions.

One of the key shortcomings of our work on development data has been failing to develop realistic models for how data can fit into existing institutional policy/program processes. The political economy – institutional structures, individual (dis)incentives, policy constraints – of data use in government and development agencies remains largely unknown to “data people” like me, who work on creating tools and methods for using development data.

We’ve documented several preconditions for getting data to be used, which could be thought of in a cycle:
[Figure: the preconditions for data use, arranged as a cycle]

While broadly helpful, I think we also need more specific theories of change (ToCs) to guide data initiatives in different institutional contexts. Borrowing from a host of theories on systems thinking and adaptive learning, I gave this a try with a simple 2×2 model. The x-axis can be thought of as the level of institutional buy-in, while the y-axis reflects whether available data suggest a (reasonably) “clear” policy approach. Different data strategies are likely to be effective in each of these four quadrants.
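As a rough sketch of how such a 2×2 might be operationalised, the snippet below classifies an initiative by institutional buy-in and clarity of the evidence. The strategy labels are illustrative placeholders only, not the ones given in the post's accompanying figure.

```python
def quadrant_strategy(institutional_buy_in: bool, clear_policy_signal: bool) -> str:
    """Map the 2x2 (buy-in, data clarity) onto an illustrative data strategy.

    The strategy descriptions are hypothetical placeholders, not those
    in the original figure.
    """
    strategies = {
        (True, True): "support direct uptake: feed the data straight into the policy process",
        (True, False): "run pilots and learn adaptively alongside the institution",
        (False, True): "use the evidence for advocacy to build buy-in around a clear answer",
        (False, False): "invest in exploration and relationship-building before pushing data",
    }
    return strategies[(institutional_buy_in, clear_policy_signal)]

# Example: strong institutional buy-in, but the data do not yet point to a clear answer
print(quadrant_strategy(institutional_buy_in=True, clear_policy_signal=False))
```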

So what does this look like in the real world? Let’s tackle these with some examples we’ve come across:

[Figure: real-world examples mapped onto the 2×2 model]…(More).

With great power comes great responsibility: crowdsourcing raises methodological and ethical questions for academia


Isabell Stamm and Lina Eklund at LSE Impact Blog: “Social scientists are expanding the landscape of academic knowledge production by adopting online crowdsourcing techniques used by businesses to design, innovate, and produce. Researchers employ crowdsourcing for a number of tasks, such as taking pictures, writing text, recording stories, or digesting web-based data (tweets, posts, links, etc.). In an increasingly competitive academic climate, crowdsourcing offers researchers a cutting-edge tool for engaging with the public. Yet this socio-technical practice emerged as a business procedure rather than a research method and thus contains many hidden assumptions about the world which concretely affect the knowledge produced. With this comes a problematic reduction of research participants into a single, faceless crowd. This requires a critical assessment of crowdsourcing’s methodological assumptions….(More)”

AI, machine learning and personal data


Jo Pedder at the Information Commissioner’s Office Blog: “Today sees the publication of the ICO’s updated paper on big data and data protection.

But why now? What’s changed in the two and a half years since we first visited this topic? Well, quite a lot actually:

  • big data is becoming the norm for many organisations, using it to profile people and inform their decision-making processes, whether that’s to determine your car insurance premium or to accept/reject your job application;
  • artificial intelligence (AI) is stepping out of the world of science-fiction and into real life, providing the ‘thinking’ power behind virtual personal assistants and smart cars; and
  • machine learning algorithms are discovering patterns in data that traditional data analysis couldn’t hope to find, helping to detect fraud and diagnose diseases.

The complexity and opacity of these types of processing operations mean that it’s often hard to know what’s going on behind the scenes. This can be problematic when personal data is involved, especially when decisions are made that have significant effects on people’s lives. The combination of these factors has led some to call for new regulation of big data, AI and machine learning, to increase transparency and ensure accountability.

In our view though, whilst the means by which personal data are processed are changing, the underlying issues remain the same. Are people being treated fairly? Are decisions accurate and free from bias? Is there a legal basis for the processing? These are issues that the ICO has been addressing for many years, through oversight of existing European data protection legislation….(More)”

From Nairobi to Manila, mobile phones are changing the lives of bus riders


Shomik Mehndiratta at Transport for Development Blog: “Every day around the world, millions of people rely on buses to get around. In many cities, these services carry the bulk of urban trips, especially in Africa and Latin America. They are known by many different names—matatus, dala dalas, minibus taxis, colectivos, diablos rojos, micros, etc.—but all have one thing in common: they are either hardly regulated… or not regulated at all. Although buses play a critical role in the daily life of many urban dwellers, a variety of complaints have spurred calls for improvement and reform.

However, we are now witnessing a different, more organic kind of change that is disrupting the world of informal buses, using ubiquitous cheap sensors and mobile technology. One hotbed of innovation is Nairobi, Kenya’s bustling capital. Two years ago, Nairobi made a splash in the world of urban transport by mapping all the routes of informal matatus. Other countries have sought to replicate this model, with open source tools and crowdsourcing supporting similar efforts in Mexico, Manila, and beyond. Back in Nairobi, the Magic Bus app helps commuters use SMS services to reserve and pay for seats in matatus; in September 2016, Magic Bus’s potential for easing commuter pain in the Kenyan capital was rewarded with a $1 million prize. Other programs, implemented in collaboration with insurers and operators, are experimenting with on-board sensors to identify and correct dangerous driver behavior such as sudden braking and acceleration. Ma3Route, also in Nairobi (there is a pattern here!), used crowdsourcing to identify dangerous drivers as well as congestion. At the same time, operators are upping their game: using technology to improve system management, control and routing in La Paz, and working with universities to improve their financial planning and capabilities in Cape Town.

Against this backdrop, the question is then: can these ongoing experimental initiatives offer a coherent alternative to formal reform? …(More)”.

Think tanks can transform into the standard-setters and arbiters of quality of 21st century policy analysis


Marcos Hernando, Diane Stone and Hartwig Pautz in LSE Impact Blog: “Last month, the annual Global GoTo Think Tank Index Report was released, amid claims “think tanks are more important than ever before”. It is unclear whether this was said in spite of, or because of, the emergence of ‘post-truth politics’. Experts have become targets of anger and derision, struggling to communicate facts and advance evidence-based policy. Popular dissatisfaction with ‘policy wonks’ has meant think tanks face challenges to their credibility at a time they are under pressure from increased competition. The 20th century witnessed the rise of the think tank, but the 21st century might yet see its decline. To avoid such a fate, we believe think tanks must reposition themselves as the credible arbiters able to distinguish between poor analysis and good quality research….

In recent years, think tanks have faced three major challenges: financial limits in a world characterised by austerity; increased competition both among think tanks and with other types of policy research organisations; and a growing questioning of, and popular dissatisfaction with, the role of the ‘expert’ itself. Here, we look at each of these in turn…

Nevertheless, think tanks do retain some competitive advantages. The rapid proliferation of knowledge complicates the absorption of information among policymakers. To put it simply, there are limits to the quantity and diversity of knowledge that government actors can make sense of, especially in states hollowed out by austerity programmes and burdened by ever-higher public demands. Managing the over-supply of (occasionally dubious) evidence and policy analysis from research-based NGOs, universities and advocacy groups has become a problem of governance. But this issue also opens a space for the reinvention of think tanks.

With information overload comes a need for talented editors and skilled curators. That is, organisations as much as individuals which help those within policy processes to discern the reliability and usefulness of analytic products. Potentially, think tanks could transform into significant standard-setters and arbiters of quality of 21st century policy analysis. If they do not, they risk becoming just another group in the overpopulated ‘post-truth’ policy advice industry….(More)”

Open-Sourcing Google Earth Enterprise


Geo Developers Blog: “We are excited to announce that we are open-sourcing Google Earth Enterprise (GEE), the enterprise product that allows developers to build and host their own private maps and 3D globes. With this release, GEE Fusion, GEE Server, and GEE Portable Server source code (all 470,000+ lines!) will be published on GitHub under the Apache2 license in March.

Originally launched in 2006, Google Earth Enterprise provides customers the ability to build and host private, on-premise versions of Google Earth and Google Maps. In March 2015, we announced the deprecation of the product and the end of all sales. To give customers ample time to transition, we have provided a two-year maintenance period ending on March 22, 2017. During this maintenance period, product updates have been regularly shipped and technical support has been available to licensed customers….

GCP is increasingly used as a source for geospatial data. Google’s Earth Engine has made available over a petabyte of raster datasets, which are readily accessible to the public on Google Cloud Storage. Additionally, Google uses Cloud Storage to provide data to customers who purchase Google Imagery today. Having access to massive amounts of geospatial data, on the same platform as your flexible compute and storage, makes generating high quality Google Earth Enterprise Databases and Portables easier and faster than ever.

We will be sharing a series of white papers and other technical resources to make it as frictionless as possible to get open source GEE up and running on Google Cloud Platform. We are excited about the possibilities that open-sourcing enables, and we trust this is good news for our community. We will be sharing more information when we launch the code in March on GitHub. For general product information, visit the Google Earth Enterprise Help Center. Review the essential and advanced training for how to use Google Earth Enterprise, or learn more about the benefits of Google Cloud Platform….(More)”

The science of society: From credible social science to better social policies


Nancy Cartwright and Julian Reiss at LSE Blog: “Society invests a great deal of money in social science research. Surely the expectation is that some of it will be useful not only for understanding ourselves and the societies we live in but also for changing them? This is certainly the hope of the very active evidence-based policy and practice movement, which is heavily endorsed in the UK both by the last Labour Government and by the current Coalition Government. But we still do not know how to use the results of social science in order to improve society. This has to change, and soon.

Last year the UK launched an extensive – and expensive – new What Works Network that, as the Government press release describes, consists of “two existing centres of excellence – the National Institute for Health and Clinical Excellence (NICE) and the Education Endowment Foundation – plus four new independent institutions responsible for gathering, assessing and sharing the most robust evidence to inform policy and service delivery in tackling crime, promoting active and independent ageing, effective early intervention, and fostering local economic growth”.

This is an exciting and promising initiative. But it faces a serious challenge: we remain unable to build real social policies based on the results of social science or to predict reliably what the outcomes of these policies will actually be. This contrasts with our understanding of how to establish the results in the first place. There we have a handle on the problem. We have a reasonable understanding of what kinds of methods are good for establishing what kinds of results and with what (at least rough) degrees of certainty.

There are methods – well thought through – that social scientists learn in the course of their training for constructing a questionnaire, running a randomised controlled trial, conducting an ethnographic study, looking for patterns in large data sets. There is nothing comparably explicit and well thought through about how to use social science knowledge to help predict what will happen when we implement a proposed policy in real, complex situations. Nor is there anything to help us estimate and balance the effectiveness, the evidence, the chances of success, the costs, the benefits, the winners and losers, and the social, moral, political and cultural acceptability of the policy.

To see why this is so difficult think of an analogy: not building social policies but building material technologies. We do not just read off instructions for building a laser – which may ultimately be used to operate on your eyes – from knowledge of basic science. Rather, we piece together a detailed model using heterogeneous knowledge from a mix of physics theories, from various branches of engineering, from experience of how specific materials behave, from the results of trial-and-error, etc. By analogy, building a successful social policy equally requires a mix of heterogeneous kinds of knowledge from radically different sources. Sometimes we are successful at doing this and some experts are very good at it in their own specific areas of expertise. But in both cases – both for material technology and for social technology – there is no well thought through, defensible guidance on how to do it: what are better and worse ways to proceed, what tools and information might be needed, and how to go about getting these. This is true whether we look for general advice that might be helpful across subject areas or advice geared to specific areas or specific kinds of problems. Though we indulge in social technology – indeed we can hardly avoid it – and are convinced that better social science will make for better policies, we do not know how to turn that conviction into a reality.

This presents a real challenge to the hopes for evidence-based policy….(More)”

Can you crowdsource water quality data?


Pratibha Mistry at The Water Blog (Worldbank): “The recently released Contextual Framework for Crowdsourcing Water Quality Data lays out a strategy for citizen engagement in decentralized water quality monitoring, enabled by the “mobile revolution.”

According to the WHO, 1.8 billion people lack access to safe drinking water worldwide. Poor source water quality, non-existent or insufficient treatment, and defects in water distribution systems and storage mean these consumers use water that often doesn’t meet the WHO’s Guidelines for Drinking Water Quality.

The crowdsourcing framework develops a strategy to engage citizens in measuring and learning about the quality of their own drinking water. Through their participation, citizens provide utilities and water supply agencies with cost-effective water quality data in near-real time. In a typical crowdsourcing model, consumers use their mobile phones to report water quality information to a central service. That service receives the information, then repackages and shares it via mobile phone messages, websites, dashboards, and social media. Individual citizens can thus be educated about their water quality, and water management agencies and other stakeholders can use the data to improve water management; it’s a win-win.
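The reporting loop described here – a consumer texts a reading, the central service ingests it and repackages per-location summaries – might look roughly like the sketch below. The SMS format, field names, and identifiers are assumptions for illustration and are not prescribed by the framework.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class WaterQualityReport:
    """Hypothetical report format; the framework does not prescribe one."""
    location: str          # name or code of the water point
    ecoli_detected: bool   # result of a simple presence/absence field test
    reporter: str          # phone number or anonymised ID of the citizen reporter

def parse_sms(sender: str, text: str) -> WaterQualityReport:
    """Parse an assumed SMS format: '<location> <ECOLI|OK>'."""
    location, result = text.strip().split(maxsplit=1)
    return WaterQualityReport(location=location,
                              ecoli_detected=(result.strip().upper() == "ECOLI"),
                              reporter=sender)

def summarise(reports):
    """Repackage raw reports into per-location summaries the service could publish."""
    counts = defaultdict(lambda: {"reports": 0, "contaminated": 0})
    for r in reports:
        counts[r.location]["reports"] += 1
        counts[r.location]["contaminated"] += int(r.ecoli_detected)
    return {loc: f"{c['contaminated']}/{c['reports']} reports flagged E. coli"
            for loc, c in counts.items()}

# Example round trip: two citizen reports about the same water kiosk
reports = [parse_sms("+255700000001", "KIOSK12 ECOLI"),
           parse_sms("+255700000002", "KIOSK12 OK")]
print(summarise(reports))   # {'KIOSK12': '1/2 reports flagged E. coli'}
```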

[Figure: A well-implemented crowdsourcing project both depends on and benefits end users. Source: modified from Hutchings, M., Dev, A., Palaniappan, M., Srinivasan, V., Ramanathan, N., and Taylor, J. 2012. “mWASH: Mobile Phone Applications for the Water, Sanitation, and Hygiene Sector.” Pacific Institute, Oakland, California. 114 p.]

Several groups, from the private sector to academia to non-profits, have taken a recent interest in developing a variety of so-called mWASH apps, or mobile phone applications for the water, sanitation, and hygiene (WASH) sector. A recent academic study analyzed how mobile phones might facilitate the flow of water quality data between water suppliers and public health agencies in Africa. USAID has invested in piloting a mobile application in Tanzania to help consumers test their water for E. coli….(More)”