When is the crowd wise, or can the people ever be trusted?


Julie Simon at NESTA: “Democratic theory has tended to take a pretty dim view of people and their ability to make decisions. Many political philosophers believe that people are at best uninformed and, at worst, ignorant and incompetent. This view is a common justification for our system of representative democracy – people can’t be trusted to make decisions, so this responsibility should fall to those who have the expertise, knowledge or intelligence to do so.

Think back to what Edmund Burke said on the subject in his speech to the Electors of Bristol in 1774: “Your representative owes you, not his industry only, but his judgement; and he betrays, instead of serving you, if he sacrifices it to your opinion.” He reminds us that “government and legislation are matters of reason and judgement, and not of inclination”. Others, like the journalist Charles Mackay, author of Extraordinary Popular Delusions and the Madness of Crowds, a chronicle of economic bubbles and crashes, had an even more damning view of the crowd’s capacity to exercise either judgement or reason.

The thing is, if you believe that ‘the crowd’ isn’t wise then there isn’t much point in encouraging participation – these sorts of activities can only ever be tokenistic or a way of legitimising the decisions taken by others.

There are then those political philosophers who effectively argue that citizens’ incompetence doesn’t matter. They argue that the aggregation of views – through voting – eliminates ‘noise’, which enables you to arrive at optimal decisions. The larger the group, the better its decisions will be. The corollary of this view is that political decision making should involve mass participation and regular referenda – something akin to the Swiss model.
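The claim that larger groups make better decisions is usually formalised as Condorcet’s jury theorem. As a minimal illustrative sketch (assuming independent voters who are each only slightly better than chance, which is the theorem’s key condition), a short simulation shows majority accuracy rising with group size:

```python
import random

def majority_accuracy(n_voters, p_correct, trials=2000):
    """Estimate how often a simple majority of independent voters,
    each correct with probability p_correct, picks the right option."""
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# Each voter is only slightly better than a coin flip (55% correct),
# yet the majority verdict improves sharply as the group grows.
for n in (1, 11, 101, 1001):
    print(f"{n:>5} voters -> majority correct ~{majority_accuracy(n, 0.55):.2f}")
```

The same logic cuts both ways: if voters are systematically biased or their errors are correlated, adding more of them does not improve the outcome, which is why the theorem supports aggregation only under fairly strong assumptions.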

Another standpoint is to say that there is wisdom within crowds – it’s just that it’s domain specific, unevenly distributed and quite hard to transfer. This idea was put forward by Friedrich Hayek in his seminal 1945 essay, The Use of Knowledge in Society, in which he argues that:

“…the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form, but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is thus not merely a problem of how to allocate ‘given’ resources… it is a problem of the utilization of knowledge not given to anyone in its totality”.

Hayek argued that it was for this reason that central planning couldn’t work since no central planner could ever aggregate all the knowledge distributed across society to make good decisions.

More recently, Eric von Hippel built on these foundations by introducing the concept of information stickiness; information is ‘sticky’ if it is costly to move from one place to another. One type of information that is frequently ‘sticky’ is information about users’ needs and preferences.[1] This helps to account for why manufacturers tend to develop innovations which are incremental – meeting already identified needs – and why so many organisations are engaging users in their innovation processes: if knowledge about needs and tools for developing new solutions can be co-located in the same place (i.e. the user), then the cost of transferring sticky information is eliminated…

There is growing evidence on how crowdsourcing can be used by governments to solve clearly defined technical, scientific or informational problems. Evidently there are significant needs and opportunities for governments to better engage citizens to solve these types of problems.

There’s also a growing body of evidence on how digital tools can be used to support and promote collective intelligence….

So, the critical task for public officials is to have greater clarity over the purpose of engagement – in order to better understand which methods of engagement should be used and what kinds of groups should be targeted.

At the same time, the central question for researchers is when and how to tap into collective intelligence: what tools and approaches can be used when we’re looking at arenas which are often sites of contestation? Should this input be limited to providing information and expertise to be used by public officials or representatives, or should these distributed experts exercise some decision-making power too? And when we’re dealing with value-based judgements, when should we rely on large-scale voting as a mechanism for making ‘smarter’ decisions, and when are deliberative forms of engagement more appropriate? These are all issues we’re exploring as part of our ongoing programme of work on democratic innovations….(More)”

Resource Library for Cross-Sector Collaboration


The Intersector Project: “Whether you’re working on a local collective impact initiative or a national public-private partnership; whether you’re a practitioner or a researcher; whether you’re looking for basics or a detailed look at a particular topic, our Resource Library can help you find the information and tools you need for your cross-sector thinking and practice. The Library — which includes resources from research organizations, advisory groups, training organizations, academic centers and journals, and more — spans issue areas, sectors, and partnership types….(More)”

Collective intelligence and international development


Gina Lucarelli, Tom Saunders and Eddie Copeland at Nesta: “The mountain kingdom of Lesotho, a small landlocked country in Sub-Saharan Africa, is an unlikely place to look for healthcare innovation. Yet in 2016, it became the first country in Africa to deploy the test and treat strategy for treating people with HIV. Rather than waiting for white blood cell counts to drop, patients begin treatment as soon as they are diagnosed. This strategy is backed by the WHO as it has the potential to increase the number of people who are able to access treatment, consequently reducing transmission and keeping people with HIV healthy and alive for longer.

While lots of good work is underway in Lesotho, and billions have been spent on HIV programmes in the country, the percentage of the population infected with HIV has remained steady and is now almost 23%. Challenges of this scale need new ideas and better ways to adopt them.

On a recent trip to Lesotho as part of a project with the United Nations Development Group, we met various UN agencies, the World Bank, government leaders, civil society actors and local businesses, to learn about the key development issues in Lesotho and to discuss the role that ‘collective intelligence’ might play in creating better country development plans. The key question Nesta and the UN are working on is: how can we increase the impact of the UN’s work by tapping into the ideas, information and possible solutions which are distributed among many partners, the private sector, and the 2 million people of Lesotho?

…our framework of collective intelligence, a set of iterative stages which can help organisations like the UN tap into the ideas, information and possible solutions of groups and individuals who are not normally included in the problem-solving process. For each stage, we also presented a number of examples of how this works in practice.

Collective intelligence framework – stages and examples

  1. Better understanding the facts, data and experiences: New tools, from smartphones to online communities, enable researchers, practitioners and policymakers to collect much larger amounts of data much more quickly. Organisations can use this data to target their resources at the most critical issues as well as feed into the development of products and services that more accurately meet the needs of citizens. Examples include mPower, a clinical study which used an app to collect data about people with Parkinson’s disease via surveys and smartphone sensors.

  2. Better development of options and ideas: Beyond data collection, organisations can use digital tools to tap into the collective brainpower of citizens to come up with better ideas and options for action. Examples include participatory budgeting platforms like “Madame Mayor, I have an idea” and challenge prizes, such as USAID’s Ebola grand challenge.

  3. Better, more inclusive decision making: Decision making and problem solving are usually left to experts, yet citizens are often best placed to make the decisions that will affect them. New digital tools make it easier than ever for governments to involve citizens in policymaking, planning and budgeting. Our D-CENT tools enable citizen involvement in decision making in a number of fields. Another example is the Open Medicine Project, which designs digital tools for healthcare in consultation with both practitioners and patients.

  4. Better oversight and improvement of what is done: From monitoring corruption to scrutinising budgets, a number of tools allow broad involvement in the oversight of public sector activity, potentially increasing accountability and transparency. The Family and Friends Test is a tool that allows NHS users in the UK to submit feedback on services they have experienced. So far, 25 million pieces of feedback have been submitted. This feedback can be used to stimulate local improvement and empower staff to carry out changes… (More)”

Coming soon: The Conversation Global


The Conversation, an independent news and commentary website produced by academics and journalists, launches its Global edition this month.

The Conversation Global will publish commentary, analysis and research from the academic community worldwide. We will engage scholars from across the world, featuring perspectives from the Global South and North on the most pressing international issues. All content will be published under Creative Commons.

The site is open and free for everyone to read.

Trust in Government


First issue of the Oxford Government Review, focusing on trust (or lack of trust) in government:

“In 2016, governments are in the firing line. Their populations suspect them of accelerating globalisation for the benefit of the few, letting trade drive away jobs, and encouraging immigration so as to provide cheaper labour and to fill skills-gaps without having to invest in training. As a result, the ‘anti-government’, ‘anti-expert’, ‘anti-immigration’ movements are rapidly gathering support. The Brexit campaign in the United Kingdom, the Presidential run of Donald Trump in the United States, and the Five Star movement in Italy are but three examples.” – Dean Ngaire Woods

Our contributors have shed an interesting and innovative light on this issue. McKinsey’s Andrew Grant and Bjarne Corydon discuss the importance of transparency and accountability of government, while Elizabeth Linos, from the Behavioural Insights Team in North America, and Princeton’s Eldar Shafir discuss how behavioural science can be utilised to implement better policy, and Geoff Mulgan, CEO at Nesta, provides insights into how harnessing technology can bring about increased collective intelligence.

The Conference Addendum features panel summaries from the 2016 Challenges of Government Conference, written by our MPP and DPhil in Public Policy students.

Exploring Online Engagement in Public Policy Consultation: The Crowd or the Few?


Helen K. Liu in Australian Journal of Public Administration: “Governments are increasingly adopting online platforms to engage the public and allow a broad and diverse group of citizens to participate in the planning of government policies. To understand the role of crowds in the online public policy process, we analyse participant contributions over time in two crowd-based policy processes, the Future Melbourne wiki and the Open Government Dialogue. Although past evaluations have shown the significance of public consultations by expanding the engaged population within a short period of time, our empirical case studies suggest that a small number of participants contribute a disproportionate share of ideas and opinions. We discuss the implications of our initial examination for the future design of engagement platforms….(More)”
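One way to make ‘a disproportionate share’ concrete is to measure how much of the total output comes from the most active participants. A minimal Python sketch, using made-up contribution counts rather than the paper’s data:

```python
def top_share(counts, top_fraction=0.1):
    """Fraction of all contributions made by the most active
    `top_fraction` of participants."""
    counts = sorted(counts, reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Illustrative, heavily skewed participation counts (not real data):
# a handful of very active contributors and a long tail of one-off posters.
contributions = [120, 85, 60, 40, 25] + [3] * 20 + [1] * 75
print(f"Top 10% of participants made {top_share(contributions):.0%} of contributions")
```

With skewed counts like these, the top tenth of participants accounts for roughly three quarters of everything submitted, which is the kind of concentration the case studies describe.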

Scholarpedia


About: “Scholarpedia is a peer-reviewed open-access encyclopedia written and maintained by scholarly experts from around the world. Scholarpedia is inspired by Wikipedia and aims to complement it by providing in-depth scholarly treatments of academic topics.

Scholarpedia and Wikipedia are alike in many respects:

  • both allow anyone to propose revisions to almost any article
  • both are “wikis” and use the familiar MediaWiki software designed for Wikipedia
  • both allow considerable freedom within each article’s “Talk” pages
  • both are committed to the goal of making the world’s knowledge freely available to all

Nonetheless, Scholarpedia is best understood by how it is unlike most wikis, differences arising from Scholarpedia’s academic origins, goals, and audience. The most significant is Scholarpedia’s process of peer-reviewed publication: all articles in Scholarpedia are either in the process of being written by a team of authors, or have already been published and are subject to expert curation….(More)”

Teenage scientists enlisted to fight Zika


ShareAmerica: “A mosquito’s a mosquito, right? Not when it comes to Zika and other mosquito-borne diseases.

Only two of the estimated 3,000 species of mosquitoes are capable of carrying the Zika virus in the United States, but estimates of their precise range remain hazy, according to the U.S. Centers for Disease Control and Prevention.

Scientists could start getting better information about these pesky, but important, insects with the help of plastic cups, brown paper towels and teenage biology students.

As part of the Invasive Mosquito Project from the U.S. Department of Agriculture, secondary-school students nationwide are learning about mosquito populations and helping fill the knowledge gaps.

Simple experiment, complex problem

The experiment works like this: First, students line the cups with paper, then fill two-thirds of the cups with water. Students place the plastic cups outside, and after a week, the paper is dotted with what looks like specks of dirt. These dirt particles are actually mosquito eggs, which the students can identify and classify.

Students then upload their findings to a national crowdsourced database. Crowdsourcing uses the collective intelligence of online communities to “distribute” problem solving across a massive network.

Entomologist Lee Cohnstaedt of the U.S. Department of Agriculture coordinates the program, and he’s already thinking about expansion. He said he hopes to have one-fifth of U.S. schools participate in the mosquito species census. He also plans to adapt lesson plans for middle schools, Scouting troops and gardening clubs.

Already, crowdsourcing has “collected better data than we could have working alone,” he told the Associated Press….

In addition to mosquito tracking, crowdsourcing has been used to develop innovative responses to a number of complex challenges, from climate change to archaeology to protein modeling….(More)”

Insights On Collective Problem-Solving: Complexity, Categorization And Lessons From Academia


Part 3 of an interview series by Henry Farrell for the MacArthur Research Network on Opening Governance: “…Complexity theorists have devoted enormous energy and attention to thinking about how complex problems, in which different factors interact in ways that are hard to predict, can best be solved. One key challenge is categorizing problems, so as to understand which approaches are best suited to addressing them.

Scott Page is the Leonid Hurwicz Collegiate Professor of Complex Systems at the University of Michigan, Ann Arbor, and one of the world’s foremost experts on diversity and problem-solving. I asked him a series of questions about how we might use insights from academic research to think better about how problem solving works.

Henry: One of the key issues of collective problem-solving is what you call the ‘problem of problems’ – the question of identifying which problems we need to solve. This is often politically controversial – e.g., it may be hard to get agreement that global warming, or inequality, or long prison sentences are a problem. How do we best go about identifying problems, given that people may disagree?

Scott: In a recent big-think paper in Scientific American on the potential of diversity for collective problem solving, Katherine Phillips writes that group members must feel validated, that they must share a commitment to the group, and that they must have a common goal if they are going to contribute. This implies that you won’t succeed in getting people to collaborate by setting an agenda from on high and then seeking to attract diverse people to further that agenda.

One way of starting to tackle the problem of problems is to steal a rule of thumb from Getting to Yes, by getting people to think about their broad interests rather than the position that they’re starting from. People often agree on their fundamental desires but disagree on how they can be achieved. For example, nearly everyone wants less crime, but they may disagree over whether they think the solution to crime involves tackling poverty or imposing longer prison sentences. If you can get them to focus on their common interest in solving crime rather than their disagreements, you’re more likely to get them to collaborate usefully.

Segregation amplifies the problem of problems. We live in towns and neighborhoods segregated by race, income, ideology, and human capital. Democrats live near Democrats and Republicans near Republicans. Consensus requires integration. We must work across ideologies. Relatedly, opportunity requires more than access. Many people grow up not knowing any engineers, dentists, doctors, lawyers, and statisticians. This isolation narrows the set of careers they consider and it reduces the diversity of many professions. We cannot imagine lives we do not know.

Henry: Once you get past the problem of problems, you still need to identify which kind of problem you are dealing with. You identify three standard types of problems: solution problems, selection problems and optimization problems. What – very briefly – are the key differences between these kinds of problems?

Scott: I’m constantly pondering the potential set of categories in which collective intelligence can emerge. I’m teaching a course on collective intelligence this semester and the undergraduates and I developed an acronym SCARCE PIGS to describe the different types of domains. Here’s the brief summary:

  • Predict: when individuals combine information, models, or measurements to estimate a future event, guess an answer, or classify an event. Examples might involve betting markets, or combined efforts to guess a quantity, such as Francis Galton’s example of people at a fair trying to guess the weight of a steer.
  • Identify: when individuals have local, partial, or possibly erroneous knowledge and collectively can find an object. Here, an example is DARPA’s Red Balloon project.
  • Solve: when individuals apply and possibly combine higher order cognitive processes and analytic tools for the purpose of finding or improving a solution to a task. Innocentive and similar organizations provide examples of this.
  • Generate: when individuals apply diverse representations, heuristics, and knowledge to produce something new. An everyday example is creating a new building.
  • Coordinate: when individuals adopt similar actions, behaviors, beliefs, or mental frameworks by learning through local interactions. Ordinary social conventions such as people greeting each other are good examples.
  • Cooperate: when individuals take actions, not necessarily in their self interest, that collectively produce a desirable outcome. Here, think of managing common pool resources (e.g. fishing boats not overfishing an area that they collectively control).
  • Arrange: when individuals manipulate items in a physical or virtual environment for their own purposes resulting in an organization of that environment. As an example, imagine a student co-op which keeps twenty types of hot sauce in its pantry. If each student puts whichever hot sauce she uses in the front of the pantry, then on average, the hot sauces will be arranged according to popularity, with the most favored hot sauces in the front and the least favored lost in the back (a short simulation sketch of this follows the list).
  • Respond: when individuals react to external or internal stimuli, creating collective responses that maintain system-level functioning. For example, when yellow jackets attack a predator to maintain the colony, they are displaying this kind of problem solving.
  • Emerge: when individual parts create a whole that has categorically distinct and new functionalities. The most obvious example of this is the human brain….(More)”
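The ‘Arrange’ example above is essentially the classic move-to-front heuristic. A minimal Python sketch, with made-up sauces and popularity weights, shows how repeatedly returning the last-used item to the front leaves the pantry roughly sorted by popularity, even though no one ever sorts it deliberately:

```python
import random

# Illustrative hot sauces and relative popularity weights (invented for the sketch).
pantry = ["sriracha", "tabasco", "habanero", "chipotle", "ghost pepper"]
popularity = {"sriracha": 5, "tabasco": 3, "habanero": 2, "chipotle": 1, "ghost pepper": 1}

random.shuffle(pantry)  # start from an arbitrary arrangement

for _ in range(10_000):
    # A student grabs a sauce in proportion to how popular it is...
    sauce = random.choices(pantry, weights=[popularity[s] for s in pantry])[0]
    # ...and puts it back at the front of the pantry (move-to-front).
    pantry.remove(sauce)
    pantry.insert(0, sauce)

print(pantry)  # the most popular sauces tend to sit near the front
```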

Can An Online Game Help Create A Better Test For TB?


Esther Landhuis at NPR: “Though it’s the world’s top infectious killer, tuberculosis is surprisingly tricky to diagnose. Scientists think that video gamers can help them create a better diagnostic test.

An online puzzle released Monday will see whether the researchers are right. Players of a Web-based game called EteRNA will try to design a sensor molecule that could potentially make diagnosing TB as easy as taking a home pregnancy test. The TB puzzle marks the launch of “EteRNA Medicine.”

The idea of rallying gamers to fight TB arose as two young Stanford University professors chatted over dinner at a conference last May. Rhiju Das, a biochemist who helped create EteRNA, told bioinformatician Purvesh Khatri about the game, which challenges nonexperts to design RNA molecules that fold into target shapes.

RNA molecules play key roles in biology and disease. Some brain disorders can be traced to problems with RNA folding. Viruses such as H1N1 flu and HIV depend on RNA elements to replicate and infect cells.

Das wants to “fight fire with fire” — that is, to disrupt the RNA involved in a disease or virus by crafting new tools that are themselves made of RNA molecules. EteRNA players learn RNA design principles with each puzzle they solve.

Khatri was intrigued by the notion of engaging the public to solve problems. His lab develops novel diagnostics using publicly available data sets. The team had just published a paper on a set of genes that could help diagnose sepsis and had other papers under review on influenza and TB.

In an “Aha!” moment during their dinner chat, Khatri says, he and Das realized “how awesome it would be to sequentially merge our two approaches — to use public data to find a diagnostic marker for a disease, and then use the public’s help to develop the test.”

TB seemed opportune as it has a simple diagnostic signature — a set of three human genes that turn up or down predictably after TB infection. When checked across gene data on thousands of blood samples from 14 groups of people around the globe, the behavior of the three-gene set readily identified people with active TB, distinguishing them from individuals who had latent TB or other diseases.
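To see how a gene-signature test like this can work in principle, here is a minimal Python sketch; the gene names, expression values and threshold below are placeholders invented for illustration, not the study’s published signature or data:

```python
# Hypothetical three-gene score: two genes that rise in active TB, one that falls.
def tb_score(expression):
    up = (expression["GENE_UP1"] + expression["GENE_UP2"]) / 2
    down = expression["GENE_DOWN"]
    return up - down

THRESHOLD = 1.0  # in a real test this would be calibrated on patient cohorts

samples = {
    "patient_A": {"GENE_UP1": 3.2, "GENE_UP2": 2.8, "GENE_DOWN": 0.9},
    "patient_B": {"GENE_UP1": 1.1, "GENE_UP2": 1.0, "GENE_DOWN": 1.4},
}

for name, expr in samples.items():
    label = "likely active TB" if tb_score(expr) > THRESHOLD else "unlikely active TB"
    print(f"{name}: score={tb_score(expr):.2f} -> {label}")
```

In the actual test, the specific genes and the cut-off come from the published signature and have to be validated on independent patient cohorts, which is the validation work described next.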

Those findings, published in February, have gotten serious attention — not only from curious patients and doctors but also from humanitarian groups eager to help bring a better TB test to market. It can currently take several tests to tell whether a person has active TB, including a chest X-ray and sputum test. The Bill & Melinda Gates Foundation has started sending data to help the Stanford team validate a test based on the newly identified TB gene signature, says study leader Khatri, who works at the university’s Center for Biomedical Informatics Research….(More)”