Towards Crowd-Scale Deliberation


Paper by Mark Klein: “Let us define deliberation as the activity where groups of people (1) identify possible solutions for a problem, (2) evaluate these alternatives, and (3) select the solution(s) that best meet their needs. Deliberation processes have changed little in centuries. Typically, small groups of powerful players craft policies behind closed doors, and then battle to engage wider support for their preferred options. Most people affected by the decisions have at best limited input into defining the solution options. This approach has become increasingly inadequate as the scale and complexity of the problems we face has increased. Many important ideas and perspectives simply do not get incorporated, squandering the opportunity for far superior outcomes. We have the potential to do much better by radically widening the circle of people involved in complex deliberations, moving from “team” scales (tens of participants) to “crowd” scales (hundreds, thousands, or more).

This is because crowd-scale interactions have been shown to produce, in appropriate circumstances, such powerful emergent phenomena as:

  • The long tail: crowd-scale participation enables access to a much greater diversity of ideas than would otherwise be practical: potentially superior solutions from “small voices” (the tail of the frequency distribution) have a chance to be heard.
  • Idea synergy: the ability for users to share their creations in a common forum can enable a synergistic explosion of creativity, since people often develop new ideas by forming novel combinations and extensions of ideas that have been put out by others.
  • Many eyes: crowds can produce remarkably high-quality results (e.g. in open source software) by virtue of the fact that there are multiple independent verifications – many eyes continuously checking the shared content for errors and correcting them.
  • Wisdom of the crowds: large groups of (appropriately independent, motivated and informed) contributors can collectively make better judgments than those produced by the individuals that make them up, often exceeding the performance of experts, because their collective judgment cancels out the biases and gaps of the individual members…

Our team has been developing crowd-scale deliberation support technologies that address these three fundamental challenges by enabling:

  • better ideation: helping crowds develop better solution ideas
  • better evaluation: helping crowds evaluate potential solutions more accurately
  • better decision-making: helping crowds select Pareto-optimal solutions…(More)”.
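
In the last bullet, a Pareto-optimal solution is one that no other candidate beats on every evaluation criterion at once. As a rough illustration of that selection step — assuming each crowd-evaluated solution is scored on a few criteria where higher is better, and using made-up names and scores rather than anything from Klein’s actual tools — a minimal sketch might look like this:

```python
# Minimal sketch of Pareto-optimal filtering over crowd-scored solutions.
# The criteria, names and scores below are hypothetical illustrations only.
from typing import Dict, List

def dominates(a: List[float], b: List[float]) -> bool:
    """True if a is at least as good as b on every criterion and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions: Dict[str, List[float]]) -> List[str]:
    """Return the names of solutions that no other solution dominates."""
    return [
        name for name, scores in solutions.items()
        if not any(
            dominates(other, scores)
            for other_name, other in solutions.items()
            if other_name != name
        )
    ]

# Hypothetical crowd ratings on (cost-effectiveness, feasibility, fairness).
candidates = {
    "A": [0.9, 0.4, 0.6],
    "B": [0.7, 0.8, 0.7],
    "C": [0.6, 0.7, 0.6],  # dominated by B on all three criteria
    "D": [0.5, 0.9, 0.8],
}
print(pareto_front(candidates))  # ['A', 'B', 'D'] -- C drops out
```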

How did awful panel discussions become the default format?


At The Guardian: “With the occasional exception, my mood in conferences usually swings between boredom, despair and rage. The turgid/self-aggrandizing keynotes and coma-inducing panels, followed by people (usually men) asking ‘questions’ that are really comments, and usually not on topic. The chairs who abdicate responsibility and let all the speakers over-run, so that the only genuinely productive bit of the day (networking at coffee breaks and lunch) gets squeezed. I end up dozing off, or furiously scribbling abuse in my notebook as a form of therapy, and hoping my neighbours can’t see what I’m writing. I probably look a bit unhinged…

This matters both because of the lost opportunity that badly run conferences represent, and because they cost money and time. I hope that if it was easy to fix, people would have done so already, but the fact is that the format is tired and unproductive.

For example, how did something as truly awful as panel discussions become the default format? They end up being a parade of people reading out papers, or they include terrible powerpoints crammed with too many words and illegible graphics. Can we try other formats, like speed dating (eg 10 people pitch their work for 2 minutes each, then each goes to a table and the audience hooks up (intellectually, I mean) with the ones they were interested in); world cafes; simulation games; joint tasks (eg come up with an infographic that explains X)? Anything, really. Yes ‘manels’ (male only panels – take the pledge here) are an outrage, but why not go for complete abolition, rather than mere gender balance?

Conferences frequently discuss evidence and results. So where are the evidence and results on the efficacy of conferences? Given the resources being ploughed into research on development (DFID alone spends about £350m a year), surely it would be a worthwhile investment, if it hasn’t already been done, to sponsor a research programme that runs multiple parallel experiments with different event formats and compares the results in terms of participant feedback, how much people retain a month after the event, etc.? At the very least, can they find or commission a systematic review of what the existing evidence says?

Feedback systems could really help. A public eBay-type ratings system to rank speakers/conferences would provide nice examples of good practice for people to draw on (and bad practice to avoid). Or why not go real-time and encourage instant audience feedback? OK, maybe Occupy-style thumbs up from the audience if they like the speaker, thumbs down if they don’t would be a bit in-your-face for academe, but why not introduce a twitterwall to encourage the audience to interact with the speaker (perhaps with moderation to stop people testing the limits, as my LSE students did to Owen Barder last term)?

We need to get better at shaping the format to fit the precise purpose of the conference. … if the best you can manage is ‘disseminating new research’ or ‘information sharing’, alarm bells should probably ring….(More)”.

Fly on the Facebook Wall: How UNHCR Listened to Refugees on Social Media


At Social Media for Good: “In “From a Refugee Perspective” UNHCR shows how to conduct meaningful, qualitative social media monitoring in a humanitarian crisis.

Between March and December 2016 the project team (one project manager, one Pashto and Dari speaker, two native Arabic speakers and an English copy editor) monitored Facebook conversations related to flight and migration in the Afghan and Arabic-speaking communities.

To do this, the team created Facebook accounts, joined relevant Facebook groups and summarised their findings in weekly monitoring reports to UNHCR staff and other interested people. I received these reports every week while working as the UNHCR team leader for the Communicating with Communities team in Greece and found them very useful, since they gave me an insight into what some of the burning issues were that week.

The project did not monitor Twitter because Twitter was not widely used by the communities.

In “From a Refugee Perspective” UNHCR has now summarised their findings from the ten-month project. The main thing I really liked about this project is that UNHCR invested the resources for proper qualitative social media monitoring, as opposed to the purely quantitative analyses that we see so often and which rarely go beyond keyword counting. To complement the social media information, the team held focus group and other discussions with refugees who had arrived in Europe. Among other things, these discussions provided information on how refugees and migrants were consuming and exchanging information (related: see this BBC Media Action report).

Of course, this type of research is much more resource intensive than what most organisations have in mind when they want to do social media monitoring, but this report shows that additional resources can also result in more meaningful information.

Figure: smuggling prices according to a monitored Facebook page. Source: “From A Refugee Perspective”.

Monitoring the conversations on Facebook enabled the team to track trends, such as the rise and fall of prices that smugglers asked for different routes (see image). In addition, it provided fascinating insights into how smugglers are selling their services online….(More)”

Citizen Participation: A Critical Look at the Democratic Adequacy of Government Consultations


John Morison at Oxford Journal of Legal Studies: “Consultation procedures are used increasingly in the United Kingdom and elsewhere. This account looks critically at consultation as presently practised, and suggests that consulters and consultees need to do much more to ensure both the participatory validity and democratic value of such exercises. The possibility of a ‘right to be consulted’ is examined. Some ideas from a governmentality perspective are developed, using the growth of localism as an example, to suggest that consultation is often a very structured interaction: the actual operation of participation mechanisms may not always create a space for an equal exchange between official and participant views. Examples of best practice in consultation are examined, before consideration is given to recent case law from the UK seeking to establish basic ground rules for how consultations should be organised. Finally, the promise of consultation to reinvigorate democracy is evaluated and weighed against the correlative risk of ‘participatory disempowerment’…(More)”.

Big Data, Data Science, and Civil Rights


Paper by Solon Barocas, Elizabeth Bradley, Vasant Honavar, and Foster Provost:  “Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness—in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well…(More)”.

Can we predict political uprisings?


At The Conversation: “Forecasting political unrest is a challenging task, especially in this era of post-truth and opinion polls.

Several studies by economists such as Paul Collier and Anke Hoeffler in 1998 and 2002 describe how economic indicators, such as slow income growth and natural resource dependence, can explain political upheaval. More specifically, low per capita income has been a significant trigger of civil unrest.

Economists James Fearon and David Laitin have also followed this hypothesis, showing how specific factors played an important role in Chad, Sudan and Somalia in outbreaks of political violence.

According to the International Country Risk Guide index, the internal political stability of Sudan fell by 15% in 2014, compared to the previous year. This decrease was after a reduction of its per capita income growth rate from 12% in 2012 to 2% in 2013.

By contrast, when the income per capita growth increased in 1997 compared to 1996, the score for political stability in Sudan increased by more than 100% in 1998. Political stability across any given year seems to be a function of income growth in the previous one.

When economics lie

But as the World Bank admitted, “economic indicators failed to predict Arab Spring”.

The usual economic performance indicators, such as gross domestic product, trade and foreign direct investment, showed rising economic development and globalisation in the Arab Spring countries over the preceding decade. Yet, in 2010, the region witnessed unprecedented uprisings that caused the collapse of regimes such as those in Tunisia, Egypt and Libya.

In our 2016 study we used data for more than 100 countries for the 1984–2012 period. We wanted to look at criteria other than economics to better understand the rise of political upheavals.

We found, and quantified, that corruption is a destabilising factor when the youth population (15-24 years old) exceeds 20% of the adult population.

Let’s examine the two main components of the study: demographics and corruption….

We are 90% confident that, on average, a youth bulge beyond 20% of the adult population combined with high levels of corruption can significantly destabilise political systems within specific countries, when the other factors described above are also taken into account. We are 99% confident when the youth bulge exceeds 30%.

Our results can help explain the risk of internal conflict and the possible time window for it happening. They could guide policy makers and international organisations in allocating their anti-corruption budget better, taking into account the demographic structure of societies and the risk of political instability….(More).
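
The article does not spell out the underlying model, but claims of this kind are typically tested with a regression on country-year panel data that interacts the youth share with corruption. The sketch below is purely illustrative: it generates synthetic data in which instability is constructed to depend on that interaction, then fits a logistic model with statsmodels. None of the variables, coefficients or results correspond to the 2016 study.

```python
# Illustrative interaction model on synthetic country-year data.
# This is NOT the study's data or specification; it only shows the general idea.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # synthetic country-year observations

df = pd.DataFrame({
    "youth_share": rng.uniform(0.10, 0.40, n),   # youth (15-24) / adult population
    "corruption": rng.uniform(0.0, 1.0, n),      # 0 = clean, 1 = highly corrupt
    "income_growth": rng.normal(0.02, 0.03, n),  # a control variable
})

# By construction, instability is more likely when high corruption coincides
# with a youth share above 20% (mimicking the article's threshold claim).
logit_p = (-2.0
           + 3.0 * df["corruption"] * (df["youth_share"] > 0.20)
           - 10.0 * df["income_growth"])
df["unstable"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p.to_numpy())))

# Fit a logistic regression with a corruption x youth_share interaction term.
model = smf.logit("unstable ~ corruption * youth_share + income_growth", data=df).fit()
print(model.summary())  # inspect the corruption:youth_share coefficient
```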

Introducing Test+Build – a new tool to help you run your own randomised controlled trial.


Michael Sanders, Miranda Jackman and Martin Sweeney at Behavioural Insights Team: “Work in fraud, error, and debt, and especially tax compliance and collection, has always been a core part of what the Behavioural Insights Team (BIT) does. One of our favourite pieces of work is still that first HMRC trial that told taxpayers with outstanding debts that ‘nine out of ten people pay their tax on time.’ That trial significantly increased the rate at which people paid their taxes, bringing forward £3 million in tax debt. It’s a result that has since been replicated worldwide….

Though all these trials have been in very different contexts and situations, they all employ similar insights and involve running trials to test which letter is most effective. And this got us thinking. Could we build a tool that would enable us to automate lots of the process, while also helping organisations to build their own capabilities? We are pleased to say that the answer is yes.

 

Our new tool is called Test+Build, and it aims to hugely increase the use of behavioural science in tax collection by helping people design and run their own randomised controlled trials. Test+Build does this by guiding users through the four stages of BIT’s TEST methodology – Target, Explore, Solution and Trial – and provides them with guides, case studies and videos developed by the team that relate to compliance and enforcement. Test+Build also brings in support from BIT researchers to offer advice, conduct randomisations, and analyse and interpret the results. It provides organisations with the tools to run their own trials, and in doing so, increases the organisation’s level of expertise for implementing them in the future.

By letting users work through the process themselves, with support from BIT researchers at key points along the way, we’ve significantly reduced the cost to organisations of running a BIT trial – by about 50 per cent. Of course, the all-important question for us is – as always – does it work?…(More)
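
Test+Build itself is a hosted tool, but the core mechanics it automates — randomly assigning taxpayers to letter variants and then comparing payment rates between arms — can be sketched in a few lines. The example below is a minimal, hedged outline using made-up arm names and numbers; it is not BIT’s actual code, data or results.

```python
# Minimal sketch of a two-arm letter trial: simple randomisation plus a
# two-proportion z-test on a binary "paid on time" outcome.
# All arm names, sample sizes and payment counts are hypothetical.
import math
import random

random.seed(42)

def assign_arms(taxpayer_ids, arms=("control", "social_norm")):
    """Randomly assign each taxpayer ID to one trial arm (simple randomisation)."""
    return {i: random.choice(arms) for i in taxpayer_ids}

def two_proportion_ztest(paid_a, n_a, paid_b, n_b):
    """Two-sided z-test for the difference in payment rates between two arms."""
    p_a, p_b = paid_a / n_a, paid_b / n_b
    p_pool = (paid_a + paid_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal tail
    return p_b - p_a, z, p_value

arms = assign_arms(range(10_000))  # maps each taxpayer ID to an arm

# Hypothetical outcomes: 1,600 of 5,000 control recipients paid on time,
# versus 1,750 of 5,000 who received the social-norm letter.
uplift, z, p = two_proportion_ztest(paid_a=1_600, n_a=5_000, paid_b=1_750, n_b=5_000)
print(f"uplift = {uplift:.1%}, z = {z:.2f}, p = {p:.3f}")
```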

Big Mind: How Collective Intelligence Can Change Our World


Book by Geoff Mulgan: “A new field of collective intelligence has emerged in the last few years, prompted by a wave of digital technologies that make it possible for organizations and societies to think at large scale. This “bigger mind”—human and machine capabilities working together—has the potential to solve the great challenges of our time. So why do smart technologies not automatically lead to smart results? Gathering insights from diverse fields, including philosophy, computer science, and biology, Big Mind reveals how collective intelligence can guide corporations, governments, universities, and societies to make the most of human brains and digital technologies.

Geoff Mulgan explores how collective intelligence has to be consciously organized and orchestrated in order to harness its powers. He looks at recent experiments mobilizing millions of people to solve problems, and at groundbreaking technology like Google Maps and Dove satellites. He also considers why organizations full of smart people and machines can make foolish mistakes—from investment banks losing billions to intelligence agencies misjudging geopolitical events—and shows how to avoid them.

Highlighting differences between environments that stimulate intelligence and those that blunt it, Mulgan shows how human and machine intelligence could solve challenges in business, climate change, democracy, and public health. But for that to happen we’ll need radically new professions, institutions, and ways of thinking.

Informed by the latest work on data, web platforms, and artificial intelligence, Big Mind shows how collective intelligence could help us survive and thrive….(More)”

Nobody Is Smarter or Faster Than Everybody


Rod Collins at Huffington Post: “One of the deepest beliefs of command-and-control management is the assumption that the smartest organization is the one with the smartest individuals. This belief is as old as scientific management itself. According to this way of thinking, just as there is a right way to perform every activity, there are right individuals who are essential for defining what are the right things and for making sure that things are done right. Thus, traditional organizations have long held that the key to the successful achievement of the corporation’s two basic accountabilities of strategy and execution is to hire the smartest individual managers and the brightest functional experts.

Command-and-control management assumes that intelligence fundamentally resides in a select number of star performers who are able to leverage their expertise across large groups of people through proper direction and effective control. Thus, the recruiting efforts and the promotional practices of most companies are focused on competing for and retaining the most talented people. While established management thinking holds that most individual workers are replaceable, this is not so for those star performers whose decision-making and problem-solving prowess are heroically revered. Traditional hierarchical organizations firmly believe in the myth of the individual hero. They are convinced that a single highly intelligent individual can make the difference between success and failure, whether that person is a key senior executive, a functional expert, or even a highly paid consultant.

However, in a rapidly changing world, it is becoming painfully obvious to harried executives that no single individual or even an elite cadre of star performers can adequately process the ever-evolving knowledge of fast-changing markets into operational excellence in real-time. Eric Teller, the CEO of Google X, has astutely recognized that we now live in a world where the pace of technological change exceeds the capacity for most individuals to absorb these changes in real time. If we can’t depend upon smart individuals to process change in time to respond to market developments, what options do business leaders have?

Nobody Is Smarter Than Everybody

If business executives want to build smart companies in a rapidly changing world, they will need to think differently and discover the most untapped resource in their organizations: the collective intelligence of their own people. Innovative organizations, such as Wikipedia and Google, have made this discovery and have leveraged the power of collective intelligence into powerful business models that have radically transformed their industries. The struggling online encyclopedia Nupedia rescued itself from oblivion when it serendipitously discovered an obscure application known as a wiki and transformed itself into Wikipedia by using the wiki platform to leverage the power of collective intelligence. In less than a decade, Wikipedia became the world’s most popular general reference resource. Google, which was a late entry into a crowded field of search engine upstarts, quickly garnered two-thirds of the search market by becoming the first engine to use the wisdom of crowds to rank web pages. These successful enterprises have uncovered the essential management wisdom for our times: Nobody is smarter or faster than everybody….

While smart individuals are important in any organization, it isn’t their unique intelligence that is paramount but rather their unique contributions to the overall intelligence of teams. That’s because the blending of the diverse perspectives of different types of intelligences is often the fastest path to the solution of complex problems, as we learned in the summer of 2011 when a diverse group of over 250,000 experts, non-experts, and unusual suspects in a scientific gaming community called Foldit solved in ten days a biomolecular problem that had eluded the world’s best scientists for over ten years. This means a self-organized group that required no particular credentials for membership was 365 times more effective and efficient than the world’s most credentialed individual experts. Similarly, the non-credentialed contributors of Wikipedia were able to produce approximately 18,000 articles in its first year of operation compared to only 25 articles produced by academic experts in Nupedia’s first year. This means the wisdom of the crowd was 720 times more effective and efficient than the individual experts. These results are completely counterintuitive to everything that most of us have been taught about how intelligence works. However, as counterintuitive as this may seem, the preeminence of collective intelligence has suddenly become a practical reality thanks to the proliferation of digital technology over the last two decades.

As we move from the first wave of the digital revolution, which was sparked by connecting people via the Internet, to the second wave where everyone and everything will be hyper-connected in the emerging Internet of Things, our capacity to aggregate and leverage collective intelligence is likely to accelerate as practical applications of artificial intelligence become everyday realities….(More)”.

The Digital Footprint of Europe’s Refugees


Pew Research Center: “Migrants leaving their homes for a new country often carry a smartphone to communicate with family that may have stayed behind and to help search for border crossings, find useful information about their journey or search for details about their destination. The digital footprints left by online searches can provide insight into the movement of migrants as they transit between countries and settle in new locations, according to a new Pew Research Center analysis of refugee flows between the Middle East and Europe.

Refugees from just two Middle Eastern countries — Syria and Iraq — made up a combined 38% of the record 1.3 million people who arrived and applied for asylum in the European Union, Norway and Switzerland in 2015 and a combined 37% of the 1.2 million first-time asylum applications in 2016. Most Syrian and Iraqi refugees during this period crossed from Turkey to Greece by sea, before continuing on to their final destinations in Europe.

Since many refugees from Syria and Iraq speak Arabic as their native, if not only, language, it is possible to identify key moments in their migration by examining trends in internet searches conducted in Turkey using Arabic, as opposed to the dominant Turkic languages in that country. For example, Turkey-based searches for the word “Greece” in Arabic closely mirror 2015 and 2016 fluctuations in the number of refugees crossing the Aegean Sea to Greece. The searches also provide a window into how migrants planned to move across borders — for example, the search term “Greece” was often combined with “smuggler.” In addition, an hourly analysis of searches in Turkey shows spikes in the search term “Greece” during early morning hours, a typical time for migrants making their way across the Mediterranean.

Comparing online searches with migration data

This report’s analysis compares data from internet searches with government and international agency refugee arrival and asylum application data in Europe from 2015 and 2016. Internet searches were captured from Google Trends, a publicly-available analytical tool that standardizes search volume by language and location over time. The analysis examines searches in Arabic, done in Turkey and Germany, for selected words such as “Greece” or “German” that can be linked to migration patterns. For a complete list of search terms employed, see the methodology. Google releases hourly, daily and weekly search data.

Google does not release the actual number of searches conducted but provides a metric capturing the relative change in searches over a specified time period. The metric ranges from 0 to 100 and indicates low- or high-volume search activity for the time period. Predicting or deciphering human behavior from the analysis of internet searches has limitations and remains experimental. But internet search data does offer a potentially promising way to explore migration flows crossing international borders.
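
For readers who want to explore the same kind of signal themselves, a minimal sketch is shown below. It assumes pytrends, an unofficial third-party Python client for Google Trends; the Pew team’s own pipeline is not described beyond its methodology note, and the keyword, geography and timeframe here are illustrative.

```python
# Sketch: relative Google Trends search volume (0-100) for the Arabic word for
# "Greece", restricted to searches made from Turkey in 2015-2016.
# Uses the unofficial pytrends library (pip install pytrends); not Pew's own code.
from pytrends.request import TrendReq

pytrends = TrendReq()

pytrends.build_payload(
    kw_list=["اليونان"],                # "Greece" in Arabic
    geo="TR",                            # searches originating in Turkey
    timeframe="2015-01-01 2016-12-31",   # the period covered by the report
)

trends = pytrends.interest_over_time()   # pandas DataFrame indexed by week
print(trends.head())
```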

Migration data cited in this report come from two sources. The first is the United Nations High Commissioner for Refugees (UNHCR), which provides data on new arrivals into Greece on a monthly basis. The second is first-time asylum applications from Eurostat, Europe’s statistical agency. Since both Syrian and Iraqi asylum seekers have had fairly high acceptance rates in Europe, it is likely that most Syrian and Iraqi migrants entering during 2015 and 2016 were counted by UNHCR and applied for asylum with European authorities.

The unique circumstances of this Syrian and Iraqi migration — the technology used by refugees, the large and sudden movement of refugees, and the language groups in transit and destination countries — present a unique opportunity to integrate the analysis of online searches and migration data. The conditions that permit this type of analysis may not apply in other circumstances where migrants are moving between countries….(More)”