Towards Crowd-Scale Deliberation


Paper by Mark Klein: “Let us define deliberation as the activity where groups of people (1) identify possible solutions for a problem, (2) evaluate these alternatives, and (3) select the solution(s) that best meet their needs. Deliberation processes have changed little in centuries. Typically, small groups of powerful players craft policies behind closed doors, and then battle to engage wider support for their preferred options. Most people affected by the decisions have at best limited input into defining the solution options. This approach has become increasingly inadequate as the scale and complexity of the problems we face has increased. Many important ideas and perspectives simply do not get incorporated, squandering the opportunity for far superior outcomes. We have the potential to do much better by radically widening the circle of people involved in complex deliberations, moving from “team” scales (tens of participants) to “crowd” scales (hundreds, thousands, or more).

This is because crowd-scale interactions have been shown to produce, in appropriate circumstances, such powerful emergent phenomena as:

  • The long tail: crowd-scale participation enables access to a much greater diversity of ideas than would otherwise be practical: potentially superior solutions from “small voices” (the tail of the frequency distribution) have a chance to be heard.
  • Idea synergy: the ability for users to share their creations in a common forum can enable a synergistic explosion of creativity, since people often develop new ideas by forming novel combinations and extensions of ideas that have been put out by others.
  • Many eyes: crowds can produce remarkably high-quality results (e.g. in open source software) by virtue of the fact that there are multiple independent verifications – many eyes continuously checking the shared content for errors and correcting them.
  • Wisdom of the crowds: large groups of (appropriately independent, motivated and informed) contributors can collectively make better judgments than those produced by the individuals that make them up, often exceeding the performance of experts, because their collective judgment cancels out the biases and gaps of the individual members…

Our team has been developing crowd-scale deliberation support technologies that address three fundamental challenges by enabling:

  • better ideation: helping crowds develop better solution ideas
  • better evaluation: helping crowds evaluate potential solutions more accurately
  • better decision-making: helping crowds select Pareto-optimal solutions…(More)”.
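Klein’s “wisdom of the crowds” point is easiest to see with a small numerical illustration. The simulation below is an editorial sketch, not part of the paper: it assumes each crowd member’s estimate carries an independent personal bias plus noise, and shows that the crowd’s averaged estimate lands far closer to the truth than a typical individual does.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

true_value = 100.0   # the quantity the crowd is estimating
n_members = 1000     # crowd size

# Each member's estimate carries an independent personal bias plus noise;
# because the biases are independent, they largely cancel in the average.
biases = rng.normal(0, 15, n_members)
noise = rng.normal(0, 10, n_members)
estimates = true_value + biases + noise

mean_individual_error = np.abs(estimates - true_value).mean()
crowd_error = abs(estimates.mean() - true_value)

print(f"typical individual error: {mean_individual_error:.2f}")
print(f"error of the crowd's averaged estimate: {crowd_error:.2f}")
```

Note that the effect depends on the independence caveat in the excerpt: if members share a common bias, averaging cannot cancel it.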

Big Data, Data Science, and Civil Rights


Paper by Solon Barocas, Elizabeth Bradley, Vasant Honavar, and Foster Provost:  “Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness—in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well…(More)”.
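As a concrete illustration of research area (i), checking whether a model’s decisions exhibit objectionable bias, one simple diagnostic is a disparate impact ratio. The sketch below is hypothetical; the decisions, group labels, and the flagged threshold are illustrative and do not come from the paper.

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-outcome rates between two groups (1.0 = parity).

    y_pred : array of 0/1 model decisions (e.g. loan approvals)
    group  : array of 0/1 protected-attribute membership
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return rate_b / rate_a

# Toy example: approval decisions for members of two groups.
decisions = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups    = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate impact ratio: {ratio:.2f}")
# A common (illustrative) rule of thumb flags ratios below ~0.8 for review.
```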

Deliberative Democracy as Open, Not (Just) Representative Democracy


Paper by Helene Landemore: “Deliberative democracy is at risk of becoming collateral damage of the current crisis of representative democracy. If deliberative democracy is necessarily representative and if representation betrays the true meaning of democracy as rule of, by, and for the people, then how can deliberative democracy retain any validity as a theory of political legitimacy? Any tight connection between deliberative democracy and representative democracy thus risks making deliberative democracy obsolete: a dated paradigm fit for a precrisis order, but maladjusted to the world of Occupy, the Pirate Party, the Zapatistas, and other antirepresentative movements. This essay argues that the problem comes from a particular and historically situated understanding of representative democracy as rule by elected elites. I argue that in order to retain its normative appeal and political relevance, deliberative democracy should dissociate itself from representative democracy thus understood and reinvent itself as the core of a more truly democratic paradigm, which I call “open democracy.” In open democracy, popular rule means the mediated but real exercise of power by ordinary citizens. This new paradigm privileges nonelectoral forms of representation and in it, power is meant to remain constantly inclusive of and accessible–in other words open–to ordinary citizens….(More)”

Crowdsourcing the fight against mosquitoes


Yahoo Finance: “That smartphone in your pocket could hold the cure for malaria, dengue and the Zika virus, a noted Stanford University scientist says.

Manu Prakash has a history of using oddball materials for medical research. His latest project, Abuzz, uses sound. Specifically, he asks regular citizens to capture and record mosquitoes. There are 30 unique species, and each has a different wingbeat pattern.

The big idea is to use algorithms to match sample recordings with disease-carrying species, and then recommend strategies to control the population.

Weird science, sure, but don’t knock it. In this age of massive amounts of compute and abundant sensors, dreamers are doing what should be impossible. They are replicating expensive research tools with inexpensive, makeshift solutions. Solutions that can, in many cases, save lives.

In this case, citizen-scientists capture a mosquito in a plastic bottle, poke a hole in the cap and record the buzzing with their phone. Then they send the digital file off to Prakash and his team.

It’s not the first time the Indian-born professor of bioengineering has made something from almost nothing.

In 2013, he saw a centrifuge being used as a doorstop at a Ugandan clinic. The expensive medical device had been donated by well-meaning researchers. But the village had no electricity.

So, Prakash put on his problem-solving hat. He later developed the Paperfuge.

Inspired by a toy whirligig, the paper-and-string device can separate blood cells from plasma. At a cost of 20 cents, the instrument is perfect for “diagnosis in the field,” Prakash told a TED conference audience.

And that’s just one example of how a little innovation can go a long way, for not a lot of money.

While visiting remote clinics in India and Thailand, he noticed expensive microscopes were collecting dust on shelves. They were too bulky to carry into the field. In 2014, his team showed off Foldscope, an inexpensive, lightweight microscope inspired by origami….(More)”.
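The matching step behind Abuzz is, at bottom, an audio-classification problem: extract the wingbeat frequency from a citizen’s recording and compare it with the acoustic signatures of disease-carrying species. The sketch below is a simplified illustration only; the species names and frequency ranges are assumed placeholder values, not Abuzz’s actual data or algorithm.

```python
import numpy as np

# Illustrative wingbeat-frequency ranges in Hz (assumed values, not Abuzz data).
SPECIES_RANGES = {
    "Aedes aegypti (dengue, Zika)": (450, 700),
    "Anopheles gambiae (malaria)": (350, 480),
    "Culex quinquefasciatus": (300, 420),
}

def dominant_frequency(samples, sample_rate):
    """Return the strongest frequency component of a mono audio signal."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def match_species(samples, sample_rate):
    f0 = dominant_frequency(samples, sample_rate)
    candidates = [name for name, (lo, hi) in SPECIES_RANGES.items() if lo <= f0 <= hi]
    return f0, candidates

# Synthetic test signal: a 580 Hz "wingbeat" tone with background noise.
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
recording = np.sin(2 * np.pi * 580 * t) + 0.3 * np.random.randn(rate)

f0, matches = match_species(recording, rate)
print(f"Dominant frequency: {f0:.0f} Hz -> possible matches: {matches or ['none']}")
```

A real system would need to handle noisy phone recordings, harmonics, and overlapping species ranges, but the basic pipeline (record, extract a spectral signature, match against a reference set) is the same.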

Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for


Paper by Lilian Edwards and Michael Veale: “Algorithms, particularly of the machine learning (ML) variety, are increasingly consequential to individuals’ lives but have caused a range of concerns evolving mainly around unfairness, discrimination and opacity. Transparency in the form of a “right to an explanation” has emerged as a compellingly attractive remedy since it intuitively presents as a means to “open the black box”, hence allowing individual challenge and redress, as well as possibilities to foster accountability of ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive.

However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core “algorithmic war stories” that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as “meaningful information about the logic of processing” — is unlikely to be provided by the kind of ML “explanations” computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However (section 5) “subject-centric” explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers’ worries of IP or trade secrets disclosure.

As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a “right to an explanation” in the GDPR may be at best distracting, and at worst nurture a new kind of “transparency fallacy”. However, in our final section, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (“right to be forgotten”) and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds of building a better, more respectful and more user-friendly algorithmic society….(More)”
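The “subject-centric” explanations Edwards and Veale discuss restrict an explanation to the region of the model around a single query, in the spirit of local surrogate methods such as LIME. The sketch below illustrates that general idea and is not the authors’ own proposal: perturb the query, label the perturbations with the black-box model, and fit a small weighted linear model to that neighbourhood.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A "black box" model standing in for an opaque decision system.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_explanation(model, query, n_samples=500, scale=0.3):
    """Fit a weighted linear surrogate around one query point.

    Returns per-feature coefficients: a local, subject-centric view of
    which features push this particular prediction up or down.
    """
    rng = np.random.default_rng(0)
    neighbourhood = query + rng.normal(0, scale, size=(n_samples, query.shape[0]))
    preds = model.predict_proba(neighbourhood)[:, 1]
    # Weight perturbed points by their proximity to the query.
    distances = np.linalg.norm(neighbourhood - query, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(neighbourhood, preds, sample_weight=weights)
    return surrogate.coef_

query_point = X[0]
for i, c in enumerate(local_explanation(black_box, query_point)):
    print(f"feature {i}: local weight {c:+.3f}")
```

Such explanations are “pedagogical” in the paper’s sense: they describe the model’s behaviour around the query without disclosing its internal structure, which is why they sit more comfortably with developers’ trade-secret worries.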

What drives legitimacy in government?


Discussion paper by the Centre for Public Impact (CPI): “…What are the sources of legitimacy, and how can legitimacy be strengthened? By legitimacy we mean the reservoir of support that allows governments to deliver positive outcomes for people, what we at the CPI call public impact. We are interested in exploring how governments can build constructive relationships with their citizens for the benefit of each of us and society as a whole. This might be at a whole-of-government level or the level of an individual service or policy. We will explore how legitimacy flows between these levels, and how legitimacy can be built both from the top down and from the bottom up.

Occasionally legitimacy is discussed as a black or white concept – with governments labelled as either “legitimate” (meaning rightfully in power) or “illegitimate” (meaning not rightfully in power). We will try to avoid getting mired in such discussions. The concept of legitimacy is also often used as shorthand for other concepts such as “a democratic mandate”, “fitness to serve” or “honesty”. Our project aims to determine what legitimacy means in reality to people and governments in different parts of the world, providing some shades of grey as well as greater rigour and clarity. There will be no easy answers. Building legitimacy requires action in many parts of a complex system, involving multiple institutions and actors. And the sources of legitimacy in one country or in one policy area may not be easily translatable to other countries or other policy areas….(More)”.

How Data Mining Facebook Messages Can Reveal Substance Abusers


Emerging Technology from the arXiv: “…Substance abuse is a serious concern. Around one in 10 Americans are sufferers. Which is why it costs the American economy more than $700 billion a year in lost productivity, crime, and health-care costs. So a better way to identify people suffering from the disorder, and those at risk of succumbing to it, would be hugely useful.

Bickel and co say they have developed just such a technique, which allows them to spot sufferers simply by looking at their social media messages such as Facebook posts. The technique even provides new insights into the way abuse of different substances influences people’s social media messages.

The new technique comes from the analysis of data collected between 2007 and 2012 as part of a project that ran on Facebook called myPersonality. Users who signed up were offered various psychometric tests and given feedback on their scores. Many also agreed to allow the data to be used for research purposes.

One of these tests asked over 13,000 users with an average age of 23 about the substances they used. In particular, it asked how often they used tobacco, alcohol, or other drugs, and assessed each participant’s level of use. The users were then divided into groups according to their level of substance abuse.

This data set is important because it acts as a kind of ground truth, recording the exact level of substance use for each person.

The team next gathered two other Facebook-related data sets. The first was 22 million status updates posted by more than 150,000 Facebook users. The other was even larger: the “like” data associated with 11 million Facebook users.

Finally, the team worked out how these data sets overlapped. They found almost 1,000 users who were in all the data sets, just over 1,000 who were in the substance abuse and status update data sets, and 3,500 who were in the substance abuse and likes data sets.

These users with overlapping data sets provide rich pickings for data miners. If people with substance use disorders have certain unique patterns of behavior, it may be possible to spot these in their Facebook status updates or in their patterns of likes.

So Bickel and co got to work first by text mining most of the Facebook status updates and then data mining most of the likes data set. Any patterns they found, they then tested by looking for people with similar patterns in the remaining data and seeing if they also had the same level of substance use.

The results make for interesting reading. The team says its technique was hugely successful. “Our best models achieved 86% for predicting tobacco use, 81% for alcohol use and 84% for drug use, all of which significantly outperformed existing methods,” say Bickel and co…. (More) (Full Paper: arxiv.org/abs/1705.05633: Social Media-based Substance Use Prediction).
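Methodologically, the study resembles a standard supervised text-classification pipeline: learn text features from the status updates of users whose substance use is known, then test the learned patterns on held-out users. The sketch below is a hypothetical miniature of that workflow, with invented posts and labels; it is not Bickel and co’s model or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical training data: status updates paired with a ground-truth
# label from a substance-use questionnaire (1 = heavy use, 0 = none/light).
posts = [
    "great night out with friends",
    "so hungover again, never drinking on weekdays",
    "quiet evening reading and tea",
    "party till sunrise, who needs sleep",
    "morning run then studying all day",
    "need a smoke break before this exam",
]
labels = [0, 1, 0, 1, 0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.33, stratify=labels, random_state=0
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print("AUC on held-out users:", roc_auc_score(y_test, scores))
```

The real study works at a vastly larger scale and also mines “like” data, but the logic of training on one overlapping subset and validating on another is the same.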

More professionalism, less populism: How voting makes us stupid, and what to do about it


Paper by Benjamin Wittes and Jonathan Rauch: “For several generations, political reform and rhetoric have been entirely one-directional: always more direct democracy, never less. The general belief holds that more public involvement will produce more representative and thus more effective and legitimate governance. But does increasing popular involvement in politics remedy the ills of our government culture; is it the chicken soup of political reforms?

In a new report, “More professionalism, less populism: How voting makes us stupid, and what to do about it,” Brookings Senior Fellows Jonathan Rauch and Benjamin Wittes argue that the best way forward is to rebalance the reform agenda away from direct participation and toward intermediation and institutions. As the authors write, “Neither theory nor practice supports the idea that more participation will produce better policy outcomes, or will improve the public’s approbation of government, or is even attainable in an environment dominated by extreme partisans and narrow interest groups.”

Populism cannot solve our problems, Rauch and Wittes claim, because its core premises and reforms are self-defeating. Research has shown that voters are “irrationally biased and rationally ignorant,” and do not possess the specialized knowledge necessary to make complex policy judgments. Further, elections provide little by way of substantive guidance for policymakers and, even on its own terms, direct democracy is often unrepresentative. In the words of the authors, “By itself, building more direct input from the public into the functions of government is likely to lead to more fragmentation, more stalemate, more flawed policies—and, paradoxically, less effective representation.”

The authors are not advocating complacency about voter participation, much less for restricting or limiting voting: “We are arguing that participation is not enough, and that overinvesting in it neglects other, more promising paths.”

To truly repair American democracy, Rauch and Wittes endorse a resurgence of political institutions, such as political parties, and substantive professionals, such as career politicians and experts. Drawing on examples like the intelligence oversight community, the authors assert that these intermediaries actually make democracy more inclusive and more representative than direct participation can do by itself. “In complex policy spaces,” the authors write, “properly designed intermediary institutions can act more decisively and responsively on behalf of the public than an army of ‘the people’ could do on its own behalf, [and are] less likely to be paralyzed by factional disputes and distorted by special-interest manipulation.”…(More) (Read the full paper here).

The Way Ahead


Transcript of lecture delivered by Stephen Fry on the 28th May 2017 • Hay Festival, Hay-on-Wye: “Peter Florence, the supremo of this great literary festival, asked me some months ago if I might, as part of Hay’s celebration of the five hundredth anniversary of Martin Luther’s kickstarting of the reformation, suggest a reform of the internet…

You will be relieved to know, that unlike Martin Luther, I do not have a full 95 theses to nail to the door, or in Hay’s case, to the tent flap. It might be worth reminding ourselves perhaps, however, of the great excitements of the early 16th century. I do not think it is a coincidence that Luther grew up as one of the very first generation to have access to printed books, much as some of you may have children who were the first to grow up with access to e-books, to iPads and to the internet….

The next big step for AI is the inevitable achievement of Artificial General Intelligence, or AGI, sometimes called ‘full artificial intelligence’, the point at which machines really do think like humans. In 2013, hundreds of experts were asked when they thought AGI may arise and the median prediction was the year 2040. After that the probability, most would say certainty, is artificial super-intelligence and the possibility of reaching what is called the Technological Singularity – what computer pioneer John von Neumann described as the point “…beyond which human affairs, as we know them, could not continue.” I don’t think I have to worry about that. Plenty of you in this tent have cause to, and your children beyond question will certainly know all about it. Unless of course the climate causes such havoc that we reach a Meteorological Singularity. Or the nuclear codes are penetrated by a self-teaching algorithm whose only purpose is to find a way to launch…

It’s clear that, while it is hard to calculate the cascade upon cascade of new developments and their positive effects, we already know the dire consequences and frightening scenarios that threaten to engulf us. We know them because science fiction writers and dystopians in all media have got there before us and laid the nightmare visions out. Their imaginations have seen it all coming. So whether you believe Ray Bradbury, George Orwell, Aldous Huxley, Isaac Asimov, Margaret Atwood, Ridley Scott, Anthony Burgess, H. G. Wells, Stanley Kubrick, Kazuo Ishiguro, Philip K. Dick, William Gibson, John Wyndham, James Cameron, the Wachowskis or the scores and scores of other authors and film-makers who have painted scenarios of chaos and doom, you can certainly believe that a great transformation of human society is under way, greater than Gutenberg’s revolution – greater I would submit than the Industrial Revolution (though clearly dependent on it) – the greatest change to our ways of living since we moved from hunting and gathering to settling down in farms, villages and seaports and started to trade and form civilisations. Whether it will alter the behaviour, cognition and identity of the individual in the same way it is certain to alter the behaviour, cognition and identity of the group, well that is a hard question to answer.

But believe me when I say that it is happening. To be frank it has happened. The unimaginably colossal sums of money that have flowed to the first two generations of Silicon Valley pioneers have filled their coffers, their war chests, and they are all investing in autonomous cars, biotech, the IoT, robotics, Artificial Intelligence and their convergence. None more so than the outlier, the front-runner Mr Elon Musk, whose neural link system is well worth your reading about online on the great waitbutwhy.com website. Its author Tim Urban is a paid consultant of Elon Musk’s, so he has the advantage of knowing what he is writing about but the potential disadvantage of being parti pris and lacking in objectivity. Elon Musk made enough money from his part in the founding and running of PayPal to fund his manifold exploits. The Neuralink project joins his Tesla automobile company and subsidiary battery and solar power businesses, his SpaceX reusable spacecraft group, his OpenAI initiative and Hyperloop transport system. The 1950s and 60s Space Race was funded by sovereign governments; this race is funded by private equity, by the original investors in Google, Apple, Facebook and so on. Nation states and their agencies are not major players in this game, least of all poor old Britain. Even if our politicians were across this issue, and they absolutely are not, our votes would still be an irrelevance….

So one thesis I would have to nail up to the tent is to clamour for government to bring all this deeper into schools and colleges. The subject of the next technological wave, I mean, not pornography and prostitution. Get people working at the leading edge of AI and robotics to come into the classrooms. But more importantly listen to them – even if what they say is unpalatable, our masters must have the intellectual courage and honesty to say if they don’t understand and ask for repetition and clarification. This time, in other words, we mustn’t let the wave engulf us, we must ride its crest. It’s not quite too late to re-gear governmental and educational planning and thinking….

The witlessness of our leaders and of ourselves is indeed a problem. The real danger surely is not technology but technophobic Canute-ism, a belief that we can control, change or stem the technological tide instead of understanding that we need to learn how to harness it. Driving cars is dangerous, but we developed driving lesson requirements, traffic controls, seat-belts, maintenance protocols, proximity sensors, emission standards – all kinds of ways of mitigating the danger so as not to deny ourselves the life-changing benefits of motoring.

We understand why angry Ned Ludd destroyed the weaving machines that were threatening his occupation (Luddites were prophetic in their way, it was weaving machines that first used the punched cards on which computers relied right up to the 1970s). We understand too why French workers took their clogs, their sabots as they were called, and threw them into the machinery to jam it up, giving us the word sabotage. But we know that they were in the end, if you’ll pardon the phrase, pissing into the wind. No technology has ever been stopped.

So what is the thesis I am nailing up? Well, there is no authority for me to protest to, no equivalent of Pope Leo X for it to be delivered to, and I am certainly no Martin Luther. The only thesis I can think worth nailing up is absurdly simple. It is a cry as much from the heart as from the head and it is just one word – Prepare. We have an advantage over our hunter gatherer and farming ancestors, for whether it is Winter that is coming, or a new Spring, is entirely in our hands, so long as we prepare….(More)”.

ControCurator: Understanding Controversy Using Collective Intelligence


Paper by Benjamin Timmermans et al: “There are many issues in the world that people do not agree on, such as Global Warming [Cook et al. 2013], Anti-Vaccination [Kata 2010] and Gun Control [Spitzer 2015]. Having opposing opinions on such topics can lead to heated discussions, making them appear controversial. Such opinions are often expressed through news articles and social media. There are increasing calls for methods to detect and monitor these online discussions on different topics. Existing methods focus on using sentiment analysis and Wikipedia for identifying controversy [Dori-Hacohen and Allan 2015]. The problem with this is that it relies on a well structured and existing debate, which may not always be the case. Take for instance news reporting during large disasters, in which case the structure of a discussion is not yet clear and may change rapidly. Adding to this is that there is currently no agreed upon definition as to what exactly defines controversy. It is only agreed that controversy arises when there is a large debate by people with opposing viewpoints, but we do not yet understand which are the characteristic aspects and how they can be measured. In this paper we use the collective intelligence of the crowd in order to gain a better understanding of controversy by evaluating the aspects that have impact on it….(More)”

See also http://crowdtruth.org/
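One simple way to operationalise “the collective intelligence of the crowd” for this task is to aggregate per-aspect crowd ratings into a controversy score, treating inter-annotator disagreement itself as a weak signal. The aspect names, weights, and scoring below are a hypothetical sketch, not the paper’s actual ControCurator model.

```python
import numpy as np

# Hypothetical crowd annotations for one news article: each worker rates
# several aspects of controversy on a 0-1 scale (illustrative aspects only).
ASPECTS = ["opposing_viewpoints", "emotional_intensity", "public_interest"]

annotations = np.array([
    # opposing  emotion  interest
    [0.9,       0.8,     0.7],   # worker 1
    [0.8,       0.9,     0.6],   # worker 2
    [0.2,       0.7,     0.8],   # worker 3 (disagrees on opposition)
    [0.9,       0.6,     0.9],   # worker 4
])

# Aspect-level consensus: the mean rating across workers.
aspect_scores = annotations.mean(axis=0)

# Inter-worker disagreement per aspect: if the crowd itself cannot agree
# whether viewpoints oppose, that is treated here as extra evidence of controversy.
aspect_disagreement = annotations.std(axis=0)

# Simple combined score: average consensus, nudged up by average disagreement.
controversy_score = aspect_scores.mean() + 0.5 * aspect_disagreement.mean()

for name, score, dis in zip(ASPECTS, aspect_scores, aspect_disagreement):
    print(f"{name:22s} consensus={score:.2f} disagreement={dis:.2f}")
print(f"overall controversy score: {controversy_score:.2f}")
```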