Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland


Tanja Aitamurto and Hélène Landemore in Policy & Internet: “This article examines the emergence of democratic deliberation in a crowdsourced law reform process. The empirical context of the study is a crowdsourced legislative reform in Finland, initiated by the Finnish government. The findings suggest that online exchanges in the crowdsourced process qualify as democratic deliberation according to the classical definition. We introduce the term “crowdsourced deliberation” to mean an open, asynchronous, depersonalized, and distributed kind of online deliberation occurring among self-selected participants in the context of an attempt by government or another organization to open up the policymaking or lawmaking process. The article helps to characterize the nature of crowdsourced policymaking and to understand its possibilities as a practice for implementing open government principles. We aim to make a contribution to the literature on crowdsourcing in policymaking, participatory and deliberative democracy and, specifically, the newly emerging subfield in deliberative democracy that focuses on “deliberative systems.”…(More)”

Four Steps to Enhanced Crowdsourcing


Kendra L. Smith and Lindsey Collins at Planetizen: “Over the past decade, crowdsourcing has grown in significance through crowdfunding, crowd collaboration, crowd voting, and crowd labor. The idea behind crowdsourcing is simple: decentralize decision-making by utilizing large groups of people to assist with solving problems, generating ideas, funding, generating data, and making decisions. We have seen crowdsourcing used in both the private and public sectors. In a previous article, “Empowered Design, By ‘the Crowd,'” we discuss the significant role crowdsourcing can play in urban planning through citizen engagement.

Crowdsourcing in the public sector represents a more inclusive form of governance that incorporates a multi-stakeholder approach; it goes beyond regular forms of community engagement and allows citizens to participate in decision-making. When citizens help inform decision-making, new opportunities are created for cities—opportunities that are beginning to unfold for planners. However, despite its obvious utility, planners underutilize crowdsourcing. A key reason for its underuse can be attributed to a lack of credibility and accountability in crowdsourcing endeavors.

Crowdsourcing credibility speaks to the capacity to trust a source and discern whether information is, indeed, true. While it can be difficult to know if any information is definitively true, indicators of fact or truth include where information was collected, how information was collected, and how rigorously it was fact-checked or peer reviewed. However, in the digital universe of today, individuals can make a habit of posting inaccurate, salacious, malicious, and flat-out false information. The realities of contemporary media make it more difficult to trust crowdsourced information for decision-making, especially for the public sector, where the use of inaccurate information can impact the lives of many and the trajectory of a city. As a result, there is a need to establish accountability measures to enhance crowdsourcing in urban planning.

Establishing Accountability Measures

For urban planners considering crowdsourcing, establishing a system of accountability measures might seem like more effort than it is worth. However, that is simply not true. Recent evidence shows that participation in traditional community engagement (e.g., town halls, forums, city council meetings) is lower than ever. Current engagement also tends to focus on problems in the community rather than the development of the community. Crowdsourcing offers new opportunities for ongoing and sustainable engagement with the community. It can be simple as well.

The following four methods can be used separately or together (we hope they are used together) to help establish accountability and credibility in the crowdsourcing process:

  1. Agenda setting
  2. Growing a crowdsourcing community
  3. Facilitators/subject matter experts (SME)
  4. Microtasking

In addition to boosting credibility, building a framework of accountability measures can help planners and crowdsourcing communities clearly define their work, engage the community, sustain community engagement, acquire help with tasks, obtain diverse opinions, and become more inclusive….(More)”

Can An Online Game Help Create A Better Test For TB?


Esther Landhuis at NPR: “Though it’s the world’s top infectious killer, tuberculosis is surprisingly tricky to diagnose. Scientists think that video gamers can help them create a better diagnostic test.

An online puzzle released Monday will see whether the researchers are right. Players of a Web-based game called EteRNA will try to design a sensor molecule that could potentially make diagnosing TB as easy as taking a home pregnancy test. The TB puzzle marks the launch of “EteRNA Medicine.”

The idea of rallying gamers to fight TB arose as two young Stanford University professors chatted over dinner at a conference last May. Rhiju Das, a biochemist who helped create EteRNA, told bioinformatician Purvesh Khatri about the game, which challenges nonexperts to design RNA molecules that fold into target shapes.

RNA molecules play key roles in biology and disease. Some brain disorders can be traced to problems with RNA folding. Viruses such as H1N1 flu and HIV depend on RNA elements to replicate and infect cells.

Das wants to “fight fire with fire” — that is, to disrupt the RNA involved in a disease or virus by crafting new tools that are themselves made of RNA molecules. EteRNA players learn RNA design principles with each puzzle they solve.
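The design task the players face can be pictured with a toy check of base-pairing rules. The sketch below is a simplified illustration of one constraint in RNA design, not EteRNA's actual folding or scoring model:

```python
# Toy illustration of RNA base-pairing rules: the Watson-Crick pairs
# (A-U, G-C) plus the G-U "wobble" pair. This is a simplified sketch
# of one design constraint, not EteRNA's folding engine.

VALID_PAIRS = {("A", "U"), ("U", "A"),
               ("G", "C"), ("C", "G"),
               ("G", "U"), ("U", "G")}

def can_pair(seq, i, j):
    """Return True if the bases at positions i and j can form a pair."""
    return (seq[i], seq[j]) in VALID_PAIRS

def satisfies_stem(seq, pairs):
    """Check whether a candidate sequence can realize every required
    pair in a target secondary structure (given as index pairs)."""
    return all(can_pair(seq, i, j) for i, j in pairs)

# A hairpin whose stem requires positions (0,7), (1,6), (2,5) to pair:
target = [(0, 7), (1, 6), (2, 5)]
print(satisfies_stem("GCGAAUGC", target))  # True
```

Real puzzles add thermodynamics and competing folds on top of rules like these, which is what makes human intuition valuable.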

Khatri was intrigued by the notion of engaging the public to solve problems. His lab develops novel diagnostics using publicly available data sets. The team had just published a paper on a set of genes that could help diagnose sepsis and had other papers under review on influenza and TB.

In an “Aha!” moment during their dinner chat, Khatri says, he and Das realized “how awesome it would be to sequentially merge our two approaches — to use public data to find a diagnostic marker for a disease, and then use the public’s help to develop the test.”

TB seemed opportune as it has a simple diagnostic signature — a set of three human genes that turn up or down predictably after TB infection. When checked across gene data on thousands of blood samples from 14 groups of people around the globe, the behavior of the three-gene set readily identified people with active TB, distinguishing them from individuals who had latent TB or other diseases.
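In spirit, a signature like this reduces to simple arithmetic on expression values. The sketch below uses placeholder gene names and an illustrative scoring rule; the actual genes, weights, and threshold are those reported in the published study:

```python
# Illustrative sketch of scoring a three-gene diagnostic signature:
# genes that go UP with active TB raise the score, the gene that goes
# DOWN lowers it. Gene names here are placeholders, not the published set.

def signature_score(expr):
    up = (expr["UP_GENE_1"] + expr["UP_GENE_2"]) / 2.0
    return up - expr["DOWN_GENE"]

def classify(expr, threshold=1.0):
    """Call 'active TB' when the score clears a chosen threshold;
    in practice the threshold would be fit on training cohorts."""
    return "active TB" if signature_score(expr) > threshold else "not active TB"

active = {"UP_GENE_1": 5.0, "UP_GENE_2": 4.0, "DOWN_GENE": 2.0}   # score 2.5
latent = {"UP_GENE_1": 2.0, "UP_GENE_2": 2.0, "DOWN_GENE": 3.0}   # score -1.0
print(classify(active), classify(latent))
```

The appeal of a signature this small is exactly what the article notes: it could plausibly be read out by a cheap point-of-care sensor rather than a sequencing lab.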

Those findings, published in February, have gotten serious attention — not only from curious patients and doctors but also from humanitarian groups eager to help bring a better TB test to market. It can currently take several tests to tell whether a person has active TB, including a chest X-ray and sputum test. The Bill & Melinda Gates Foundation has started sending data to help the Stanford team validate a test based on the newly identified TB gene signature, says study leader Khatri, who works at the university’s Center for Biomedical Informatics Research….(More)”

Crowdsourcing global governance: sustainable development goals, civil society, and the pursuit of democratic legitimacy


Paper by Joshua C. Gellers in International Environmental Agreements: Politics, Law and Economics: “To what extent can crowdsourcing help members of civil society overcome the democratic deficit in global environmental governance? In this paper, I evaluate the utility of crowdsourcing as a tool for participatory agenda-setting in the realm of post-2015 sustainable development policy. In particular, I analyze the descriptive representativeness (e.g., the degree to which participation mirrors the demographic attributes of non-state actors comprising global civil society) of participants in two United Nations orchestrated crowdsourcing processes—the MY World survey and e-discussions regarding environmental sustainability. I find that there exists a perceptible demographic imbalance among contributors to the MY World survey and considerable dissonance between the characteristics of participants in the e-discussions and those whose voices were included in the resulting summary report. The results suggest that although crowdsourcing may present an attractive technological approach to expand participation in global governance, ultimately the representativeness of that participation and the legitimacy of policy outputs depend on the manner in which contributions are solicited and filtered by international institutions….(More)”

NEW Platform for Sharing Research on Opening Governance: The Open Governance Research Exchange (OGRX)


Andrew Young: “Today, The GovLab, in collaboration with founding partners mySociety and the World Bank’s Digital Engagement Evaluation Team, is launching the Open Governance Research Exchange (OGRX), a new platform for sharing research and findings on innovations in governance.

From crowdsourcing to nudges to open data to participatory budgeting, more open and innovative ways to tackle society’s problems and make public institutions more effective are emerging. Yet little is known about what innovations actually work, when, why, for whom and under what conditions.

And anyone seeking existing research is confronted with sources that are widely dispersed across disciplines, often locked behind pay walls, and hard to search because of the absence of established taxonomies. As the demand to confront problems in new ways grows so too does the urgency for making learning about governance innovations more accessible.

As part of GovLab’s broader effort to move from “faith-based interventions” toward more “evidence-based interventions,” OGRX curates and makes accessible the most diverse and up-to-date collection of findings on innovating governance. At launch, the site features over 350 publications spanning a diversity of governance innovation areas, including but not limited to:

Visit ogrx.org to explore the latest research findings, submit your own work for inclusion on the platform, and share knowledge with others interested in using science and technology to improve the way we govern. (More)”

Crowdsourcing Solutions and Crisis Information during the Renaissance


Patrick Meier: “Clearly, crowdsourcing is not new, only the word is. After all, crowdsourcing is a methodology, not a technology nor an industry. Perhaps one of my favorite examples of crowdsourcing during the Renaissance surrounds the invention of the marine chronometer, which completely revolutionized long distance sea travel. Thousands of lives were being lost in shipwrecks because longitude coordinates were virtually impossible to determine in the open seas. Finding a solution to this problem became critical as the Age of Sail dawned on many European empires.

So the Spanish King, Dutch Merchants and others turned to crowdsourcing by offering major prize money for a solution. The British government even launched the “Longitude Prize” which was established through an Act of Parliament in 1714 and administered by the “Board of Longitude.” This board brought together the greatest scientific minds of the time to work on the problem, including Sir Isaac Newton. Galileo was also said to have taken up the challenge.

The main prizes included: “£10,000 for a method that could determine longitude within 60 nautical miles (111 km); £15,000 for a method that could determine longitude within 40 nautical miles (74 km); and £20,000 for a method that could determine longitude within 30 nautical miles (56 km).” Note that £20,000 in 1714 is around $4.7 million today; the $1 million Netflix Prize, launched some 300 years later, pales in comparison. In addition, “the Board had the discretion to make awards to persons who were making significant contributions to the effort or to provide financial support to those who were working towards a solution. The Board could also make advances of up to £2,000 for experimental work deemed promising.”
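The kilometre figures in the prize tiers follow from the standard conversion of 1 nautical mile = 1.852 km, which a quick arithmetic check confirms:

```python
# Quick arithmetic check of the Longitude Prize accuracy tiers,
# converting each threshold from nautical miles to kilometres.

NMI_TO_KM = 1.852  # definition of the international nautical mile

tiers = {10_000: 60, 15_000: 40, 20_000: 30}  # prize (pounds) -> accuracy (nmi)
for pounds, nmi in tiers.items():
    print(f"£{pounds:,}: within {nmi} nmi ≈ {nmi * NMI_TO_KM:.0f} km")
```

The rounded results (111, 74, and 56 km) match the figures quoted above.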

Interestingly, the person who provided the most breakthroughs—and thus received the most prize money—was the son of a carpenter, the self-educated British clockmaker John Harrison. And so, as noted by Peter LaMotte, “by allowing anyone to participate in solving the problem, a solution was found for a puzzle that had baffled some of the brightest minds in history (even Galileo!). In the end, it was found by someone who would never have been tapped to solve it to begin with.”…(More)”

The Evolution of Wikipedia’s Norm Network


Bradi Heaberlin and Simon DeDeo in Future Internet: “Social norms have traditionally been difficult to quantify. In any particular society, their sheer number and complex interdependencies often limit a system-level analysis. One exception is that of the network of norms that sustain the online Wikipedia community. We study the fifteen-year evolution of this network using the interconnected set of pages that establish, describe, and interpret the community’s norms. Despite Wikipedia’s reputation for ad hoc governance, we find that its normative evolution is highly conservative. The earliest users create norms that both dominate the network and persist over time. These core norms govern both content and interpersonal interactions using abstract principles such as neutrality, verifiability, and assume good faith. As the network grows, norm neighborhoods decouple topologically from each other, while increasing in semantic coherence. Taken together, these results suggest that the evolution of Wikipedia’s norm network is akin to bureaucratic systems that predate the information age….(More)”

Mexico City is crowdsourcing its new constitution using Change.org in a democracy experiment


Ana Campoy at Quartz: “Mexico City just launched a massive experiment in digital democracy. It is asking its nearly 9 million residents to help draft a new constitution through social media. The crowdsourcing exercise is unprecedented in Mexico—and pretty much everywhere else.

Residents can petition for issues to be included in the constitution through Change.org (link in Spanish), and make their case in person if they gather more than 10,000 signatures. They can also annotate proposals by the constitution drafters via PubPub, an editing platform (Spanish) similar to Google Docs.

The idea, in the words of the mayor, Miguel Angel Mancera, is to “bestow the constitution project (Spanish) with a democratic, progressive, inclusive, civic and plural character.”

There’s a big catch, however. The constitutional assembly—the body that has the final word on the city’s new basic law—is under no obligation to consider any of the citizen input. And then there are the practical difficulties of collecting and summarizing the myriad of views dispersed throughout one of the world’s largest cities.

That makes Mexico City’s public-consultation experiment a big test for the people’s digital power, one being watched around the world.

Fittingly, the idea of crowdsourcing a constitution came about in response to an attempt to limit people power.
For decades, city officials had fought to get out from under the thumb of the federal government, which had the final word on decisions such as who should be the city’s chief of police. This year, finally, they won a legal change that turns the Distrito Federal (federal district), similar to the US’s District of Columbia, into Ciudad de México (Mexico City), a more autonomous entity, more akin to a state. (Confusingly, it’s just part of the larger urban area also colloquially known as Mexico City, which spills into neighboring states.)

However, trying to retain some control, the Mexican congress decided that only 60% of the delegates to the city’s constitutional assembly would be elected by popular vote. The rest will be assigned by the president, congress, and Mancera, the mayor. Mancera is also the only one who can submit a draft constitution to the assembly.

Mancera’s response was to create a committee of some 30 citizens (Spanish), including politicians, human-rights advocates, journalists, and even a Paralympic gold medalist, to write his draft. He also called for the development of mechanisms to gather citizens’ “aspirations, values, and longing for freedom and justice” so they can be incorporated into the final document.

The mechanisms, embedded in an online platform (Spanish) that offers various ways to weigh in, were launched at the end of March and will collect inputs until September 1. The drafting group has until the middle of that month to file its text with the assembly, which has to approve the new constitution by the end of January.

An experiment with few precedents

Mexico City didn’t have a lot of examples to draw on, since not a lot of places have experience with crowdsourcing laws. In the US, a few local lawmakers have used Wiki pages and GitHub to draft bills, says Marilyn Bautista, a lecturer at Stanford Law School who has researched the practice. Iceland—with a population some 27 times smaller than Mexico City’s—famously had its citizens contribute to its constitution with input from social media. The effort failed after the new constitution got stuck in parliament.

In Mexico City, where many citizens already feel left out, the first big hurdle is to convince them it’s worth participating….

Then comes the task of making sense of the cacophony that will likely emerge. Some of the input can be very easily organized—the results of the survey, for example, are being graphed in real time. But there could be thousands of documents and comments on the Change.org petitions and the editing platform.

Ideas are grouped into 18 topics, such as direct democracy, transparency and economic rights. They are prioritized based on the amount of support they’ve garnered and how relevant they are, said Bernardo Rivera, an adviser for the city. Drafters get a weekly delivery of summarized citizen petitions….
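The prioritization scheme the adviser describes—support plus relevance—can be sketched as a simple weighted ranking. The weights and field names below are illustrative assumptions, not the platform's actual formula:

```python
# Sketch of ranking citizen petitions by support and relevance, in the
# spirit of the scheme described by the city's advisers. The weighting
# and field names are illustrative assumptions.

def priority(idea, w_support=0.7, w_relevance=0.3):
    """Weighted combination of normalized support and relevance scores."""
    return w_support * idea["support"] + w_relevance * idea["relevance"]

ideas = [
    {"topic": "direct democracy", "support": 0.9, "relevance": 0.6},
    {"topic": "transparency",     "support": 0.5, "relevance": 0.9},
    {"topic": "economic rights",  "support": 0.4, "relevance": 0.4},
]
for idea in sorted(ideas, key=priority, reverse=True):
    print(idea["topic"], round(priority(idea), 2))
```

Whatever the exact formula, some such scoring step is what turns thousands of raw petitions into a weekly digest the drafters can actually read.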
An essay about human rights on the PubPub platform. (PubPub)

The most elaborate part of the system is PubPub, an open publishing platform similar to Google Docs, which is based on a project originally developed by MIT’s Media Lab. The drafters are supposed to post essays on how to address constitutional issues, and potentially, the constitution draft itself, once there is one. Only they—or whoever they authorize—will be able to reword the original document.

User comments and edits are recorded on a side panel, with links to the portion of text they refer to. Another screen records every change, so everyone can track which suggestions have made it into the text. Members of the public can also vote comments up or down, or post their own essays….(More).
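The comment-anchoring model described here—annotations tied to specific portions of text, with up and down votes—can be captured in a minimal data-model sketch. The field names are illustrative assumptions, not PubPub's actual schema:

```python
# Minimal sketch of the comment-anchoring model the article describes:
# each comment is linked to a character span of the essay, and readers
# can vote it up or down. Field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Annotation:
    start: int      # character offset where the comment attaches
    end: int        # character offset where the anchored span ends
    author: str
    text: str
    votes: int = 0

    def excerpt(self, document):
        """Return the portion of the document the comment refers to."""
        return document[self.start:self.end]

doc = "The constitution shall guarantee the right to water."
note = Annotation(start=37, end=51, author="resident42",
                  text="Define 'right to water' concretely.")
note.votes += 1  # an upvote from another reader
print(note.excerpt(doc), note.votes)
```

Anchoring comments to spans rather than to the whole document is what lets a separate change log show exactly which suggestions made it into the text.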

Science to the People


David Lang on how citizen science bridges the gap between science and society: “It’s hard to find a silver lining in the water crisis in Flint, Michigan. The striking images of jugs of brown water being held high in protest are a symbol of institutional failure on a grand scale. It’s a disaster. But even as questions of accountability and remedy remain unanswered, there is already one lesson we can take away: Citizen science can be used as a powerful tool to build (or rebuild) the public’s trust in science.

Because the other striking image from Flint is this: Citizen-scientists sampling and testing their own water, from their homes and neighborhoods, and reporting the results as scientific data. Dr. Marc Edwards is the Virginia Tech civil engineering professor who led the investigation into the lead levels in Flint’s water supply, and in a February 2016 interview with The Chronicle of Higher Education, he gave an important answer about the methods his team used to obtain the data: “Normal people really appreciate good science that’s done in their interest. They stepped forward as citizen-scientists to explore what was happening to them and to their community, we provided some funding and the technical and analytical expertise, and they did all the work. I think that work speaks for itself.”

It’s a subtle but important message: The community is rising up and rallying by using science, not by reacting to it. Other scientists trying to highlight important issues and influence public opinion would do well to take note, because there’s a disconnect between what science reports and what the general public chooses to believe. For instance, 97 percent of scientists agree that the world’s climate is warming, likely due to human activities. Yet only 70 percent of Americans believe that global warming is real. Many of the most important issues of our time have the same, growing gap between scientific and societal consensus: genetically modified foods, evolution, and vaccines are often widely distrusted or disputed despite strong, positive scientific evidence…..

The good news is that we’re learning. Citizen science — the growing trend of involving non-professional scientists in the process of discovery — is proving to be a supremely effective tool. It now includes far more than birders and backyard astronomers, its first amateur champions. Over the past few years, the discipline has been gaining traction and popularity in academic circles too. Involving groups of amateur volunteers is now a proven strategy for collecting data over large geographic areas or over long periods of time. Online platforms like Zooniverse have shown that even an untrained human eye can spot anomalies in everything from wildebeest migrations to Martian surfaces. For certain types of research, citizen science just works.

While a long list of peer-reviewed papers now backs up the efficacy of citizen science, and a series of papers has shown its positive impact on students’ view of science, we’re just beginning to understand the impact of that participation on the wider perception of science. Truthfully, for now, most of what we know so far about its public impact is anecdotal, as in the work in Flint, or even on our online platform for explorers, OpenExplorer….

It makes sense that citizen science should affect public perception of science. The difference between “here are the results of a study” and “please help us in the process of discovery” is profound. It’s the difference between a rote learning moment and an immersive experience. And even if not everyone is getting involved, the fact that this is possible and that some members of a community are engaging makes science instantly more relatable. It creates what Tim O’Reilly calls an “architecture of participation.” Citizen scientists create the best interface for convincing the rest of the populace.

A recent article in Nature argued that the DIY biology community was, in fact, ahead of the scientific establishment in terms of proactively thinking about the safety and ethics of rapidly advancing biotechnology tools. They had to be. For those people opening up community labs so that anyone can come and participate, public health issues can’t be pushed aside or dealt with later. After all, they are the public that will be affected….(More)”

Juries as Problem Solving Institutions


Series of interviews on Collective Problem Solving by Henry Farrell: Over the last two years, a group of scholars from disciplines including political science, political theory, cognitive psychology, information science, statistics and computer science have met under the auspices of the MacArthur Foundation Research Network on Opening Governance. The goal of these meetings has been to bring the insights of different disciplines to bear on fundamental problems of collective problem solving. How do we best solve collective problems? How should we study and think about collective intelligence? How can we apply insights to real world problems? A wide body of work leads us to believe that complex problems are most likely to be solved when people with different viewpoints and sets of skills come together. This means that we can expect that the science of collective problem solving too will be improved when people from diverse disciplinary perspectives work together to generate new insights on shared problems.

Political theorists are beginning to think in different ways about institutions such as juries. Here, the crucial insights will involve how these institutions can address the traditional concerns of political theory, such as justice and recognition, while also solving the complex problem of figuring out how best to resolve disputes, and establishing the guilt or innocence of parties in criminal cases.

Melissa Schwartzberg is an associate professor of political science at New York University, working on the political theory of democratic decision making. I asked her a series of questions about the jury as a problem-solving institution.

Henry: Are there any general ways for figuring out the kinds of issues that juries (based on random selection of citizens and some voting rule) are good at deciding on, and the issues that they might have problems with?

Melissa: This is a difficult question, in part because we don’t have unmediated access to the “true state of the world”: our evidence about jury competence essentially derives from the correlation of jury verdicts with what the judge would have rendered, but obviously that doesn’t mean that the judge was correct. One way around the question is to ask instead what, historically, have been the reasons why we would wish to assign judgment to laypersons: what the “jury of one’s peers” signifies. Placing a body of ordinary citizens between the state and the accused serves as an important protective device, so the use of the jury is quite clearly not all about judgment. But there is a long history of thinking that juries have special access to local knowledge – the established norms, practices, and expectations of a community, and, in earlier periods, knowledge of the parties and the alleged crime – that helps to shed light on why we still think “vicinage” is important…..(More)”