Can Crowdsourcing Help Make Life Easier For People With Disabilities?


Sean Captain at FastCompany: “These days GPS technology can get you as close as about 10 feet from your destination, close enough to see it—assuming you can see.

But those last few feet are a chasm for the blind (and GPS is sometimes accurate only to within about 30 feet).

“Actually finding the bus stop, not the right street, but standing in the right place when the bus comes, is pretty hard,” says Dave Power, president and CEO of the Perkins School for the Blind near Boston. Helen Keller’s alma mater is developing a mobile app that will provide audio directions—contributed by volunteers—so that blind people can get close enough to the stop for the bus driver to notice them.

Perkins’s app is one of 29 projects that recently received a total of $20 million in funding from Google.org’s Google Impact Challenge: Disabilities awards. Several of the winning initiatives rely on crowdsourced information to help the disabled—be they blind, in a wheelchair, or cognitively impaired. It’s a commonsense approach to tackling big logistical projects in a world full of people who have snippets of downtime during which they might perform bite-size acts of kindness online. But moving these projects from being just clever concepts to extensive services, based on the goodwill of volunteers, is going to be quite a hurdle.

People with limited mobility may have trouble traversing the last few feet between them and a wheelchair ramp, automatic doors, or other accommodations that aren’t easy to find (or may not even exist in some places). Wheelmap, based in Berlin, is trying to help by building online maps of accessible locations. Its website incorporates crowdsourced data. The site lets users type in a city and search for accessible amenities such as restaurants, hotels, and public transit.

Paris-based J’accede (which received 500,000 euros from Google, equivalent to about $565,000) provides similar capabilities in both a website and an app, with a slicker design somewhat resembling TripAdvisor.

Both services have a long way to go. J’accede lists 374 accessible bars/restaurants in its hometown and a modest selection in other French cities like Marseille. “We still have a lot of work to do to cover France,” says J’accede’s president Damien Birambeau in an email. The goal is to go global though, and the site is available in English, German, and Spanish, in addition to French. Likewise, Wheelmap (which got 825,000 euros, or $933,000) performs best in the German capital of Berlin and cities like Hamburg, but is less useful in other places.

These sites face the same challenge as many other volunteer-based, crowdsourced projects: getting a big enough crowd to contribute information to the service. J’accede hopes to make the process easier. In June, it will integrate with Google Places, so contributors will only need to supply details about accommodations at a site; information like the location’s address and phone number will be pulled in automatically. But both J’accede and Wheelmap recognize that crowdsourcing has its limits. They are now going beyond voluntary contributions, setting up automated systems to scrape information from other databases of accessible locations, such as those maintained by governments.
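The integration described above — contributors supply only the accessibility details, while address and phone number are pulled in automatically — can be sketched offline. The record below only mimics the shape of a Places-style API response; the function name and all values are invented for illustration:

```python
def merge_contribution(place_details, accessibility_info):
    """Combine auto-fetched place data with a contributor's accessibility report.

    `place_details` mimics the shape of a Places-style API record;
    the contributor supplies only `accessibility_info`.
    """
    return {
        "name": place_details["name"],
        "address": place_details["formatted_address"],
        "phone": place_details.get("formatted_phone_number", ""),
        "accessibility": accessibility_info,
    }

# All values below are invented for illustration.
place = {
    "name": "Cafe Exemple",
    "formatted_address": "1 Rue Imaginaire, 75000 Paris",
    "formatted_phone_number": "+33 1 00 00 00 00",
}
report = {"step_free_entrance": True, "accessible_restroom": False}
entry = merge_contribution(place, report)
```

The design point is the division of labor: the volunteer's time is spent only on the facts a database cannot supply.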

Wheelmap and J’accede are dwarfed by general-interest crowdsourced sites like TripAdvisor and Yelp, which offer some information about accessibility, too. For instance, among the many filters they offer users searching for restaurants—such as price range and cuisine type—TripAdvisor and Yelp both offer a Wheelchair Accessible checkbox. Applying that filter to Parisian establishments brings up about 1,000 restaurants on TripAdvisor and 2,800 in Yelp.

So what can Wheelmap and J’accede provide that the big players can’t? Details. “A person in a wheelchair, for example, will face different obstacles than a partially blind person or a person with cognitive disabilities,” says Birambeau. “These different needs and profiles means that we need highly detailed information about the accessibility of public places.”…(More)”

Design principles for engaging and retaining virtual citizen scientists


Dara M. Wald, Justin Longo, and A. R. Dobell at Conservation Biology: “Citizen science initiatives encourage volunteer participants to collect and interpret data and contribute to formal scientific projects. The growth of virtual citizen science (VCS), facilitated through websites and mobile applications since the mid-2000s, has been driven by a combination of software innovations and mobile technologies, growing scientific data flows without commensurate increases in resources to handle them, and the desire of internet-connected participants to contribute to collective outputs. However, the increasing availability of internet-based activities requires individual VCS projects to compete for the attention of volunteers and promote their long-term retention. We examined program and platform design principles that might allow VCS initiatives to compete more effectively for volunteers, increase productivity of project participants, and retain contributors over time. We surveyed key personnel engaged in managing a sample of VCS projects to identify the principles and practices they pursued for these purposes and led a team in a heuristic evaluation of volunteer engagement, website or application usability, and participant retention. We received 40 completed survey responses (33% response rate) and completed a heuristic evaluation of 20 VCS program sites. The majority of the VCS programs focused on scientific outcomes, whereas the educational and social benefits of program participation, variables that are consistently ranked as important for volunteer engagement and retention, were incidental. Evaluators indicated usability, across most of the VCS program sites, was higher and less variable than the ratings for participant engagement and retention.
In the context of growing competition for the attention of internet volunteers, increased attention to the motivations of virtual citizen scientists may help VCS programs sustain the necessary engagement and retention of their volunteers….(More)”

Crowdsourced Deliberation: The Case of the Law on Off-Road Traffic in Finland


Tanja Aitamurto and Hélène Landemore in Policy & Internet: “This article examines the emergence of democratic deliberation in a crowdsourced law reform process. The empirical context of the study is a crowdsourced legislative reform in Finland, initiated by the Finnish government. The findings suggest that online exchanges in the crowdsourced process qualify as democratic deliberation according to the classical definition. We introduce the term “crowdsourced deliberation” to mean an open, asynchronous, depersonalized, and distributed kind of online deliberation occurring among self-selected participants in the context of an attempt by government or another organization to open up the policymaking or lawmaking process. The article helps to characterize the nature of crowdsourced policymaking and to understand its possibilities as a practice for implementing open government principles. We aim to make a contribution to the literature on crowdsourcing in policymaking, participatory and deliberative democracy and, specifically, the newly emerging subfield in deliberative democracy that focuses on “deliberative systems.”…(More)”

Four Steps to Enhanced Crowdsourcing


Kendra L. Smith and Lindsey Collins at Planetizen: “Over the past decade, crowdsourcing has grown to significance through crowdfunding, crowd collaboration, crowd voting, and crowd labor. The idea behind crowdsourcing is simple: decentralize decision-making by utilizing large groups of people to assist with solving problems, generating ideas, funding, generating data, and making decisions. We have seen crowdsourcing used in both the private and public sectors. In a previous article, “Empowered Design, By ‘the Crowd,'” we discuss the significant role crowdsourcing can play in urban planning through citizen engagement.

Crowdsourcing in the public sector represents a more inclusive form of governance that incorporates a multi-stakeholder approach; it goes beyond regular forms of community engagement and allows citizens to participate in decision-making. When citizens help inform decision-making, new opportunities are created for cities—opportunities that are beginning to unfold for planners. However, despite its obvious utility, planners underutilize crowdsourcing. A key reason for its underuse can be attributed to a lack of credibility and accountability in crowdsourcing endeavors.

Crowdsourcing credibility speaks to the capacity to trust a source and discern whether information is, indeed, true. While it can be difficult to know if any information is definitively true, indicators of fact or truth include where information was collected, how information was collected, and how rigorously it was fact-checked or peer reviewed. However, in today’s digital universe, individuals can make a habit of posting inaccurate, salacious, malicious, and flat-out false information. The realities of contemporary media make it more difficult to trust crowdsourced information for decision-making, especially for the public sector, where the use of inaccurate information can impact the lives of many and the trajectory of a city. As a result, there is a need to establish accountability measures to enhance crowdsourcing in urban planning.

Establishing Accountability Measures

For urban planners considering crowdsourcing, establishing a system of accountability measures might seem like more effort than it is worth. However, that is simply not true. Recent evidence shows that participation in traditional community engagement (e.g., town halls, forums, city council meetings) is lower than ever. Current engagement also tends to focus on problems in the community rather than the development of the community. Crowdsourcing offers new opportunities for ongoing and sustainable engagement with the community. It can be simple as well.

The following four methods can be used separately or together (we hope they are used together) to help establish accountability and credibility in the crowdsourcing process:

  1. Agenda setting
  2. Growing a crowdsourcing community
  3. Facilitators/subject matter experts (SME)
  4. Microtasking

In addition to boosting credibility, building a framework of accountability measures can help planners and crowdsourcing communities clearly define their work, engage the community, sustain community engagement, acquire help with tasks, obtain diverse opinions, and become more inclusive….(More)”

Can An Online Game Help Create A Better Test For TB?


Esther Landhuis at NPR: “Though it’s the world’s top infectious killer, tuberculosis is surprisingly tricky to diagnose. Scientists think that video gamers can help them create a better diagnostic test.

An online puzzle released Monday will see whether the researchers are right. Players of a Web-based game called EteRNA will try to design a sensor molecule that could potentially make diagnosing TB as easy as taking a home pregnancy test. The TB puzzle marks the launch of “EteRNA Medicine.”

The idea of rallying gamers to fight TB arose as two young Stanford University professors chatted over dinner at a conference last May. Rhiju Das, a biochemist who helped create EteRNA, told bioinformatician Purvesh Khatri about the game, which challenges nonexperts to design RNA molecules that fold into target shapes.

RNA molecules play key roles in biology and disease. Some brain disorders can be traced to problems with RNA folding. Viruses such as H1N1 flu and HIV depend on RNA elements to replicate and infect cells.

Das wants to “fight fire with fire” — that is, to disrupt the RNA involved in a disease or virus by crafting new tools that are themselves made of RNA molecules. EteRNA players learn RNA design principles with each puzzle they solve.

Khatri was intrigued by the notion of engaging the public to solve problems. His lab develops novel diagnostics using publicly available data sets. The team had just published a paper on a set of genes that could help diagnose sepsis and had other papers under review on influenza and TB.

In an “Aha!” moment during their dinner chat, Khatri says, he and Das realized “how awesome it would be to sequentially merge our two approaches — to use public data to find a diagnostic marker for a disease, and then use the public’s help to develop the test.”

TB seemed opportune as it has a simple diagnostic signature — a set of three human genes that turn up or down predictably after TB infection. When checked across gene data on thousands of blood samples from 14 groups of people around the globe, the behavior of the three-gene set readily identified people with active TB, distinguishing them from individuals who had latent TB or other diseases.
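The scoring behind such a signature is simple arithmetic: average the up-regulated genes and subtract the down-regulated ones. The published signature is reported to combine GBP5 and DUSP3 (up in active TB) with KLF2 (down); the sketch below uses those names with invented expression values and is a simplification for illustration, not the study's actual diagnostic pipeline:

```python
def signature_score(expression, up_genes, down_genes):
    """Difference between mean expression of up- and down-regulated signature genes."""
    up = sum(expression[g] for g in up_genes) / len(up_genes)
    down = sum(expression[g] for g in down_genes) / len(down_genes)
    return up - down

# Invented log2 expression values for a single blood sample.
sample = {"GBP5": 9.1, "DUSP3": 8.4, "KLF2": 6.2}
score = signature_score(sample, up_genes=["GBP5", "DUSP3"], down_genes=["KLF2"])
# A higher score would point toward active TB only once a decision threshold,
# derived from cohorts like the 14 groups mentioned above, is applied.
```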

Those findings, published in February, have gotten serious attention — not only from curious patients and doctors but also from humanitarian groups eager to help bring a better TB test to market. It can currently take several tests to tell whether a person has active TB, including a chest X-ray and sputum test. The Bill & Melinda Gates Foundation has started sending data to help the Stanford team validate a test based on the newly identified TB gene signature, says study leader Khatri, who works at the university’s Center for Biomedical Informatics Research….(More)”

Crowdsourcing global governance: sustainable development goals, civil society, and the pursuit of democratic legitimacy


Paper by Joshua C. Gellers in International Environmental Agreements: Politics, Law and Economics: “To what extent can crowdsourcing help members of civil society overcome the democratic deficit in global environmental governance? In this paper, I evaluate the utility of crowdsourcing as a tool for participatory agenda-setting in the realm of post-2015 sustainable development policy. In particular, I analyze the descriptive representativeness (e.g., the degree to which participation mirrors the demographic attributes of non-state actors comprising global civil society) of participants in two United Nations orchestrated crowdsourcing processes—the MY World survey and e-discussions regarding environmental sustainability. I find that there exists a perceptible demographic imbalance among contributors to the MY World survey and considerable dissonance between the characteristics of participants in the e-discussions and those whose voices were included in the resulting summary report. The results suggest that although crowdsourcing may present an attractive technological approach to expand participation in global governance, ultimately the representativeness of that participation and the legitimacy of policy outputs depend on the manner in which contributions are solicited and filtered by international institutions….(More)”
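Descriptive representativeness of the kind the paper measures can be quantified by comparing the demographic distribution of participants against that of the wider population; one standard choice is total variation distance. The function is standard, but the demographic shares below are invented for illustration and are not the paper's data:

```python
def total_variation(participants, population):
    """Half the L1 distance between two demographic distributions:
    0 means perfectly descriptive representation, 1 maximal imbalance."""
    groups = set(participants) | set(population)
    return 0.5 * sum(abs(participants.get(g, 0) - population.get(g, 0)) for g in groups)

# Illustrative (invented) shares by age group for a survey vs. the global public.
survey = {"under 30": 0.62, "30-59": 0.30, "60+": 0.08}
world  = {"under 30": 0.50, "30-59": 0.38, "60+": 0.12}
imbalance = total_variation(survey, world)
```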

NEW Platform for Sharing Research on Opening Governance: The Open Governance Research Exchange (OGRX)


Andrew Young: “Today, The GovLab, in collaboration with founding partners mySociety and the World Bank’s Digital Engagement Evaluation Team, is launching the Open Governance Research Exchange (OGRX), a new platform for sharing research and findings on innovations in governance.

From crowdsourcing to nudges to open data to participatory budgeting, more open and innovative ways to tackle society’s problems and make public institutions more effective are emerging. Yet little is known about what innovations actually work, when, why, for whom and under what conditions.

And anyone seeking existing research is confronted with sources that are widely dispersed across disciplines, often locked behind pay walls, and hard to search because of the absence of established taxonomies. As the demand to confront problems in new ways grows so too does the urgency for making learning about governance innovations more accessible.

As part of GovLab’s broader effort to move from “faith-based interventions” toward more “evidence-based interventions,” OGRX curates and makes accessible the most diverse and up-to-date collection of findings on innovating governance. At launch, the site features over 350 publications spanning a diversity of governance innovation areas, including but not limited to:

Visit ogrx.org to explore the latest research findings, submit your own work for inclusion on the platform, and share knowledge with others interested in using science and technology to improve the way we govern. (More)”

Crowdsourcing Solutions and Crisis Information during the Renaissance


Patrick Meier: “Clearly, crowdsourcing is not new, only the word is. After all, crowdsourcing is a methodology, not a technology nor an industry. Perhaps one of my favorite examples of crowdsourcing during the Renaissance surrounds the invention of the marine chronometer, which completely revolutionized long distance sea travel. Thousands of lives were being lost in shipwrecks because longitude coordinates were virtually impossible to determine in the open seas. Finding a solution to this problem became critical as the Age of Sail dawned on many European empires.

So the Spanish King, Dutch Merchants and others turned to crowdsourcing by offering major prize money for a solution. The British government even launched the “Longitude Prize” which was established through an Act of Parliament in 1714 and administered by the “Board of Longitude.” This board brought together the greatest scientific minds of the time to work on the problem, including Sir Isaac Newton. Galileo was also said to have taken up the challenge.

The main prizes included: “£10,000 for a method that could determine longitude within 60 nautical miles (111 km); £15,000 for a method that could determine longitude within 40 nautical miles (74 km); and £20,000 for a method that could determine longitude within 30 nautical miles (56 km).” Note that £20,000 in 1714 is around $4.7 million today; the $1 million Netflix Prize, launched nearly 300 years later, pales in comparison. In addition, the Board had the discretion to make awards to persons who were making significant contributions to the effort or to provide financial support to those who were working towards a solution, and it could make advances of up to £2,000 for experimental work deemed promising.
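The prize tiers translate directly into clock accuracy, since the Earth rotates 15 degrees of longitude per hour and a degree of longitude spans roughly 60 nautical miles at the equator. A back-of-the-envelope sketch (the six-week voyage length is an assumption used for illustration):

```python
DEG_PER_HOUR = 360 / 24   # Earth rotates 15 degrees of longitude per hour
NM_PER_DEG = 60           # ~60 nautical miles per degree of longitude at the equator

def clock_error_budget(max_error_nm, voyage_days):
    """Return (total, per-day) clock error in seconds that keeps a longitude
    fix within max_error_nm at the equator over a voyage of voyage_days."""
    max_error_deg = max_error_nm / NM_PER_DEG
    total_seconds = max_error_deg / DEG_PER_HOUR * 3600
    return total_seconds, total_seconds / voyage_days

# Top prize tier (30 nautical miles) over an assumed six-week crossing:
total, per_day = clock_error_budget(30, 42)
# total ≈ 120 seconds; per_day ≈ 2.9 seconds of drift per day
```

This recovers the oft-quoted requirement that a chronometer like Harrison's lose no more than about three seconds a day.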

Interestingly, the person who provided the most breakthroughs—and thus received the most prize money—was the son of a carpenter, the self-educated British clockmaker John Harrison.  And so, as noted by Peter LaMotte, “by allowing anyone to participate in solving the problem, a solution was found for a puzzle that had baffled some of the brightest minds in history (even Galileo!). In the end, it was found by someone who would never have been tapped to solve it to begin with.”…(More)”

The Evolution of Wikipedia’s Norm Network


Bradi Heaberlin and Simon DeDeo at Future Internet: “Social norms have traditionally been difficult to quantify. In any particular society, their sheer number and complex interdependencies often limit a system-level analysis. One exception is that of the network of norms that sustain the online Wikipedia community. We study the fifteen-year evolution of this network using the interconnected set of pages that establish, describe, and interpret the community’s norms. Despite Wikipedia’s reputation for ad hoc governance, we find that its normative evolution is highly conservative. The earliest users create norms that both dominate the network and persist over time. These core norms govern both content and interpersonal interactions using abstract principles such as neutrality, verifiability, and assume good faith. As the network grows, norm neighborhoods decouple topologically from each other, while increasing in semantic coherence. Taken together, these results suggest that the evolution of Wikipedia’s norm network is akin to bureaucratic systems that predate the information age….(More)”
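The "core norms dominate the network" finding is a statement about centrality in the page-link graph. A toy degree count over an invented edge list (the page names echo real Wikipedia norms, but the links are made up) shows the shape of such an analysis:

```python
from collections import defaultdict

# Invented network of Wikipedia-style norm pages; an edge means one page links to another.
edges = [
    ("Neutral point of view", "Verifiability"),
    ("Assume good faith", "Neutral point of view"),
    ("Civility", "Assume good faith"),
    ("No original research", "Verifiability"),
    ("Edit warring", "Civility"),
    ("Edit warring", "Assume good faith"),
]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Rank norms by degree: in the paper's terms, high-degree "core" norms
# (abstract early principles) dominate the network.
core = sorted(degree, key=degree.get, reverse=True)
```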

Mexico City is crowdsourcing its new constitution using Change.org in a democracy experiment


Ana Campoy at Quartz: “Mexico City just launched a massive experiment in digital democracy. It is asking its nearly 9 million residents to help draft a new constitution through social media. The crowdsourcing exercise is unprecedented in Mexico—and pretty much everywhere else.

Chilangos, as locals are known, can petition for issues to be included in the constitution through Change.org (link in Spanish), and make their case in person if they gather more than 10,000 signatures. They can also annotate proposals by the constitution drafters via PubPub, an editing platform (Spanish) similar to Google Docs.

The idea, in the words of the mayor, Miguel Angel Mancera, is to “bestow the constitution project (Spanish) with a democratic, progressive, inclusive, civic and plural character.”

There’s a big catch, however. The constitutional assembly—the body that has the final word on the new city’s basic law—is under no obligation to consider any of the citizen input. And then there are the practical difficulties of collecting and summarizing the myriad of views dispersed throughout one of the world’s largest cities.

That makes Mexico City’s public-consultation experiment a big test for the people’s digital power, one being watched around the world.

Fittingly, the idea of crowdsourcing a constitution came about in response to an attempt to limit people power.
For decades, city officials had fought to get out from under the thumb of the federal government, which had the final word on decisions such as who should be the city’s chief of police. This year, finally, they won a legal change that turns the Distrito Federal (federal district), similar to the US’s District of Columbia, into Ciudad de México (Mexico City), a more autonomous entity, more akin to a state. (Confusingly, it’s just part of the larger urban area also colloquially known as Mexico City, which spills into neighboring states.)

However, trying to retain some control, the Mexican congress decided that only 60% of the delegates to the city’s constitutional assembly would be elected by popular vote. The rest will be assigned by the president, congress, and Mancera, the mayor. Mancera is also the only one who can submit a draft constitution to the assembly.

Mancera’s response was to create a committee of some 30 citizens (Spanish), including politicians, human-rights advocates, journalists, and even a Paralympic gold medalist, to write his draft. He also called for the development of mechanisms to gather citizens’ “aspirations, values, and longing for freedom and justice” so they can be incorporated into the final document.

The mechanisms, embedded in an online platform (Spanish) that offers various ways to weigh in, were launched at the end of March and will collect inputs until September 1. The drafting group has until the middle of that month to file its text with the assembly, which has to approve the new constitution by the end of January.

An experiment with few precedents

Mexico City didn’t have a lot of examples to draw on, since not a lot of places have experience with crowdsourcing laws. In the US, a few local lawmakers have used Wiki pages and GitHub to draft bills, says Marilyn Bautista, a lecturer at Stanford Law School who has researched the practice. Iceland—with a population some 27 times smaller than Mexico City’s—famously had its citizens contribute to its constitution with input from social media. The effort failed after the new constitution got stuck in parliament.

In Mexico City, where many citizens already feel left out, the first big hurdle is to convince them it’s worth participating….

Then comes the task of making sense of the cacophony that will likely emerge. Some of the input can be very easily organized—the results of the survey, for example, are being graphed in real time. But there could be thousands of documents and comments on the Change.org petitions and the editing platform.

Ideas are grouped into 18 topics, such as direct democracy, transparency, and economic rights. They are prioritized based on the amount of support they’ve garnered and how relevant they are, said Bernardo Rivera, an adviser for the city. Drafters get a weekly delivery of summarized citizen petitions….
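The city doesn't disclose its exact ranking formula. A minimal sketch, assuming each proposal's support count is simply weighted by an editor-assigned relevance score (both fields and all numbers below are invented):

```python
def prioritize(proposals):
    """Order citizen proposals by support signatures weighted by a
    relevance score (a hypothetical 0-1 factor assigned by reviewers)."""
    return sorted(proposals, key=lambda p: p["support"] * p["relevance"], reverse=True)

# Invented example proposals grouped under topics like those in the article.
ideas = [
    {"topic": "direct democracy", "support": 12000, "relevance": 0.9},
    {"topic": "transparency", "support": 8000, "relevance": 1.0},
    {"topic": "economic rights", "support": 15000, "relevance": 0.5},
]
ranked = prioritize(ideas)
# Weighted scores: 10800 > 8000 > 7500, so raw signature counts alone
# do not determine the order.
```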
An essay about human rights on the PubPub platform. (PubPub)

The most elaborate part of the system is PubPub, an open publishing platform similar to Google Docs, which is based on a project originally developed by MIT’s Media Lab. The drafters are supposed to post essays on how to address constitutional issues, and potentially, the constitution draft itself, once there is one. Only they—or whoever they authorize—will be able to reword the original document.

User comments and edits are recorded on a side panel, with links to the portion of text they refer to. Another screen records every change, so everyone can track which suggestions have made it into the text. Members of the public can also vote comments up or down, or post their own essays….(More).