Crowdsourcing City Government: Using Tournaments to Improve Inspection Accuracy


Edward Glaeser, Andrew Hillis, Scott Kominers and Michael Luca in American Economic Review: Papers and Proceedings: “The proliferation of big data makes it possible to better target city services like hygiene inspections, but city governments rarely have the in-house talent needed to develop prediction algorithms. Cities could hire consultants, but a cheaper alternative is to crowdsource competence by making data public and offering a reward for the best algorithm. A simple model suggests that open tournaments dominate consulting contracts when cities can tolerate risk and when there is enough labor with low opportunity costs. We also report on an inexpensive Boston-based restaurant tournament, which yielded algorithms that proved reasonably accurate when tested “out-of-sample” on hygiene inspections….(More)”

 

Platform for Mumbai’s slum entrepreneurs


Springwise: “We recently saw an initiative that empowered startup talent in a Finnish refugee camp, and now Design Museum Dharavi is a mobile museum that will provide a platform for makers in the Mumbai neighborhood.

The initiative is the brainchild of artist Jorge Rubio and Creative Industries Fund NL. Taking the model of a pop-up, it will stop at various locations throughout the neighborhood. Despite being an ‘informal settlement’, Dharavi is famed for producing very little waste thanks to a culture of recycling and repurposing. The mobile museum will showcase local makers, enable them to connect with potential clients and run workshops, ultimately elevating global perceptions of life in the so-called ‘slums’. Home to over a million people, Dharavi also draws tourists after appearing in the film Slumdog Millionaire…..(More)”

A Gargantuan Challenge for The Megalopolis: Mexico City calls citizens to help map its complex public bus system


“Mexico City is the largest and oldest urban agglomeration in the Americas. The city is home to an incredible diversity of people and cultures, but its size and diversity also pose certain challenges. At such a scale (the metropolitan area measures 4,887 mi²), transportation is one of the city’s main problems. Finding ways to improve how people move within it requires imagination and cooperation from decision makers and society alike.

The scale and dynamism of Mexico City’s public transport system make it challenging to generate quality information. Processes for generating mobility data are time-consuming and expensive. Given this, the best alternative for the city is to involve transport users in generating the information themselves.

The megalopolis lacks an updated, open database of its more than 1,500 bus routes. To tackle this problem, Laboratorio para la Ciudad (Mexico City’s experimental office and creative think-tank, reporting to the Mayor) partnered with 12 organizations, including NGOs and other government offices, to develop Mapatón CDMX: a crowdsourcing and gamification experiment to map the city’s bus routes through civic collaboration and technology.

After a year of designing and testing a strategy, the team behind Mapatón CDMX is calling on citizens to map the public transport system by participating in a citywide game from January 29 to February 14, 2016. The game’s goal is to map routes of licensed public transport (buses, minibuses and vans) from start to finish in order to score points, via an app for Android devices that gathers GPS data from the user inside the bus.

Mappers will participate individually or in groups with friends and family for two weeks. As an incentive, once the mapping marathon is finished, the highest-scoring participants will earn cash prizes and electronic devices. (A smart algorithm creates incentives to map the longest or most ignored routes, giving mappers extra points.) Most valuable of all, the resulting data will be openly available at the end of February 2016, much faster and cheaper than with traditional processes.
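The incentive rule described here — extra points for the longest or least-mapped routes — could look something like the following sketch. The actual Mapatón CDMX formula is not public; the function name and weights below are purely illustrative assumptions.

```python
# Hypothetical route-scoring rule in the spirit of Mapaton CDMX:
# reward route length, and add a bonus that shrinks as more mappers
# have already covered the same route. Weights are illustrative only.

def route_score(length_km: float, times_mapped: int,
                points_per_km: float = 10.0,
                rarity_bonus: float = 50.0) -> float:
    """Score one mapped route for one participant."""
    length_points = points_per_km * length_km
    # Bonus decays with how often the route has been mapped before,
    # steering mappers toward ignored routes.
    neglect_points = rarity_bonus / (1 + times_mapped)
    return length_points + neglect_points

print(route_score(10.0, times_mapped=0))  # unmapped 10 km route: 150.0
print(route_score(10.0, times_mapped=4))  # well-covered route: 110.0
```

A rule of this shape pays the same for distance covered but makes a never-mapped route strictly more attractive than a popular one of equal length.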

Mapatón CDMX is an innovative and effective way to generate updated, open information about transport routes, as the game harnesses the collective intelligence of the gargantuan city. Organisers expect the open database to be used by anyone to create, for example, data-driven policy, academic analyses, maps for users, applications, and visualizations, among many other digital products….(More)”

Crowdsourcing Diagnosis for Patients With Undiagnosed Illnesses: An Evaluation of CrowdMed


Paper by Ashley N.D. Meyer et al. in the Journal of Medical Internet Research: “Background: Despite visits to multiple physicians, many patients remain undiagnosed. A new online program, CrowdMed, aims to leverage the “wisdom of the crowd” by giving patients an opportunity to submit their cases and interact with case solvers to obtain diagnostic possibilities.

Objective: To describe CrowdMed and provide an independent assessment of its impact.

Methods: Patients submit their cases online to CrowdMed and case solvers sign up to help diagnose patients. Case solvers attempt to solve patients’ diagnostic dilemmas and often have an interactive online discussion with patients, including an exchange of additional diagnostic details. At the end, patients receive detailed reports containing diagnostic suggestions to discuss with their physicians and fill out surveys about their outcomes. We independently analyzed data collected from cases between May 2013 and April 2015 to determine patient and case solver characteristics and case outcomes.

Results: During the study period, 397 cases were completed. These patients had previously visited a median of 5 physicians, incurred a median of US $10,000 in medical expenses, spent a median of 50 hours researching their illnesses online, and had symptoms for a median of 2.6 years. During this period, 357 active case solvers participated, of whom 37.9% (132/348) were male and 58.3% (208/357) worked or studied in the medical industry. About half (50.9%, 202/397) of patients were likely to recommend CrowdMed to a friend, 59.6% (233/391) reported that the process gave insights that led them closer to the correct diagnoses, 57% (52/92) reported estimated decreases in medical expenses, and 38% (29/77) reported estimated improvement in school or work productivity.

Conclusions: Some patients with undiagnosed illnesses reported receiving helpful guidance from crowdsourcing their diagnoses during their difficult diagnostic journeys. However, further development and use of crowdsourcing methods to facilitate diagnosis requires long-term evaluation as well as validation to account for patients’ ultimate correct diagnoses….(More)”

Human-machine superintelligence pegged as key to solving global problems


Ravi Mandalia at Dispatch Tribunal: “Complex global problems such as climate change and geopolitical conflict need a new approach if we want to solve them, and researchers have suggested that human-machine superintelligence could be the key.

These so-called ‘wicked’ problems are among the most dire issues demanding our immediate attention, and researchers from the Human Computation Institute (HCI) and Cornell University have presented their new vision of human computation that could help solve them in an article published in the journal Science.

The scientists behind the article cite how the power of human computation has pushed traditional limits to new heights. Humans are still ahead of machines at a great many things – cognitive ability being one key area – but if their powers are combined with those of machines, the result would be multidimensional collaborative networks that achieve what traditional problem-solving cannot.

Researchers have already shown that micro-tasking can help with some complex problems, including building the world’s most complete map of human retinal neurons. However, this approach isn’t always viable for the much more complex problems of today, and an entirely new approach is required to solve “wicked problems” – those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).

Recently developed human computation technologies that provide real-time access to crowd-based inputs could enable the creation of more flexible collaborative environments – setups better suited to addressing the most challenging issues.

This idea is already taking shape in several human computation projects, including YardMap.org, which Cornell launched in 2012 to map global conservation efforts one parcel at a time.

“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.

YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable changes in how we manage residential landscapes.

HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system….(More)”

Can crowdsourcing decipher the roots of armed conflict?


Stephanie Kanowitz at GCN: “Researchers at Pennsylvania State University and the University of Texas at Dallas are proving that there’s accuracy, not just safety, in numbers. The Correlates of War project, a long-standing effort that studies the history of warfare, is now experimenting with crowdsourcing as a way to more quickly and inexpensively create a global conflict database that could help explain when and why countries go to war.

The goal is to facilitate the collection, dissemination and use of reliable data in international relations, but a byproduct has emerged: the development of technology that uses machine learning and natural language processing to efficiently, cost-effectively and accurately create databases from news articles that detail militarized interstate disputes.

The project is in its fifth iteration, having released the fourth set of Militarized Interstate Dispute (MID) data in 2014. To create the earlier versions, researchers paid subject-matter experts such as political scientists to read and hand-code newswire articles about disputes, identifying features of possible militarized incidents. Now, however, they’re soliciting help from anyone and everyone — and finding the results are much the same as what the experts produced, except that they come in faster and at significantly less expense.

As news articles come across the wire, the researchers pull them and formulate questions that help evaluate the military events. Next, the articles and questions are loaded onto Amazon Mechanical Turk, a marketplace for crowdsourcing. The project assigns articles to readers, who typically spend about 10 minutes reading an article and responding to the questions. The readers submit their answers to the project researchers, who review them. The project assigns the same article to multiple workers and uses computer algorithms to combine the data into one annotation.

A systematic comparison of the crowdsourced responses with those of trained subject-matter experts showed that the crowdsourced work was accurate for 68 percent of the news reports coded. More important, the aggregation of answers for each article showed that common answers from multiple readers strongly correlated with correct coding. This allowed researchers to easily flag the articles that required deeper expert involvement and process the majority of the news items in near-real time and at limited cost….(more)”
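The aggregation step described above — combining several readers' answers into one annotation and flagging low-agreement articles for deeper expert involvement — can be sketched as a simple majority vote. This is a minimal illustration, not the project's actual algorithm; the threshold value is an assumption.

```python
from collections import Counter

def aggregate(answers, agreement_threshold=0.7):
    """Combine several crowd answers to one coding question via majority
    vote. Returns (winning_answer, needs_expert), where needs_expert is
    True when agreement among readers falls below the threshold."""
    counts = Counter(answers)
    winner, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)
    return winner, agreement < agreement_threshold

# Strong agreement (3 of 4 readers): keep the crowd's answer.
print(aggregate(["yes", "yes", "yes", "no"]))  # ('yes', False)

# Weak agreement: flag the article for a subject-matter expert.
print(aggregate(["yes", "no", "unclear"]))     # ('yes', True)
```

This matches the pattern the researchers report: common answers from multiple readers can be accepted in near-real time, while only the disputed minority of articles needs costly expert review.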

How Much Development Data Is Enough?


Keith D. Shepherd at Project Syndicate: “Rapid advances in technology have dramatically lowered the cost of gathering data. Sensors in space, the sky, the lab, and the field, along with newfound opportunities for crowdsourcing and widespread adoption of the Internet and mobile telephones, are making large amounts of information available to those for whom it was previously out of reach. A small-scale farmer in rural Africa, for example, can now access weather forecasts and market prices at the tap of a screen.

This data revolution offers enormous potential for improving decision-making at every level – from the local farmer to world-spanning development organizations. But gathering data is not enough. The information must also be managed and evaluated – and doing this properly can be far more complicated and expensive than the effort to collect it. If the decisions to be improved are not first properly identified and analyzed, there is a high risk that much of the collection effort could be wasted or misdirected.

This conclusion is itself based on empirical analysis. The evidence is weak, for example, that monitoring initiatives in agriculture or environmental management have had a positive impact. Quantitative analysis of decisions across many domains, including environmental policy, business investments, and cyber security, has shown that people tend to overestimate the amount of data needed to make a good decision or misunderstand what type of data are needed.

Furthermore, grave errors can occur when large data sets are mined using machine algorithms without first having properly examined the decision that needs to be made. There are many examples of cases in which data mining has led to the wrong conclusion – including in medical diagnoses and legal cases – because experts in the field were not consulted and critical information was left out of the analysis.

Decision science, which combines understanding of behavior with universal principles of coherent decision-making, limits these risks by pairing empirical data with expert knowledge. If the data revolution is to be harnessed in the service of sustainable development, the best practices of this field must be incorporated into the effort.

The first step is to identify and frame frequently recurring decisions. In the field of development, these include large-scale decisions such as spending priorities – and thus budget allocations – by governments and international organizations. But it also includes choices made on a much smaller scale: farmers pondering which crops to plant, how much fertilizer to apply, and when and where to sell their produce.

The second step is to build a quantitative model of the uncertainties in such decisions, including the various triggers, consequences, controls, and mitigants, as well as the different costs, benefits, and risks involved. Incorporating – rather than ignoring – difficult-to-measure, highly uncertain factors leads to the best decisions…..

The third step is to compute the value of obtaining additional information – something that is possible only if the uncertainties in all of the variables have been quantified. The value of information is the amount a rational decision-maker would be willing to pay for it. So we need to know where additional data will have value for improving a decision and how much we should spend to get it. In some cases, no further information may be needed to make a sound decision; in others, acquiring further data could be worth millions of dollars….(More)”
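The "value of information" in this third step can be made concrete with a toy calculation: a farmer choosing between a drought-tolerant and a high-yield crop, where a perfect seasonal forecast might change the choice. All numbers below are invented for illustration; the expected-value-of-perfect-information (EVPI) logic is standard decision analysis.

```python
# Toy expected-value-of-perfect-information (EVPI) calculation.
# All probabilities and payoffs are made up for illustration.

p_drought = 0.3  # assumed probability of a drought season

# Profit (in $) for each crop under each weather outcome.
payoffs = {
    "drought_tolerant": {"drought": 800, "normal": 900},
    "high_yield":       {"drought": 200, "normal": 1500},
}

def expected_profit(crop):
    p = payoffs[crop]
    return p_drought * p["drought"] + (1 - p_drought) * p["normal"]

# Best the farmer can do without further information: pick one crop now.
best_without_info = max(expected_profit(c) for c in payoffs)  # 1110.0

# With a perfect forecast, pick the best crop for each outcome.
best_with_info = (p_drought * max(p["drought"] for p in payoffs.values())
                  + (1 - p_drought) * max(p["normal"] for p in payoffs.values()))

# EVPI: the most a rational farmer should pay for the forecast.
evpi = best_with_info - best_without_info  # 180.0
```

Here the forecast is worth $180 per season; paying more than that for weather data would be wasted, which is exactly the kind of conclusion the author argues should precede any collection effort.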

Collective Intelligence in Law Reforms: When the Logic of the Crowds and the Logic of Policymaking Collide


Paper by Tanja Aitamurto: “…shows how the two virtues of collective intelligence – cognitive diversity and large crowds – turn into perils in crowdsourced policymaking. That is because of a conflict between the logic of the crowds and the logic of policymaking. The crowd’s logic differs from that of traditional policymaking in several respects. To mention a few: in traditional policymaking, a small group of experts makes policy proposals, whereas in crowdsourced policymaking it is a large, anonymous crowd with mixed levels of expertise. The crowd proposes atomic ideas, whereas traditional policymaking is used to dealing with holistic, synthesized proposals. Drawing on data from a crowdsourced law-making process in Finland, the paper shows how the logics of the crowds and of policymaking collide in practice. The conflict prevents policymaking from fully benefiting from the crowd’s input, and it also hinders governments from adopting crowdsourcing more widely as a practice for open policymaking….(More)”

What Citizens Can Teach Civil Servants About Open Government


In Governing: “An open government is one that is transparent, participatory and collaborative. But moving from traditional government operating behind closed doors to more open institutions, where civil servants work together with citizens to create policies and solve problems, demands new skills and sensibilities.

As more and more American public-sector leaders embrace the concept of openness as a positive force for governmental effectiveness, they would do well to look toward Brazil’s largest city, where an unusual experiment was just launched: an effort to use a variation on crowdsourcing to retrain Sao Paulo’s 150,000 civil servants. It’s described as the world’s largest open-government training program.

The program, known as Agents of Open Government – part of a wider city initiative called “Sao Paulo Aberta” (Open Sao Paulo) — aims to teach through peer-to-peer learning, where government employees learn from citizens. Twenty-four citizen-led courses that began last month are aimed not only at government employees and elected community representatives but also at social activists and the general population.

Sao Paulo is betting on the radical notion that learning can happen outside of formal civil-service training colleges. The initiative reflects a growing global trend toward recognizing that institutions can become smarter – more effective and efficient – by making use of the skills and experience of those outside of government.

Officials hope to have 25,000 participants over the course of the coming year. To encourage public employees’ participation, city workers who attend the courses gain credits in the municipal evaluation system that allow them to get pay raises….(More)”

Digilantism: An Analysis of Crowdsourcing and the Boston Marathon Bombings


Paper by Johnny Nhan et al.: “This paper explores the aftermath of the Boston Marathon bombing incident and how members of the general public, through the online community Reddit, attempted to assist law enforcement by conducting their own parallel investigations. As we document through an analysis of user posts, Reddit members shared information about the investigation, searched for information that would identify the perpetrators and, in some cases, drew on their own expert knowledge to uncover clues concerning key aspects of the attack. Although the Reddit cyber-sleuths did not ultimately solve the case or provide significant assistance to the police investigation, their actions suggest the potential role the public could play within security networks….(More)”