Platform for Mumbai’s slum entrepreneurs


Springwise: “We recently saw an initiative that empowered startup talent in a Finnish refugee camp, and now Design Museum Dharavi is a mobile museum that will provide a platform for makers in the Mumbai neighborhood of Dharavi.

The initiative is the brainchild of artist Jorge Rubio and the Creative Industries Fund NL. Taking the model of a pop-up, it will stop at various locations throughout the neighborhood. Despite being an ‘informal settlement’, Dharavi is famed for producing very little waste thanks to a culture of recycling and repurposing. The mobile museum will showcase local makers, enable them to connect with potential clients, and run workshops, ultimately elevating global perceptions of life in the so-called ‘slums’. Home to over a million people, Dharavi has the additional tourism pull of having appeared in the film Slumdog Millionaire…(More)”

Innovating and changing the policy-cycle: Policy-makers be prepared!


Marijn Janssen and Natalie Helbig in Government Information Quarterly: “Many policy-makers are struggling to understand participatory governance in the midst of technological changes. Advances in information and communication technologies (ICTs) continue to have an impact on the ways that policy-makers and citizens engage with each other throughout the policy-making process. A set of developments in the areas of opening government data, advanced analytics, visualization, simulation and gaming, and ubiquitous citizen access using mobile and personalized applications is shaping the interactions between policy-makers and citizens. Yet the impact of these developments on policy-makers is unclear. The changing roles of, and new capabilities required from, government are analyzed in this paper using two case studies. Salient new roles for policy-makers, focused on orchestrating the policy-making process, are outlined. Research directions are identified, including understanding the behavior of users, aggregating and analyzing content from scattered resources, and the effective use of the new tools. Understanding policy-makers’ new roles will help to bridge the gap between the potential of tools and technologies and the organizational realities and political contexts. We argue that many examples are available that enable learning from others, in both directions: developed countries’ experiences are useful for developing countries, and experiences from the latter are valuable for the former…(More)”

A Gargantuan Challenge for The Megalopolis: Mexico City calls citizens to help map its complex public bus system


“Mexico City is the largest and oldest urban agglomeration on the American continent. The city is home to an incredible diversity of people and cultures, and its size and diversity also pose certain challenges. At such a scale (the metropolitan area measures 4,887 mi²), transportation is one of the city’s main problems. Finding ways to improve how people move within it requires imagination and cooperation from decision makers and society alike.

The scale and dynamism of Mexico City’s public transport system make it challenging to generate quality information about it. Traditional processes for generating mobility data are time-consuming and expensive. Given this scenario, the best alternative for the city is to include transport users in generating this information.

The megalopolis lacks an updated, open database of its more than 1,500 bus routes. To tackle this problem, Laboratorio para la Ciudad (Mexico City’s experimental office and creative think-tank, reporting to the Mayor) partnered with 12 organizations, including NGOs and other government offices, to develop Mapatón CDMX: a crowdsourcing and gamification experiment to map the city’s bus routes through civic collaboration and technology.

After one year of designing and testing a strategy, the team behind Mapatón CDMX is calling on citizens to map the public transport system by participating in a city game from January 29 to February 14, 2016. The game’s goal is to map routes of licensed public transport (buses, minibuses, and vans) from start to finish in order to score points, which is done through an app for Android devices that gathers GPS data from the user inside the bus.

Mappers will participate individually or in groups with friends and family for two weeks. As an incentive, once the mapping marathon is finished, the participants with the highest scores will earn cash prizes and electronic devices. (A smart algorithm creates incentives to map the longest or most ignored routes, giving mappers extra points; a sketch of this idea appears below.) Most valuable of all, the resulting data will be openly available at the end of February 2016, much faster and cheaper than with traditional processes.
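The organizers have not published the scoring algorithm, but a minimal sketch of how such an incentive weighting could work might look like the following. The function name, weights, and bonus formula are illustrative assumptions, not Mapatón’s actual rules.

```python
# Hypothetical sketch of an incentive-weighted scoring rule of the kind
# Mapatón CDMX describes: longer and less-mapped routes earn extra points.
# All weights below are illustrative assumptions, not the real algorithm.

def route_score(length_km: float, times_already_mapped: int,
                base_points_per_km: float = 10.0) -> float:
    """Score one completed route trace."""
    base = base_points_per_km * length_km
    # Bonus for routes few others have mapped ("most ignored"): an unmapped
    # route doubles the base score; heavily mapped routes approach 1x.
    rarity_bonus = 1.0 + 1.0 / (1 + times_already_mapped)
    return base * rarity_bonus

# A 12 km route nobody has mapped yet outscores the same route
# after five other players have already covered it.
print(route_score(12, 0))  # 240.0
print(route_score(12, 5))  # 140.0
```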

Mapatón CDMX is an innovative and effective way to generate updated, open information about transport routes, as the game harnesses the collective intelligence of the gargantuan city. Organizers expect the open database to be used by anyone to create, for example, data-driven policy, academic analyses, maps for users, applications, and visualizations, among many other digital products…(More)”

What Is Citizen Science? – A Scientometric Meta-Analysis


Christopher Kullenberg and Dick Kasperowski at PLOS One: “The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and as a way of promoting political decision processes involving the environment and health.

Objective

In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time, identify which strands of research have adopted CS, and assess the scientific output achieved in CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms.

Results

Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation, and ecology, and utilizes CS mainly as a methodology for collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data…(More)”

Crowdsourcing Diagnosis for Patients With Undiagnosed Illnesses: An Evaluation of CrowdMed


Paper by Ashley N.D. Meyer et al. in the Journal of Medical Internet Research: “Background: Despite visits to multiple physicians, many patients remain undiagnosed. A new online program, CrowdMed, aims to leverage the “wisdom of the crowd” by giving patients an opportunity to submit their cases and interact with case solvers to obtain diagnostic possibilities.

Objective: To describe CrowdMed and provide an independent assessment of its impact.

Methods: Patients submit their cases online to CrowdMed and case solvers sign up to help diagnose patients. Case solvers attempt to solve patients’ diagnostic dilemmas and often have an interactive online discussion with patients, including an exchange of additional diagnostic details. At the end, patients receive detailed reports containing diagnostic suggestions to discuss with their physicians and fill out surveys about their outcomes. We independently analyzed data collected from cases between May 2013 and April 2015 to determine patient and case solver characteristics and case outcomes.

Results: During the study period, 397 cases were completed. These patients had previously visited a median of 5 physicians, incurred a median of US $10,000 in medical expenses, spent a median of 50 hours researching their illnesses online, and had symptoms for a median of 2.6 years. During this period, 357 active case solvers participated, of whom 37.9% (132/348) were male and 58.3% (208/357) worked or studied in the medical industry. About half (50.9%, 202/397) of patients were likely to recommend CrowdMed to a friend, 59.6% (233/391) reported that the process gave insights that led them closer to the correct diagnoses, 57% (52/92) reported estimated decreases in medical expenses, and 38% (29/77) reported estimated improvement in school or work productivity.

Conclusions: Some patients with undiagnosed illnesses reported receiving helpful guidance from crowdsourcing their diagnoses during their difficult diagnostic journeys. However, further development and use of crowdsourcing methods to facilitate diagnosis require long-term evaluation as well as validation to account for patients’ ultimate correct diagnoses…(More)”

Human-machine superintelligence pegged as key to solving global problems


Ravi Mandalia at Dispatch Tribunal: “Complex global problems such as climate change and geopolitical conflict need a new approach if we want to solve them, and researchers have suggested that human-machine superintelligence could be the key.

These so-called ‘wicked’ problems are some of the most dire ones needing our immediate attention, and researchers from the Human Computation Institute (HCI) and Cornell University have presented a new vision of human computation that could help solve them in an article published in the journal Science.

The scientists behind the article cite how the power of human computation has pushed traditional limits to new heights. Humans are still ahead of machines at a great many things – cognitive ability being one of the key areas – but if their powers are combined with those of machines, the result would be multidimensional collaborative networks that achieve what traditional problem-solving cannot.

Researchers have already shown that micro-tasking can help with some complex problems, including building the world’s most complete map of human retinal neurons; however, this approach isn’t always viable for the much more complex problems of today, and an entirely new and innovative approach is required to solve “wicked problems” – those that involve many interacting systems that are constantly changing and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).

Recently developed human computation technologies that provide real-time access to crowd-based inputs could enable the creation of more flexible collaborative environments, and such setups are better suited to addressing the most challenging issues.

This idea is already taking shape in several human computation projects, including YardMap.org, which was launched by Cornell in 2012 to map global conservation efforts one parcel at a time.

“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.

YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable changes in how we manage residential landscapes.

HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system…(More)”

Can crowdsourcing decipher the roots of armed conflict?


Stephanie Kanowitz at GCN: “Researchers at Pennsylvania State University and the University of Texas at Dallas are proving that there’s accuracy, not just safety, in numbers. The Correlates of War project, a long-standing effort that studies the history of warfare, is now experimenting with crowdsourcing as a way to more quickly and inexpensively create a global conflict database that could help explain when and why countries go to war.

The goal is to facilitate the collection, dissemination and use of reliable data in international relations, but a byproduct has emerged: the development of technology that uses machine learning and natural language processing to efficiently, cost-effectively and accurately create databases from news articles that detail militarized interstate disputes.

The project is in its fifth iteration, having released the fourth set of Militarized Interstate Dispute (MID) data in 2014. To create those earlier versions, researchers paid subject-matter experts such as political scientists to read and hand-code newswire articles about disputes, identifying features of possible militarized incidents. Now, however, they’re soliciting help from anyone and everyone — and finding the results are much the same as what the experts produced, except that the results come in faster and at significantly less expense.

As news articles come across the wire, the researchers pull them and formulate questions about them that help evaluate the military events. Next, the articles and questions are loaded onto Amazon Mechanical Turk, a crowdsourcing marketplace. The project assigns articles to readers, who typically spend about 10 minutes reading an article and responding to the questions. The readers submit their answers to the project researchers, who review them. The project assigns the same article to multiple workers and uses computer algorithms to combine the data into one annotation.

A systematic comparison of the crowdsourced responses with those of trained subject-matter experts showed that the crowdsourced work was accurate for 68 percent of the news reports coded. More importantly, the aggregation of answers for each article showed that common answers from multiple readers strongly correlated with correct coding. This allowed researchers to easily flag the articles that required deeper expert involvement and to process the majority of the news items in near real time and at limited cost…(More)”
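The article does not specify the aggregation algorithm, but the behavior it describes — combining several readers’ answers into one annotation and flagging low-agreement items for expert review — can be captured by simple majority voting. A minimal sketch, with the agreement threshold an invented value:

```python
from collections import Counter

# Minimal sketch of majority-vote aggregation across crowd workers, as
# described above: multiple readers answer the same question about an
# article; strong agreement yields an automatic annotation, while weak
# agreement flags the item for expert review. The 0.7 threshold is an
# illustrative assumption, not the project's actual value.

def aggregate(answers: list[str], agreement_threshold: float = 0.7):
    """Return (majority_answer, needs_expert_review)."""
    majority_answer, votes = Counter(answers).most_common(1)[0]
    agreement = votes / len(answers)
    return majority_answer, agreement < agreement_threshold

# Five readers answer the same coding question about one article.
print(aggregate(["yes", "yes", "yes", "yes", "no"]))     # ('yes', False)
print(aggregate(["yes", "no", "unclear", "yes", "no"]))  # ('yes', True) – flagged
```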

How Much Development Data Is Enough?


Keith D. Shepherd at Project Syndicate: “Rapid advances in technology have dramatically lowered the cost of gathering data. Sensors in space, the sky, the lab, and the field, along with newfound opportunities for crowdsourcing and widespread adoption of the Internet and mobile telephones, are making large amounts of information available to those for whom it was previously out of reach. A small-scale farmer in rural Africa, for example, can now access weather forecasts and market prices at the tap of a screen.

This data revolution offers enormous potential for improving decision-making at every level – from the local farmer to world-spanning development organizations. But gathering data is not enough. The information must also be managed and evaluated – and doing this properly can be far more complicated and expensive than the effort to collect it. If the decisions to be improved are not first properly identified and analyzed, there is a high risk that much of the collection effort could be wasted or misdirected.

This conclusion is itself based on empirical analysis. The evidence is weak, for example, that monitoring initiatives in agriculture or environmental management have had a positive impact. Quantitative analysis of decisions across many domains, including environmental policy, business investments, and cyber security, has shown that people tend to overestimate the amount of data needed to make a good decision or misunderstand what type of data are needed.

Furthermore, grave errors can occur when large data sets are mined using machine algorithms without first having properly examined the decision that needs to be made. There are many examples of cases in which data mining has led to the wrong conclusion – including in medical diagnoses and legal cases – because experts in the field were not consulted and critical information was left out of the analysis.

Decision science, which combines understanding of behavior with universal principles of coherent decision-making, limits these risks by pairing empirical data with expert knowledge. If the data revolution is to be harnessed in the service of sustainable development, the best practices of this field must be incorporated into the effort.

The first step is to identify and frame frequently recurring decisions. In the field of development, these include large-scale decisions such as spending priorities – and thus budget allocations – by governments and international organizations. But they also include choices made on a much smaller scale: farmers pondering which crops to plant, how much fertilizer to apply, and when and where to sell their produce.

The second step is to build a quantitative model of the uncertainties in such decisions, including the various triggers, consequences, controls, and mitigants, as well as the different costs, benefits, and risks involved. Incorporating – rather than ignoring – difficult-to-measure, highly uncertain factors leads to the best decisions…

The third step is to compute the value of obtaining additional information – something that is possible only if the uncertainties in all of the variables have been quantified. The value of information is the amount a rational decision-maker would be willing to pay for it. So we need to know where additional data will have value for improving a decision and how much we should spend to get it. In some cases, no further information may be needed to make a sound decision; in others, acquiring further data could be worth millions of dollars…(More)”
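In decision analysis this quantity is known as the expected value of perfect information (EVPI): the expected payoff when the decision is made with the uncertainty resolved, minus the expected payoff of the best decision made without it. A toy worked example in the spirit of the farmer’s choices above, with all probabilities and payoffs invented for illustration:

```python
# Toy EVPI (expected value of perfect information) calculation: a farmer
# chooses between two crops whose payoffs depend on whether the season is
# wet. All numbers are invented for illustration.

p_wet = 0.4
payoffs = {  # (crop, season_is_wet) -> payoff
    ("A", True): 300, ("A", False): 50,   # A thrives in wet years
    ("B", True): 150, ("B", False): 120,  # B is the safer dry-year bet
}

def expected_payoff(crop: str) -> float:
    return p_wet * payoffs[(crop, True)] + (1 - p_wet) * payoffs[(crop, False)]

# Best decision WITHOUT further information: plant the crop with the
# highest expected payoff under the prior (crop A: 150 vs. crop B: 132).
ev_without_info = max(expected_payoff(crop) for crop in "AB")

# WITH a perfect rainfall forecast, pick the best crop in each state.
ev_with_info = (p_wet * max(payoffs[("A", True)], payoffs[("B", True)])
                + (1 - p_wet) * max(payoffs[("A", False)], payoffs[("B", False)]))

evpi = ev_with_info - ev_without_info  # 192 - 150 = 42
print(f"A rational decision-maker would pay at most {evpi:.0f} for the forecast.")
```

Here, rainfall data costing more than 42 would be money better spent elsewhere – exactly the screening that this third step provides before any collection effort begins.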

Collective Intelligence in Law Reforms: When the Logic of the Crowds and the Logic of Policymaking Collide


Paper by Tanja Aitamurto: “…shows how the two virtues of collective intelligence – cognitive diversity and large crowds – turn into perils in crowdsourced policymaking. That is because of a conflict between the logic of the crowds and the logic of policymaking. The crowd’s logic differs from that of traditional policymaking in several respects, among them: in traditional policymaking, a small group of experts makes policy proposals, whereas in crowdsourced policymaking it is a large, anonymous crowd with mixed levels of expertise. The crowd proposes atomic ideas, whereas traditional policymaking is used to dealing with holistic and synthesized proposals. Drawing on data from a crowdsourced law-making process in Finland, the paper shows how the logics of the crowds and of policymaking collide in practice. The conflict prevents policymaking from fully benefiting from the crowd’s input, and it also hinders governments from adopting crowdsourcing more widely as an open policymaking practice…(More)”

Living Labs: Concepts, Tools and Cases


Introduction: “This special issue on “Living labs: concepts, tools and cases” comes 10 years after the first scientific publications that defined the notion of living labs, but more than 15 years after the appearance of the first living lab projects (Ballon et al., 2005; Eriksson et al., 2005). This five-year gap demonstrates the extent to which living labs have been a practice-driven phenomenon. Right up to this day, they represent a pragmatic approach to innovation (of information and communication technologies [ICTs] and other artefacts), characterised by, among other things, experimentation in real life and the active involvement of users.

While there is now a certain body of literature that attempts to clarify and analyse the concept (Følstad, 2008; Almirall et al., 2012; Leminen et al., 2012), living lab practices are still under-researched, and a theoretical and methodological gap persists, given the limited amount and visibility of living lab literature relative to the rather large community of practice (Schuurman, 2015). The present special issue aims to assist in filling that gap.

This does not mean that the development of living labs has not been informed by scholarly literature previously (Ballon, 2015). Cornerstones include von Hippel’s (1988) work on user-driven innovation because of its emphasis on the ability of so-called lead users, rather than manufacturers, to create (mainly ICT) innovations. Another cornerstone is Silverstone’s (1993) theory on the domestication of ICTs that frames technology adoption as an ongoing struggle between users and technology where the user attempts to take control of the technological artefact and the technology comes to be fitted to users’ daily routines. It has been said that, in living labs, von Hippel’s concept of user-driven design and Silverstone’s insights into the appropriation of technologies are coupled dynamically through experimentation (Frissen and Van Lieshout, 2006).

The concept of stigmergy, which refers to addressing complex problems through the collective, yet uncoordinated, actions and interactions of communities of individuals, has gradually become the third foundational element, as social media have provided online platforms for stigmergic behaviour, which has subsequently been linked to the “spontaneous” emergence of innovations (Pallot et al., 2010; Kiemen and Ballon, 2012). A fourth cornerstone is the literature on open and business model innovation, which argues that today’s fast-paced innovation landscape requires collaboration between multiple business and institutional stakeholders, and that the business should use these joint innovation endeavours to find the right “business architecture” (Chesbrough, 2003; Mitchell and Coles, 2003)…(More)”