Human-machine superintelligence pegged as key to solving global problems


Ravi Mandalia at Dispatch Tribunal: “Complex global problems such as climate change and geopolitical conflict need a new approach if we are to solve them, and researchers have suggested that human-machine superintelligence could be the key.

These so-called ‘wicked’ problems are among the most dire and demand our immediate attention. Researchers from the Human Computation Institute (HCI) and Cornell University have presented a new vision of human computation that could help solve them, in an article published in the journal Science.

The scientists behind the article cite how the power of human computation has pushed traditional limits to new heights – achieving what was not possible before. Humans still outperform machines in a great many areas – cognition chief among them – but combining their powers with those of machines yields multidimensional collaborative networks that achieve what traditional problem-solving cannot.

Researchers have already shown that micro-tasking can help with some complex problems – including building the world’s most complete map of human retinal neurons – but this approach isn’t always viable for the much more complex problems of today. An entirely new approach is required to solve “wicked problems” – those that involve many interacting systems that are constantly changing, and whose solutions have unforeseen consequences (e.g., corruption resulting from financial aid given in response to a natural disaster).

Recently developed human computation technologies that provide real-time access to crowd-based inputs could enable the creation of more flexible collaborative environments – setups better suited to addressing the most challenging issues.

This idea is already taking shape in several human computation projects, including YardMap.org, which Cornell launched in 2012 to map global conservation efforts one parcel at a time.

“By sharing and observing practices in a map-based social network, people can begin to relate their individual efforts to the global conservation potential of living and working landscapes,” says Janis Dickinson, Professor and Director of Citizen Science at the Cornell Lab of Ornithology.

YardMap allows participants to interact and build on each other’s work – something that crowdsourcing alone cannot achieve. The project serves as an important model for how such bottom-up, socially networked systems can bring about scalable changes in how we manage residential landscapes.

HCI has recently set out to use crowd-power to accelerate Cornell-based Alzheimer’s disease research. WeCureAlz.com combines two successful microtasking systems into an interactive analytic pipeline that builds blood flow models of mouse brains. The stardust@home system, which was used to search for comet dust in one million images of aerogel, is being adapted to identify stalled blood vessels, which will then be pinpointed in the brain by a modified version of the EyeWire system….(More)”

How Much Development Data Is Enough?


Keith D. Shepherd at Project Syndicate: “Rapid advances in technology have dramatically lowered the cost of gathering data. Sensors in space, the sky, the lab, and the field, along with newfound opportunities for crowdsourcing and widespread adoption of the Internet and mobile telephones, are making large amounts of information available to those for whom it was previously out of reach. A small-scale farmer in rural Africa, for example, can now access weather forecasts and market prices at the tap of a screen.

This data revolution offers enormous potential for improving decision-making at every level – from the local farmer to world-spanning development organizations. But gathering data is not enough. The information must also be managed and evaluated – and doing this properly can be far more complicated and expensive than the effort to collect it. If the decisions to be improved are not first properly identified and analyzed, there is a high risk that much of the collection effort could be wasted or misdirected.

This conclusion is itself based on empirical analysis. The evidence is weak, for example, that monitoring initiatives in agriculture or environmental management have had a positive impact. Quantitative analysis of decisions across many domains, including environmental policy, business investments, and cyber security, has shown that people tend to overestimate the amount of data needed to make a good decision or misunderstand what type of data are needed.

Furthermore, grave errors can occur when large data sets are mined using machine algorithms without first having properly examined the decision that needs to be made. There are many examples of cases in which data mining has led to the wrong conclusion – including in medical diagnoses or legal cases – because experts in the field were not consulted and critical information was left out of the analysis.

Decision science, which combines understanding of behavior with universal principles of coherent decision-making, limits these risks by pairing empirical data with expert knowledge. If the data revolution is to be harnessed in the service of sustainable development, the best practices of this field must be incorporated into the effort.

The first step is to identify and frame frequently recurring decisions. In the field of development, these include large-scale decisions such as spending priorities – and thus budget allocations – by governments and international organizations. But they also include choices made on a much smaller scale: farmers pondering which crops to plant, how much fertilizer to apply, and when and where to sell their produce.

The second step is to build a quantitative model of the uncertainties in such decisions, including the various triggers, consequences, controls, and mitigants, as well as the different costs, benefits, and risks involved. Incorporating – rather than ignoring – difficult-to-measure, highly uncertain factors leads to the best decisions…..

The third step is to compute the value of obtaining additional information – something that is possible only if the uncertainties in all of the variables have been quantified. The value of information is the amount a rational decision-maker would be willing to pay for it. So we need to know where additional data will have value for improving a decision and how much we should spend to get it. In some cases, no further information may be needed to make a sound decision; in others, acquiring further data could be worth millions of dollars….(More)”
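To make the third step concrete, here is a minimal Monte Carlo sketch of the expected value of perfect information (EVPI), the standard decision-science formalization of “the amount a rational decision-maker would be willing to pay” for data. The scenario (a farmer’s fertilizer decision) and every number in it are invented for illustration and do not come from the article:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Step 2 (quantify uncertainty): extra revenue from fertilizing is uncertain.
# Hypothetical prior: Normal(mean=120, sd=80) dollars/ha; fertilizer costs 100 $/ha.
extra_revenue = rng.normal(120, 80, N)
cost = 100.0
net_gain = extra_revenue - cost  # payoff of "fertilize" relative to "don't fertilize"

# Best action under current uncertainty: take the option with the higher
# expected payoff (the "don't fertilize" option has payoff 0 by construction).
ev_current = max(net_gain.mean(), 0.0)

# With perfect information we would choose correctly in every sampled scenario.
ev_perfect = np.maximum(net_gain, 0.0).mean()

# Step 3: EVPI is the ceiling on what any additional data is worth here.
evpi = ev_perfect - ev_current
print(f"EV (current info): ${ev_current:.2f}/ha")
print(f"EV (perfect info): ${ev_perfect:.2f}/ha")
print(f"EVPI:              ${evpi:.2f}/ha")
```

An EVPI near zero means the decision is already clear and further data collection would be wasted; a large EVPI flags exactly where paying for more measurement is rational – which is the article’s point about targeting collection effort.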

How Facebook Makes Us Dumber


Cass Sunstein in BloombergView: “Why does misinformation spread so quickly on social media? Why doesn’t it get corrected? When the truth is so easy to find, why do people accept falsehoods?

A new study focusing on Facebook users provides strong evidence that the explanation is confirmation bias: people’s tendency to seek out information that confirms their beliefs, and to ignore contrary information.

Confirmation bias turns out to play a pivotal role in the creation of online echo chambers. This finding bears on a wide range of issues, including the current presidential campaign, the acceptance of conspiracy theories and competing positions in international disputes.

The new study, led by Michela Del Vicario of Italy’s Laboratory of Computational Social Science, explores the behavior of Facebook users from 2010 to 2014. One of the study’s goals was to test a question that continues to be sharply disputed: When people are online, do they encounter opposing views, or do they create the virtual equivalent of gated communities?

Del Vicario and her coauthors explored how Facebook users spread conspiracy theories (using 32 public web pages); science news (using 35 such pages); and “trolls,” which intentionally spread false information (using two web pages). Their data set is massive: It covers all Facebook posts during the five-year period. They explored which Facebook users linked to one or more of the 69 web pages, and whether they learned about those links from their Facebook friends.

In sum, the researchers find a lot of communities of like-minded people. Even if they are baseless, conspiracy theories spread rapidly within such communities.

More generally, Facebook users tended to choose and share stories containing messages they accept, and to neglect those they reject. If a story fits with what people already believe, they are far more likely to be interested in it and thus to spread it….(More)”

Open data set to reshape charity and activism in 2016


The Guardian: “In 2015 the EU launched the world’s first international data portal, the Chinese government pledged to make state data public, and the UK lost its open data crown to Taiwan. Troves of data were unlocked by governments around the world last year, but the usefulness of much of that data is still to be determined by the civic groups, businesses and governments who use it. So what’s in the pipeline? And how will the open data ecosystem grow in 2016? We asked the experts.

1. Data will be seen as infrastructure (Heather Savory, director general for data capability, Office for National Statistics)….

2. Journalists, charities and civil society bodies will embrace open data (Hetan Shah, executive director, the Royal Statistical Society)…

3. Activists will take it upon themselves to create data (Pavel Richter, chief executive, Open Knowledge International)….


4. Data illiteracy will come at a heavy price (Sir Nigel Shadbolt, principal, Jesus College, Oxford, professorial research fellow in computer science, University of Oxford and chairman and co-founder of the Open Data Institute…)

5. We’ll create better tools to build a web of data (Dr Elena Simperl, associate professor, electronics and computer science, University of Southampton) …(More)”

Playing ‘serious games,’ adults learn to solve thorny real-world problems


Lawrence Susskind and Ella Kim in The Conversation: “…We have been testing the use of role-playing games to promote collaborative decision-making by nations, states and communities. Unlike in online computer games, players in role-playing games interact face-to-face in small groups of six to eight. The games place them in a hypothetical setting that simulates a real-life problem-solving situation. People are often assigned roles that are very different from their real-life roles. This helps them appreciate how their political adversaries view the problem.

Players receive briefing materials to read ahead of time so they can perform their assigned roles realistically. The idea is to reenact the tensions that actual stakeholders will feel when they are making real-life decisions. In the game itself, participants are asked to reach agreement in their roles in 60-90 minutes. (Other games, like the Mercury Game or the Chlorine Game, take longer to play.) If multiple small groups play the game at the same time, the entire room – which may include 100 tables of game players or more – can discuss the results together. In these debriefings, the most potent learning often occurs when players hear about creative moves that others have used to reach agreement.

It can take up to several months to design a game. Designers start by interviewing real-life decision makers to understand how they view the problem. Game designers must also synthesize a great deal of scientific and technical information to present it in the game in a form that anyone can understand. After the design phase, games have to be tested and refined before they are ready for play.

Research shows that this immersive approach to learning is particularly effective for adults. Our own research shows that elected and appointed officials, citizen advocates and corporate leaders can absorb a surprising amount of new scientific information when it is embedded in a carefully crafted role-playing game. In one study of more than 500 people in four New England coastal communities, we found that a significant portion of game players (1) changed their minds about how urgent a threat climate change is; (2) became more optimistic about their local government’s ability to reduce climate change risks; and (3) became more confident that conflicting groups would be able to reach agreement on how to proceed with climate adaptation….

Our conclusion is that “serious games” can prepare citizens and officials to participate successfully in science-based problem-solving. In related research in Ghana and Vietnam, we found that role-playing games had similarly valuable effects. While the agreements reached in games do not necessarily indicate what actual agreements may be reached, they can help officials and stakeholder representatives get a much clearer sense of what might be possible.

We believe that role-playing games can be used in a wide range of situations. We have designed games that have been used in different parts of the world to help all kinds of interest groups work together to draft new environmental regulations. We have brought together adversaries in energy facility siting and waste cleanup disputes to play a game prior to facing off against each other in real life. This approach has also facilitated decisions in regional economic development disputes, water allocation disputes in an international river basin and disputes among aboriginal communities, national governments and private industry….(More)”

Five times Internet activism made a difference


The immediacy of social media, many activists say, allows a rapid spread of information not previously available, with updates possible in near-real time. From the Arab Spring to SOPA to #blacklivesmatter, here’s a look at how online activism has impacted social issues across the globe.

1. Black Lives Matter…

2. Arab Spring…

3. Taiwan’s student protests…

4. Net neutrality…

5. SOPA/PIPA…(More)”

Daedalus Issue on “The Internet”


Press release: “Thirty years ago, the Internet was a network that primarily delivered email among academic and government employees. Today, it is rapidly evolving into a control system for our physical environment through the Internet of Things, as mobile and wearable technology more tightly integrate the Internet into our everyday lives.

How will the future Internet be shaped by the design choices that we are making today? Could the Internet evolve into a fundamentally different platform than the one to which we have grown accustomed? As an alternative to big data, what would it mean to make ubiquitously collected data safely available to individuals as small data? How could we attain both security and privacy in the face of trends that seem to offer neither? And what role do public institutions, such as libraries, have in an environment that becomes more privatized by the day?

These are some of the questions addressed in the Winter 2016 issue of Daedalus on “The Internet.”  As guest editors David D. Clark (Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory) and Yochai Benkler (Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School and Faculty Co-Director of the Berkman Center for Internet and Society at Harvard University) have observed, the Internet “has become increasingly privately owned, commercial, productive, creative, and dangerous.”

Some of the themes explored in the issue include:

  • The conflicts that emerge among governments, corporate stakeholders, and Internet users through choices that are made in the design of the Internet
  • The challenges—including those of privacy and security—that materialize in the evolution from fixed terminals to ubiquitous computing
  • The role of public institutions in shaping the Internet’s privately owned open spaces
  • The ownership and security of data used for automatic control of connected devices, and
  • Consumer demand for “free” services—developed and supported through the sale of user data to advertisers….

Essays in the Winter 2016 issue of Daedalus include:

  • The Contingent Internet by David D. Clark (MIT)
  • Degrees of Freedom, Dimensions of Power by Yochai Benkler (Harvard Law School)
  • Edge Networks and Devices for the Internet of Things by Peter T. Kirstein (University College London)
  • Reassembling Our Digital Selves by Deborah Estrin (Cornell Tech and Weill Cornell Medical College) and Ari Juels (Cornell Tech)
  • Choices: Privacy and Surveillance in a Once and Future Internet by Susan Landau (Worcester Polytechnic Institute)
  • As Pirates Become CEOs: The Closing of the Open Internet by Zeynep Tufekci (University of North Carolina at Chapel Hill)
  • Design Choices for Libraries in the Digital-Plus Era by John Palfrey (Phillips Academy)…(More)

See also: Introduction

Developing Global Norms for Sharing Data and Results during Public Health Emergencies


Paper by Kayvon Modjarrad et al. in PLOS Med: “…When a new or re-emergent pathogen causes a major outbreak, rapid access to both raw and analysed data or other pertinent research findings becomes critical to developing a rapid and effective public health response. Without the timely exchange of information on clinical, epidemiologic, and molecular features of an infectious disease, informed decisions about appropriate responses cannot be made, particularly those that relate to fielding new interventions or adapting existing ones. Failure to share information in a timely manner can have disastrous public health consequences, leading to unnecessary suffering and death. The 2014–2015 Ebola epidemic in West Africa revealed both successful practices and important deficiencies within existing mechanisms for information sharing. For example, trials of two Ebola vaccine candidates (ChAd3-ZEBOV and rVSV-ZEBOV) benefited greatly from an open collaboration between investigators and institutions in Africa, Europe, and North America. These teams, coordinated by the WHO, were able to generate and exchange critical data for the development of urgently needed, novel vaccines along faster timelines than have ever before been achieved. Similarly, some members of the genome sequencing community made viral sequence data publicly available within days of accessing samples, thus adhering to their profession’s long-established principles of rapid, public release of sequence data in any setting. In contrast, the dissemination of surveillance data early in the epidemic was comparatively slow, and in some cases, the criteria for sharing were unclear.

In recognition of the need to streamline mechanisms of data dissemination—globally and in as close to real-time as possible—the WHO held a consultation in Geneva, Switzerland, on 1–2 September 2015 to advance the development of data sharing norms, specifically in the context of public health emergencies….

…preservation of global health requires prioritization of and support for international collaboration. These and other principles were affirmed at the consultation (Table 1) and codified into a consensus statement that was published on the WHO website immediately following the meeting (http://www.who.int/medicines/ebola-treatment/data-sharing_phe/en/). A more comprehensive set of principles and action items was made available in November 2015, including the consensus statement made by the editorial staff of journals that attended the meeting (http://www.who.int/medicines/ebola-treatment/blueprint_phe_data-share-results/en/). The success of prior initiatives to accelerate timelines for reporting clinical trial results has helped build momentum for a broader data sharing agenda. As the quick and transparent dissemination of information is the bedrock of good science and public health practice, it is important that the current trends in data sharing carry over to all matters of acute public health need. Such a global norm would advance the spirit of open collaboration, simplify current mechanisms of information sharing, and potentially save many lives in subsequent outbreaks….(More)”


The Power of the Nudge to Change Our Energy Future


Sebastian Berger in Scientific American: “More than ever, psychology has become influential not only in explaining human behavior, but also as a resource for policy makers to achieve goals related to health, well-being, or sustainability. For example, President Obama signed an executive order directing the government to systematically use behavioral science insights to “better serve the American people.” Not alone in this endeavor, many governments – including the UK, Germany, Denmark, or Australia – are turning to insights that most frequently stem from psychological researchers, but also include insights from behavioral economics, sociology, or anthropology.

Particularly relevant are the analysis and the setting of “default options.” A default is the option that a decision maker receives if he or she does not specifically state otherwise. Are we automatically enrolled in a 401(k), are we organ donors by default, or is the flu shot a standard that is routinely given to all citizens? Research has given us many examples of how and when defaults can promote public safety or wealth.

One of the most important questions facing the planet, however, is how to manage the transition to a carbon-free economy. In a recent paper, Felix Ebeling of the University of Cologne and I tested whether defaults could nudge consumers into choosing a green energy contract over one that relies on conventional energy. The results were striking: setting the default to green energy increased participation nearly tenfold. This is an important result because it tells us that subtle, non-coercive changes in the decision-making environment are enough to produce substantial differences in consumers’ preferences in the domain of clean energy. It changes green energy participation from “hardly anyone” to “almost everyone.” Within the domain of energy behavior alone, one can think of many applications of this finding: for instance, the default engine of new cars could be set to hybrid, with customers needing to actively switch to the standard option; the standard temperature of washing machines could be low, etc….(More)”

This Is How Visualizing Open Data Can Help Save Lives


Alexander Howard at the Huffington Post: “Cities are increasingly releasing online the data they use to make life better for their residents — enabling journalists and researchers to better inform the public.

Los Angeles, for example, has analyzed data about injuries and deaths on its streets and published it online. Now people can check its conclusions and understand why the city’s transportation department prioritizes certain intersections.

The impact from these kinds of investments can lead directly to saving lives and preventing injuries. The work is part of a broader effort around the world to make cities safer.

Like New York City, San Francisco and Portland, Oregon, Los Angeles has adopted Sweden’s “Vision Zero” program as part of its strategy for eliminating traffic deaths. California led the nation in bicycle deaths in 2014.

At visionzero.lacity.org, you can see that the City of Los Angeles is using data visualization to identify the locations of “high injury networks,” or the 6 percent of intersections that account for 65 percent of the severe injuries in the area.
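As a rough sketch of the arithmetic behind such a “high injury network” (an illustration only; the city’s actual data and methodology are not reproduced here), one can rank intersections by severe-injury count and find the smallest set that covers a target share of injuries:

```python
from collections import Counter

# Hypothetical severe-injury records: one intersection ID per injury.
injuries = ["5th & Main", "5th & Main", "Oak & 3rd", "5th & Main",
            "Oak & 3rd", "Pine & 9th", "Elm & 2nd", "Oak & 3rd"]
total_intersections = 200  # every intersection in the network, injured or not

ranked = Counter(injuries).most_common()  # intersections sorted by injury count
target = 0.65 * len(injuries)             # cover 65% of severe injuries

covered, network = 0, []
for intersection, count in ranked:
    if covered >= target:
        break
    network.append(intersection)
    covered += count

pct_network = 100 * len(network) / total_intersections
pct_injuries = 100 * covered / len(injuries)
print(f"{len(network)} intersections ({pct_network:.1f}% of the network) "
      f"account for {pct_injuries:.0f}% of severe injuries")
```

On real collision data, the same greedy ranking is what surfaces the kind of “6 percent of intersections, 65 percent of severe injuries” concentration that the LA map visualizes.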


The work is the result of LA’s partnership with University of Southern California graduate students. As a result of these analyses, the Los Angeles Police Department has been cracking down on jaywalking near the University of Southern California.

Abhi Nemani, the former chief data officer for LA, explained why the city needed to “go back to school” for help.

“In resource-constrained environments — the environment most cities find themselves in these days — you often have to beg, borrow, and steal innovation; particularly so, when it comes to in-demand resources such as data science expertise,” he told the Huffington Post.

“That’s why in Los Angeles, we opted to lean on the community for support: both the growing local tech sector and the expansive academic base. The academic community, in particular, was eager to collaborate with the city. In fact, most — if not all — local institutions reached out to me at some point asking to partner on a data science project with their graduate students.”

The City of Los Angeles is now working with another member of its tech sector to eliminate traffic deaths. DataScience, based in Culver City, California, received $22 million in funding in December to provide predictive insights for customers.

“The City of Los Angeles is very data-driven,” DataScience CEO Ian Swanson told HuffPost. “I commend Mayor Eric Garcetti and the City of Los Angeles on the openness, transparency, and availability of city data. Initiatives like Vision Zero put the City of Los Angeles’ data into action and improve life in this great city.”

DataScience created an interactive online map showing the locations of collisions involving bicycles across the city….(More)”