Coronavirus: seven ways collective intelligence is tackling the pandemic


Article by Kathy Peach: “Tackling the emergence of a new global pandemic is a complex task. But collective intelligence is now being used around the world by communities and governments to respond.

At its simplest, collective intelligence is the enhanced capacity created when distributed groups of people work together, often with the help of technology, to mobilise more information, ideas and insights to solve a problem.

Advances in digital technologies have transformed what can be achieved through collective intelligence in recent years – connecting more of us, augmenting human intelligence with machine intelligence, and helping us to generate new insights from novel sources of data. It is particularly suited to addressing fast-evolving, complex global problems such as disease outbreaks.

Here are seven ways it is tackling the coronavirus pandemic:

1. Predicting and modelling outbreaks

On December 31, 2019, health monitoring platform Blue Dot alerted its clients to the outbreak of a flu-like virus in Wuhan, China – nine days before the World Health Organization (WHO) released a statement about it. It then correctly predicted that the virus would jump from Wuhan to Bangkok, Seoul, Taipei and Tokyo.

Blue Dot combines existing data sets to create new insights. Natural language processing (AI methods that interpret and translate human-generated text) and machine learning techniques that learn from large volumes of data sift through reports of disease outbreaks in animals, news reports in 65 languages, and airline passenger information. It supplements this machine-generated model with human intelligence, drawing on diverse expertise from epidemiologists to veterinarians and ecologists to ensure that its conclusions are valid.

2. Citizen science

The BBC carried out a citizen science project in 2018, which involved members of the public in generating new scientific data about how infections spread. People downloaded an app that monitored their GPS position every hour and asked them to report whom they had encountered or had contact with that day….(More).

Frameworks for Collective Intelligence: A Systematic Literature Review


Paper by Shweta Suran, Vishwajeet Pattanaik, and Dirk Draheim: “Over the last few years, Collective Intelligence (CI) platforms have become a vital resource for learning, problem solving, decision-making, and predictions. This rising interest in the topic has led to the development of several models and frameworks available in published literature.

Unfortunately, most of these models are built around domain-specific requirements, i.e., they are often based on the intuitions of their domain experts and developers. This has created a gap in our knowledge of the theoretical foundations of CI systems and models in general. In this article, we attempt to fill this gap by conducting a systematic review of CI models and frameworks identified from a collection of 9,418 scholarly articles published since 2000. We contribute by aggregating the available knowledge from 12 CI models into one novel framework and present a generic model that describes CI systems irrespective of their domains. We add to the previously available CI models by providing a more granular view of how different components of CI systems interact. We evaluate the proposed model by examining it with respect to six popular, ongoing CI initiatives available on the Web….(More)”.

Mapping Wikipedia


Michael Mandiberg at The Atlantic: “Wikipedia matters. In a time of extreme political polarization, algorithmically enforced filter bubbles, and fact patterns dismissed as fake news, Wikipedia has become one of the few places where we can meet to write a shared reality. We treat it like a utility, and the U.S. and U.K. trust it about as much as the news.

But we know very little about who is writing the world’s encyclopedia. We do know that just because anyone can edit, doesn’t mean that everyone does: The site’s editors are disproportionately cis white men from the global North. We also know that, as with most of the internet, a small number of the editors do a large amount of the editing. But that’s basically it: In the interest of improving retention, the Wikimedia Foundation’s own research focuses on the motivations of people who do edit, not on those who don’t. The media, meanwhile, frequently focus on Wikipedia’s personality stories, even when covering the bigger questions. And Wikipedia’s own culture pushes back against granular data harvesting: The Wikimedia Foundation’s strong data-privacy rules guarantee users’ anonymity and limit the modes and duration of their own use of editor data.

But as part of my research in producing Print Wikipedia, I discovered a data set that can offer an entry point into the geography of Wikipedia’s contributors. Every time anyone edits Wikipedia, the software records the text added or removed, the time of the edit, and the username of the editor. (This edit history is part of Wikipedia’s ethos of radical transparency: Everyone is anonymous, and you can see what everyone is doing.) When an editor isn’t logged in with a username, the software records that user’s IP address. I parsed all of the 884 million edits to English Wikipedia to collect and geolocate the 43 million IP addresses that have edited English Wikipedia. I also counted 8.6 million username editors who have made at least one edit to an article.
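
As a rough illustration of that parsing step, here is a toy sketch that separates logged-in from logged-out editors in simplified edit records. The records and field layout are hypothetical; the real dump parsing and the subsequent IP geolocation are far more involved.

```python
import ipaddress
from collections import Counter

# Hypothetical, simplified edit records: (editor, timestamp).
# In the real history, the editor field holds a username for
# logged-in edits and an IP address for logged-out ones.
edits = [
    ("192.0.2.44", "2019-05-01T12:00:00Z"),
    ("ExampleUser", "2019-05-01T12:03:00Z"),
    ("192.0.2.44", "2019-05-02T09:30:00Z"),
    ("2001:db8::1", "2019-05-03T18:45:00Z"),
]

def is_ip(editor: str) -> bool:
    """True if the editor field parses as an IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(editor)
        return True
    except ValueError:
        return False

# Tally edits per anonymous IP (the candidates for geolocation)
# and per registered username.
ip_edits = Counter(e for e, _ in edits if is_ip(e))
username_edits = Counter(e for e, _ in edits if not is_ip(e))

print(len(ip_edits), "distinct IP editors")
print(len(username_edits), "distinct registered editors")
```

At Wikipedia scale this same split, run over 884 million edit records, yields the 43 million IP addresses the maps are built from.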

The result is a set of maps that offer, for the first time, insight into where the millions of volunteer editors who build and maintain English Wikipedia’s 5 million pages are—and, maybe more important, where they aren’t….

Like the Enlightenment itself, the modern encyclopedia has a history entwined with colonialism. Encyclopédie aimed to collect and disseminate all the world’s knowledge—but in the end, it could not escape the biases of its colonial context. Likewise, Napoleon’s Description de l’Égypte augmented an imperial military campaign with a purportedly objective study of the nation, which was itself an additional form of conquest. If Wikipedia wants to break from the past and truly live up to its goal to compile the sum of all human knowledge, it requires the whole world’s participation….(More)”.

Wisdom or Madness? Comparing Crowds with Expert Evaluation in Funding the Arts


Paper by Ethan R. Mollick and Ramana Nanda: “In fields as diverse as technology entrepreneurship and the arts, crowds of interested stakeholders are increasingly responsible for deciding which innovations to fund, a privilege that was previously reserved for a few experts, such as venture capitalists and grant‐making bodies. Little is known about the degree to which the crowd differs from experts in judging which ideas to fund, and, indeed, whether the crowd is even rational in making funding decisions. Drawing on a panel of national experts and comprehensive data from the largest crowdfunding site, we examine funding decisions for proposed theater projects, a category where expert and crowd preferences might be expected to differ greatly.

We instead find significant agreement between the funding decisions of crowds and experts. Where crowds and experts disagree, it is far more likely to be a case where the crowd is willing to fund projects that experts may not. Examining the outcomes of these projects, we find no quantitative or qualitative differences between projects funded by the crowd alone, and those that were selected by both the crowd and experts. Our findings suggest that crowdfunding can play an important role in complementing expert decisions, particularly in sectors where the crowds are end users, by allowing projects the option to receive multiple evaluations and thereby lowering the incidence of “false negatives.”…(More)”.

Identifying Urban Areas by Combining Human Judgment and Machine Learning: An Application to India


Paper by Virgilio Galdo, Yue Li and Martin Rama: “This paper proposes a methodology for identifying urban areas that combines subjective assessments with machine learning, and applies it to India, a country where several studies see the official urbanization rate as an under-estimate. For a representative sample of cities, towns and villages, as administratively defined, human judgment of Google images is used to determine whether they are urban or rural in practice. Judgments are collected across four groups of assessors, differing in their familiarity with India and with urban issues, following two different protocols. The judgment-based classification is then combined with data from the population census and from satellite imagery to predict the urban status of the sample.

Logit, LASSO and random forest methods are applied. These approaches are then used to decide whether each of the out-of-sample administrative units in India is urban or rural in practice. The analysis does not find that India is substantially more urban than officially claimed. However, there are important differences at more disaggregated levels, with “other towns” and “census towns” being more rural, and some southern states more urban, than is officially claimed. The consistency of human judgment across assessors and protocols, the easy availability of crowdsourcing, and the stability of predictions across approaches suggest that the proposed methodology is a promising avenue for studying urban issues….(More)”.
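
A minimal sketch of this pipeline, with synthetic features and labels standing in for the census, satellite and human-judgment data. Only the plain logit step is shown; the LASSO and random-forest variants would slot in the same way.

```python
import math
import random

random.seed(0)

# Synthetic administrative units: two standardised features (say,
# population density and night-light intensity) and a binary label,
# 1 = judged urban, 0 = judged rural. All values are illustrative.
def make_unit():
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    latent = 1.2 * x1 + 0.8 * x2 + random.gauss(0, 0.5)
    return (x1, x2), int(latent > 0)

data = [make_unit() for _ in range(400)]
n = len(data)

# Plain logistic regression ("Logit") fitted by batch gradient descent.
w1 = w2 = b = 0.0
lr = 0.1
for _ in range(200):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        err = p - y  # gradient of the log-loss w.r.t. the linear score
        g1 += err * x1
        g2 += err * x2
        gb += err
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

# In-sample accuracy; the paper's step of classifying out-of-sample
# units would apply the same fitted model to unseen data.
correct = sum(
    ((w1 * x1 + w2 * x2 + b) > 0) == bool(y) for (x1, x2), y in data
)
accuracy = correct / n
print(f"training accuracy: {accuracy:.2f}")
```

The key design point is that the expensive human judgments are needed only for the training sample; the fitted model then labels the rest of the country cheaply.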

The Future of Minds and Machines


Report by Aleksandra Berditchevskaia and Peter Baek: “When it comes to artificial intelligence (AI), the dominant media narratives often end up taking one of two opposing stances: AI is the saviour or the villain. Whether it is presented as the technology responsible for killer robots and mass job displacement or the one curing all disease and halting the climate crisis, it seems clear that AI will be a defining feature of our future society. However, these visions leave little room for nuance and informed public debate. They also help propel the typical trajectory followed by emerging technologies: with inevitable regularity, new technologies ascend to a peak of inflated expectations they cannot fulfil, before being doomed to a period of languishing in the trough of disillusionment.[1]

There is an alternative vision for the future of AI development. By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way. Clearly defining the problems we want to address and focusing on solutions that result in the most collective benefit can lead us towards a better relationship between machine and human intelligence. By considering AI in the context of large-scale participatory projects across areas such as citizen science, crowdsourcing and participatory digital democracy, we can both amplify what it is possible to achieve through collective effort and shape the future trajectory of machine intelligence. We call this 21st-century collective intelligence (CI).

In The Future of Minds and Machines we introduce an emerging framework for thinking about how groups of people interface with AI and map out the different ways that AI can add value to collective human intelligence and vice versa. The framework has, in large part, been developed through analysis of inspiring projects and organisations that are testing out opportunities for combining AI & CI in areas ranging from farming to monitoring human rights violations. Bringing together these two fields is not easy. The design tensions identified through our research highlight the challenges of navigating this opportunity and selecting the criteria that public sector decision-makers should consider in order to make the most of solving problems with both minds and machines….(More)”.

Wisdom of stakeholder crowds in complex social–ecological systems


Paper by Payam Aminpour et al: “Sustainable management of natural resources requires adequate scientific knowledge about complex relationships between human and natural systems. Such understanding is difficult to achieve in many contexts due to data scarcity and knowledge limitations.

We explore the potential of harnessing the collective intelligence of resource stakeholders to overcome this challenge. Using a fisheries example, we show that by aggregating the system knowledge held by stakeholders through graphical mental models, a crowd of diverse resource users produces a system model of social–ecological relationships that is comparable to the best scientific understanding.

We show that the averaged model from a crowd of diverse resource users outperforms those of more homogeneous groups. Importantly, we find that the averaged model from a larger sample of individuals can perform worse than one constructed from a smaller sample. When mental models are instead averaged within stakeholder-specific subgroups and then aggregated across the subgroup models, this effect is reversed. Our work identifies an inexpensive, yet robust way to develop scientific understanding of complex social–ecological systems by leveraging the collective wisdom of non-scientist stakeholders…(More)”.
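
The two aggregation schemes contrasted above can be sketched as edge-wise averaging of mental models represented as weighted adjacency matrices. The concept names, group sizes and random weights below are purely illustrative, not the paper's data.

```python
import random

random.seed(1)

CONCEPTS = ["nutrients", "prey fish", "fishing pressure", "pike"]
N = len(CONCEPTS)

def random_model():
    """A mental model as a weighted adjacency matrix: entry [i][j] is
    the perceived influence of concept i on concept j, in [-1, 1]."""
    return [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def average_models(models):
    """Edge-wise mean of a list of adjacency matrices."""
    k = len(models)
    return [[sum(m[i][j] for m in models) / k for j in range(N)]
            for i in range(N)]

# Two stakeholder subgroups of very different sizes, e.g. anglers
# and water guards.
anglers = [random_model() for _ in range(30)]
guards = [random_model() for _ in range(5)]

# Naive pooling: the larger subgroup dominates the crowd model.
pooled = average_models(anglers + guards)

# Two-stage aggregation: average within each subgroup first, then
# across subgroup models, giving each knowledge type equal weight.
two_stage = average_models([average_models(anglers),
                            average_models(guards)])
```

The two resulting matrices differ precisely because pooling weights each individual equally while the two-stage scheme weights each subgroup equally, which is the reversal the paper describes.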

Incentive Competitions and the Challenge of Space Exploration


Article by Matthew S. Williams: “Bill Joy, the famed computer engineer who co-founded Sun Microsystems in 1982, once said, “No matter who you are, most of the smartest people work for someone else.” This has come to be known as “Joy’s Law” and is one of the inspirations for concepts such as “crowdsourcing”.

Increasingly, government agencies, research institutions, and private companies are looking to the power of the crowd to find solutions to problems. Challenges are created and prizes offered – that, in basic terms, is an “incentive competition.”

The basic idea of an incentive competition is pretty straightforward. When confronted with a particularly daunting problem, you appeal to the general public to provide possible solutions and offer a reward for the best one. Sounds simple, doesn’t it?

But in fact, this concept flies in the face of conventional problem-solving, in which companies recruit people with knowledge and expertise and solve all problems in-house. This kind of thinking underlies most of our government and business models, but it has some significant limitations….

Another benefit of crowdsourcing is the way it takes advantage of the exponential growth in human population over the past few centuries. Between 1650 and 1800, the global population doubled to reach about 1 billion. It took another 127 years (until 1927) before it doubled again to reach 2 billion.

However, it took only forty-seven years for the population to double again and reach 4 billion (1974), and another twenty-five for it to reach 6 billion (1999). As of 2020, the global population has reached 7.8 billion, and the growth trend is expected to continue for some time.

This growth has paralleled another trend, the rapid development of new ideas in science and technology. Between 1650 and 2020, humanity has experienced multiple technological revolutions, in what is a comparatively very short space of time….(More)”.

The wisdom of crowds: What smart cities can learn from a dead ox and live fish


Portland State University: “In 1906, Francis Galton was at a country fair where attendees had the opportunity to guess the weight of a dead ox. Galton took the guesses of 787 fair-goers and found that the average guess was only one pound off the correct weight — even though many individual guesses were wide of the mark.

This concept, known as “the wisdom of crowds” or “collective intelligence,” has been applied to many situations over the past century, from people estimating the number of jellybeans in a jar to predicting the winners of major sporting events — often with high rates of success. Whatever the problem, the average answer of the crowd seems to be an accurate solution.
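
Galton's procedure amounts to averaging many noisy estimates. A small simulation makes the mechanism concrete; the error distribution here is illustrative (the 1,198-pound figure is from Galton's published account, while he of course worked from the real guess slips):

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198  # the ox in Galton's account weighed 1,198 pounds

# Simulate a crowd of 787 fair-goers: each guess is the true weight
# plus an independent, roughly unbiased individual error.
guesses = [TRUE_WEIGHT + random.gauss(0, 75) for _ in range(787)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)

# Individuals are far off on average, yet their errors cancel.
mean_individual_error = (
    sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)
)

print(f"crowd error: {crowd_error:.1f} lb")
print(f"mean individual error: {mean_individual_error:.1f} lb")
```

The cancellation only works when errors are roughly independent and unbiased, which is why the crowd's diversity matters as much as its size.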

But does this also apply to knowledge about systems, such as ecosystems, health care, or cities? Do we always need in-depth scientific inquiries to describe and manage them — or could we leverage crowds?

This question has fascinated Antonie J. Jetter, associate professor of Engineering and Technology Management, for many years. Now, there’s an answer. A recent study, co-authored by Jetter and published in Nature Sustainability, shows that diverse crowds of local natural resource stakeholders can collectively produce complex environmental models very similar to those of trained experts.

For this study, about 250 anglers, water guards and board members of German fishing clubs were asked to draw connections showing, from their own perspective, how ecological relationships and factors such as nutrients and fishing pressure determine the number of pike in a freshwater lake ecosystem. The individuals’ drawings — or their so-called mental models — were then mathematically combined into a collective model representing their averaged understanding of the ecosystem and compared with the best scientific knowledge on the same subject.

The result is astonishing. If you combine the ideas from many individual anglers by averaging their mental models, the final outcomes correspond remarkably closely to the scientific knowledge of pike ecology — local knowledge of stakeholders produces results that are in no way inferior to those of lengthy and expensive scientific studies….(More)”.

Collective Intelligence in City Design


Idea by Helena Rong and Juncheng Yang: “We propose an interactive design engagement platform that facilitates a continuous conversation between developers, designers and end users, from the pre-design and planning phases all the way to post-occupancy. The platform adopts a citizen-centric, inclusion-oriented approach intended to build trust and to invite active participation in the design and development process from end users of different ages, ethnicities, and social and economic backgrounds. We aim to explore how collective intelligence through citizen engagement could be enabled by data, allowing new collectives to emerge and confronting design as an iterative process involving the scalable cooperation of different actors. Design thus becomes a collaborative and conscious practice, not born of the single mastermind of the architect; rather, its agency is reinforced by a cooperative ideal, enabled by data science, involving institutions, enterprises and single individuals alike….(More)”