How Singapore sends daily WhatsApp updates on coronavirus


Medha Basu at GovInsider: “How do you communicate with citizens as a pandemic stirs fear and spreads false news? Singapore has trialled WhatsApp to give daily updates on the Covid-19 virus.

The World Health Organisation’s chief praised Singapore’s reaction to the outbreak. “We are very impressed with the efforts they are making to find every case, follow up with contacts, and stop transmission,” Tedros Adhanom Ghebreyesus said.

Since late January, the government has been providing two to three daily updates on cases via the messaging app. “Fake news is typically propagated through WhatsApp, so messaging with the same interface can help stem this flow,” Sarah Espaldon, Operations Marketing Manager from Singapore’s Open Government Products unit, told GovInsider….

The niche system became newly vital as Covid-19 arrived, with fake news and fear following quickly in a nation that still remembers the fatal SARS outbreak of 2003. The tech had to be upgraded to ensure it could cope with new demand, and get information out rapidly before misinformation could sow discord.

The Open Government Products team used three tools to adapt WhatsApp and create a rapid information-sharing system.

1. AI Translation

Singapore has four official languages – Chinese, English, Malay and Tamil. The government used an AI tool to rapidly translate the material from English, so that every community receives the information as quickly as possible.

An algorithm produces the initial draft of the translation, which is then vetted by civil servants before being sent out on WhatsApp. The AI was trained using text from local government communications, so it is able to translate references and names of Singapore government schemes. This project was built by the Ministry of Communications and Information and the Agency for Science, Technology and Research in collaboration with GovTech.
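
The excerpt does not go into implementation detail, but the draft-then-vet workflow it describes can be sketched roughly as follows. This is a minimal illustration only: machine_translate() is a hypothetical placeholder for the government's translation model, and none of the names reflect the actual system.

```python
# A minimal sketch of a draft-then-vet translation workflow. machine_translate()
# is a hypothetical stand-in for the AI model; this is not the actual pipeline.

OFFICIAL_LANGUAGES = ["zh", "ms", "ta"]  # Chinese, Malay, Tamil (English is the source)


def machine_translate(text: str, target_lang: str) -> str:
    """Placeholder for a translation model trained on local government
    communications (so scheme names and references are handled)."""
    return f"[{target_lang} draft of: {text}]"


def prepare_update(english_text: str) -> dict:
    """Produce machine drafts of an English update for every official language."""
    drafts = {"en": english_text}
    for lang in OFFICIAL_LANGUAGES:
        drafts[lang] = machine_translate(english_text, lang)
    return drafts


def vet_and_release(drafts: dict, reviewer_edits: dict) -> dict:
    """Civil servants correct the drafts before anything is broadcast."""
    return {lang: reviewer_edits.get(lang, draft) for lang, draft in drafts.items()}


if __name__ == "__main__":
    drafts = prepare_update("Two new COVID-19 cases confirmed today.")
    approved = vet_and_release(drafts, reviewer_edits={})  # no corrections needed here
    for lang, text in approved.items():
        print(lang, "->", text)
```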

2. Make it easy to sign up

People specify their desired language through a simple sign-up form. Singapore used FormSG, a tool that allows officials to launch a new mailing list in 30 minutes and connect it to other government systems. A government-built form ensures that data is end-to-end encrypted and connected to the government cloud.
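
As an illustration of how a language preference captured at sign-up could feed language-specific broadcast lists, here is a minimal sketch. The field names are hypothetical and do not reflect FormSG's actual schema or the government's pipeline.

```python
# A minimal sketch of grouping sign-ups by preferred language so each subscriber
# receives updates in the language chosen on the form. Field names are illustrative.
from collections import defaultdict

signups = [
    {"phone": "+65-0000-0001", "language": "en"},
    {"phone": "+65-0000-0002", "language": "zh"},
    {"phone": "+65-0000-0003", "language": "ta"},
]

broadcast_lists = defaultdict(list)
for record in signups:
    broadcast_lists[record["language"]].append(record["phone"])

for lang, numbers in broadcast_lists.items():
    print(f"{lang}: {len(numbers)} subscriber(s)")
```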

3. Fast updates

The updates initially reached people too slowly. It took four hours to add new subscribers to the recipient list, and the system could send only 10 messages per second. “With 500,000 subscribers, it would take almost 14 hours for the last person to get the message,” Espaldon says….(More)”.
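
The quoted figure is easy to verify: at 10 messages per second, 500,000 subscribers need roughly 50,000 seconds, or just under 14 hours. A back-of-the-envelope check (the faster rate shown is purely hypothetical):

```python
# Back-of-the-envelope check of the delivery-time figure quoted above.
def hours_to_reach_all(subscribers: int, messages_per_second: float) -> float:
    """Time for the last subscriber to receive a broadcast, in hours."""
    return subscribers / messages_per_second / 3600

print(round(hours_to_reach_all(500_000, 10), 1))     # ~13.9 hours at 10 msg/s, matching the quote
print(round(hours_to_reach_all(500_000, 1_000), 2))  # ~0.14 hours at a hypothetical 1,000 msg/s
```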

Decide Madrid: A Critical Analysis of an Award-Winning e-Participation Initiative


Paper by Sonia Royo, Vicente Pina and Jaime Garcia-Rayado: “This paper analyzes the award-winning e-participation initiative of the city council of Madrid, Decide Madrid, to identify the critical success factors and the main barriers that are conditioning its performance. An exploratory case study is used as a research technique, including desk research and semi-structured interviews. The analysis distinguishes contextual, organizational and individual-level factors; it considers whether the factors or barriers are more related to the information and communication technology (ICT) component, public sector context or democratic participation; it also differentiates among the different stages of the development of the initiative. Results show that individual and organizational factors related to the public sector context and democratic participation are the most relevant success factors.

The high expectations of citizens explain the high levels of participation in the initial stages of Decide Madrid. However, the lack of transparency and poor functioning of some of its participatory activities (organizational factors related to the ICT and democratic dimensions) are negatively affecting its performance. The software created for this platform, Consul, has been adopted, or is in the process of being implemented, in more than 100 institutions in 33 countries. Therefore, the findings of this research can potentially be useful to improve the performance and sustainability of e-participation platforms worldwide…(More)”.

Frameworks for Collective Intelligence: A Systematic Literature Review


Paper by Shweta Suran, Vishwajeet Pattanaik, and Dirk Draheim: “Over the last few years, Collective Intelligence (CI) platforms have become a vital resource for learning, problem solving, decision-making, and predictions. This rising interest in the topic has led to the development of several models and frameworks available in published literature.

Unfortunately, most of these models are built around domain-specific requirements, i.e., they are often based on the intuitions of their domain experts and developers. This has created a gap in our knowledge of the theoretical foundations of CI systems and models in general. In this article, we attempt to fill this gap by conducting a systematic review of CI models and frameworks, identified from a collection of 9,418 scholarly articles published since 2000. Eventually, we contribute by aggregating the available knowledge from 12 CI models into one novel framework and present a generic model that describes CI systems irrespective of their domains. We add to the previously available CI models by providing a more granular view of how different components of CI systems interact. We evaluate the proposed model by examining it with respect to six popular, ongoing CI initiatives available on the Web….(More)”.

Imagining Regulation Differently: Co-creating for Engagement


Book edited by Morag McDermont, Tim Cole, Janet Newman and Angela Piccini: “There is an urgent need to rethink relationships between systems of government and those who are ‘governed’. This book explores ways of rethinking those relationships by bringing communities normally excluded from decision-making to centre stage to experiment with new methods of regulating for engagement.

Using original, co-produced research, it innovatively shows how we can better use a ‘bottom-up’ approach to design regulatory regimes that recognise the capabilities of communities at the margins and powerfully support the knowledge, passions and creativity of citizens. The authors provide essential guidance for all those working on co-produced research to make impactful change…(More)”.

Crowdsourcing data to mitigate epidemics


Gabriel M Leung and Kathy Leung at The Lancet: “Coronavirus disease 2019 (COVID-19) has spread with unprecedented speed and scale since the first zoonotic event that introduced the causative virus—severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)—into humans, probably during November, 2019, according to phylogenetic analyses suggesting the most recent common ancestor of the sequenced genomes emerged between Oct 23 and Dec 16, 2019. The reported cumulative number of confirmed patients worldwide already exceeds 70 000 in almost 30 countries and territories as of Feb 19, 2020, although the actual number of infections is likely to far outnumber this case count.

During any novel emerging epidemic, let alone one with such magnitude and speed of global spread, a first task is to put together a line list of suspected, probable, and confirmed individuals on the basis of working criteria of the respective case definitions. This line list would allow for quick preliminary assessment of epidemic growth and potential for spread, evidence-based determination of the period of quarantine and isolation, and monitoring of efficiency of detection of potential cases. Frequent refreshing of the line list would further enable real-time updates as more clinical, epidemiological, and virological (including genetic) knowledge become available as the outbreak progresses….

We surveyed different and varied sources of possible line lists for COVID-19 (appendix pp 1–4). A bottleneck remains in carefully collating as much relevant data as possible, sifting through and verifying these data, extracting intelligence to forecast and inform outbreak strategies, and thereafter repeating this process in iterative cycles to monitor and evaluate progress. A possible methodological breakthrough would be to develop and validate algorithms for automated bots to search through cyberspaces of all sorts, by text mining and natural language processing (in languages not limited to English), to expedite these processes. In this era of smartphones and their accompanying applications, the authorities are required to combat not only the epidemic per se, but perhaps an even more sinister outbreak of fake news and false rumours, a so-called infodemic…(More)”.
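
As a toy illustration of the kind of automated text mining the authors call for, the sketch below scans free-text reports for case-like mentions and turns them into draft line-list entries for human verification. The keywords, fields and reports are illustrative only and are not drawn from the authors' work.

```python
# A toy sketch: scan free-text reports for case-status keywords and produce
# draft line-list entries that a human must still verify. Not the authors' method.
import re

reports = [
    "Feb 18: a 45-year-old man in City A confirmed positive for COVID-19.",
    "Local market reopens after cleaning; no new cases reported.",
    "Feb 19: woman, 32, classified as a suspected case pending test results.",
]

CASE_PATTERN = re.compile(r"\b(confirmed|suspected|probable)\b", re.IGNORECASE)

draft_line_list = []
for text in reports:
    match = CASE_PATTERN.search(text)
    if match:
        draft_line_list.append({
            "source_text": text,
            "case_status": match.group(1).lower(),
            "verified": False,  # every entry still needs human vetting
        })

for entry in draft_line_list:
    print(entry["case_status"], "->", entry["source_text"])
```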

Mapping Wikipedia


Michael Mandiberg at The Atlantic: “Wikipedia matters. In a time of extreme political polarization, algorithmically enforced filter bubbles, and fact patterns dismissed as fake news, Wikipedia has become one of the few places where we can meet to write a shared reality. We treat it like a utility, and the U.S. and U.K. trust it about as much as the news.

But we know very little about who is writing the world’s encyclopedia. We do know that just because anyone can edit, doesn’t mean that everyone does: The site’s editors are disproportionately cis white men from the global North. We also know that, as with most of the internet, a small number of the editors do a large amount of the editing. But that’s basically it: In the interest of improving retention, the Wikimedia Foundation’s own research focuses on the motivations of people who do edit, not on those who don’t. The media, meanwhile, frequently focus on Wikipedia’s personality stories, even when covering the bigger questions. And Wikipedia’s own culture pushes back against granular data harvesting: The Wikimedia Foundation’s strong data-privacy rules guarantee users’ anonymity and limit the modes and duration of their own use of editor data.

But as part of my research in producing Print Wikipedia, I discovered a data set that can offer an entry point into the geography of Wikipedia’s contributors. Every time anyone edits Wikipedia, the software records the text added or removed, the time of the edit, and the username of the editor. (This edit history is part of Wikipedia’s ethos of radical transparency: Everyone is anonymous, and you can see what everyone is doing.) When an editor isn’t logged in with a username, the software records that user’s IP address. I parsed all of the 884 million edits to English Wikipedia to collect and geolocate the 43 million IP addresses that have edited English Wikipedia. I also counted 8.6 million username editors who have made at least one edit to an article.
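
The aggregation step described here (mapping anonymous editors' IP addresses to locations and counting edits) can be sketched as follows. geolocate_ip() is a stand-in for a real IP-geolocation database lookup; the addresses and results are invented for illustration, and this is not Mandiberg's actual code.

```python
# A minimal sketch of counting anonymous edits per country from IP addresses.
# geolocate_ip() is a placeholder for an offline IP-geolocation database lookup.
from collections import Counter


def geolocate_ip(ip_address: str) -> str:
    """Placeholder lookup returning an ISO country code for an IP address."""
    fake_db = {"203.0.113.5": "SG", "198.51.100.7": "US", "192.0.2.9": "US"}
    return fake_db.get(ip_address, "UNKNOWN")


anonymous_edit_ips = ["203.0.113.5", "198.51.100.7", "192.0.2.9", "198.51.100.7"]

edits_per_country = Counter(geolocate_ip(ip) for ip in anonymous_edit_ips)
print(edits_per_country.most_common())
```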

The result is a set of maps that offer, for the first time, insight into where the millions of volunteer editors who build and maintain English Wikipedia’s 5 million pages are—and, maybe more important, where they aren’t….

Like the Enlightenment itself, the modern encyclopedia has a history entwined with colonialism. Encyclopédie aimed to collect and disseminate all the world’s knowledge—but in the end, it could not escape the biases of its colonial context. Likewise, Napoleon’s Description de l’Égypte augmented an imperial military campaign with a purportedly objective study of the nation, which was itself an additional form of conquest. If Wikipedia wants to break from the past and truly live up to its goal to compile the sum of all human knowledge, it requires the whole world’s participation….(More)”.

Wisdom or Madness? Comparing Crowds with Expert Evaluation in Funding the Arts


Paper by Ethan R. Mollick and Ramana Nanda: “In fields as diverse as technology entrepreneurship and the arts, crowds of interested stakeholders are increasingly responsible for deciding which innovations to fund, a privilege that was previously reserved for a few experts, such as venture capitalists and grant‐making bodies. Little is known about the degree to which the crowd differs from experts in judging which ideas to fund, and, indeed, whether the crowd is even rational in making funding decisions. Drawing on a panel of national experts and comprehensive data from the largest crowdfunding site, we examine funding decisions for proposed theater projects, a category where expert and crowd preferences might be expected to differ greatly.

We instead find significant agreement between the funding decisions of crowds and experts. Where crowds and experts disagree, it is far more likely to be a case where the crowd is willing to fund projects that experts may not. Examining the outcomes of these projects, we find no quantitative or qualitative differences between projects funded by the crowd alone, and those that were selected by both the crowd and experts. Our findings suggest that crowdfunding can play an important role in complementing expert decisions, particularly in sectors where the crowds are end users, by allowing projects the option to receive multiple evaluations and thereby lowering the incidence of “false negatives.”…(More)”.

Identifying Urban Areas by Combining Human Judgment and Machine Learning: An Application to India


Paper by Virgilio Galdo, Yue Li and Martin Rama: “This paper proposes a methodology for identifying urban areas that combines subjective assessments with machine learning, and applies it to India, a country where several studies see the official urbanization rate as an underestimate. For a representative sample of cities, towns and villages, as administratively defined, human judgment of Google images is used to determine whether they are urban or rural in practice. Judgments are collected across four groups of assessors, differing in their familiarity with India and with urban issues, following two different protocols. The judgment-based classification is then combined with data from the population census and from satellite imagery to predict the urban status of the sample.

The Logit model, LASSO, and random forest methods are applied. These approaches are then used to decide whether each of the out-of-sample administrative units in India is urban or rural in practice. The analysis does not find that India is substantially more urban than officially claimed. However, there are important differences at more disaggregated levels, with “other towns” and “census towns” being more rural, and some southern states more urban, than is officially claimed. The consistency of human judgment across assessors and protocols, the easy availability of crowd-sourcing, and the stability of predictions across approaches suggest that the proposed methodology is a promising avenue for studying urban issues….(More)”.
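
As a rough sketch of the general approach (fit a classifier on the human-labelled sample, then predict urban status for out-of-sample units), consider the following, which uses a random forest as one of the methods named above. The features and data are invented for illustration and do not reproduce the paper's analysis.

```python
# A sketch of the approach: train a classifier on human-labelled administrative
# units using census- and satellite-derived features, then predict urban status
# for out-of-sample units. Features and values are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Columns: population density, share of non-farm workers, night-light intensity
X_labelled = rng.random((200, 3))
y_labelled = (X_labelled[:, 0] + X_labelled[:, 2] > 1.0).astype(int)  # 1 = judged urban

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_labelled, y_labelled)

X_out_of_sample = rng.random((5, 3))
print(model.predict(X_out_of_sample))  # predicted urban (1) / rural (0) status
```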

Collaborative e-Rulemaking, Democratic Bots, and the Future of Digital Democracy


Paper by Oren Perez: “… focuses on “deliberative e-rulemaking”: digital consultation processes that seek to facilitate public deliberation over policy or regulatory proposals. The main challenge of e-rulemaking platforms is to support an “intelligent” deliberative process that enables decision makers to identify a wide range of options, weigh the relevant considerations, and develop epistemically responsible solutions. This article discusses and critiques two approaches to this challenge: the Cornell Regulation Room project and the model of computationally assisted regulatory participation by Livermore et al. It then proceeds to explore two alternative approaches to e-rulemaking: One is based on the implementation of collaborative, wiki-styled tools. This article discusses the findings of an experiment, which was conducted at Bar-Ilan University and explored various aspects of a wiki-based collaborative e-rulemaking system. The second approach is more futuristic, focusing on the potential development of autonomous, artificial democratic agents. This article critically discusses this alternative, also in view of the recent debate regarding the idea of “augmented democracy.”…(More)”.

The Future of Minds and Machines


Report by Aleksandra Berditchevskaia and Peter Baek: “When it comes to artificial intelligence (AI), the dominant media narratives often end up taking one of two opposing stances: AI is the saviour or the villain. Whether it is presented as the technology responsible for killer robots and mass job displacement or the one curing all disease and halting the climate crisis, it seems clear that AI will be a defining feature of our future society. However, these visions leave little room for nuance and informed public debate. They also help propel the typical trajectory followed by emerging technologies; with inevitable regularity we observe the ascent of new technologies to the peak of inflated expectations they will not be able to fulfil, before dooming them to a period languishing in the trough of disillusionment.[1]

There is an alternative vision for the future of AI development. By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way. Clearly defining the problems we want to address and focusing on solutions that result in the most collective benefit can lead us towards a better relationship between machine and human intelligence. By considering AI in the context of large-scale participatory projects across areas such as citizen science, crowdsourcing and participatory digital democracy, we can both amplify what it is possible to achieve through collective effort and shape the future trajectory of machine intelligence. We call this 21st-century collective intelligence (CI).

In The Future of Minds and Machines we introduce an emerging framework for thinking about how groups of people interface with AI and map out the different ways that AI can add value to collective human intelligence and vice versa. The framework has, in large part, been developed through analysis of inspiring projects and organisations that are testing out opportunities for combining AI & CI in areas ranging from farming to monitoring human rights violations. Bringing together these two fields is not easy. The design tensions identified through our research highlight the challenges of navigating this opportunity and selecting the criteria that public sector decision-makers should consider in order to make the most of solving problems with both minds and machines….(More)”.