Greece used AI to curb COVID: what other nations can learn

Editorial at Nature: “A few months into the COVID-19 pandemic, operations researcher Kimon Drakopoulos e-mailed both the Greek prime minister and the head of the country’s COVID-19 scientific task force to ask if they needed any extra advice.

Drakopoulos works in data science at the University of Southern California in Los Angeles, and is originally from Greece. To his surprise, he received a reply from Prime Minister Kyriakos Mitsotakis within hours. The European Union was asking member states, many of which had implemented widespread lockdowns in March, to allow non-essential travel to recommence from July 2020, and the Greek government needed help in deciding when and how to reopen borders.

Greece, like many other countries, lacked the capacity to test all travellers, particularly those not displaying symptoms. One option was to test a sample of visitors, but Greece opted to trial an approach rooted in artificial intelligence (AI).

Between August and November 2020 — with input from Drakopoulos and his colleagues — the authorities launched a system that uses a machine-learning algorithm to determine which travellers entering the country should be tested for COVID-19. The researchers found machine learning to be more effective at identifying asymptomatic people than was random testing or testing based on a traveller’s country of origin. According to their analysis, during the peak tourist season, the system detected two to four times more infected travellers than did random testing.

The machine-learning system, which is among the first of its kind, is called Eva and is described in Nature this week (H. Bastani et al. Nature; 2021). It’s an example of how data analysis can contribute to effective COVID-19 policies. But it also presents challenges, from ensuring that individuals’ privacy is protected to the need to independently verify its accuracy. Moreover, Eva is a reminder of why proposals for a pandemic treaty (see Nature 594, 8; 2021) must consider rules and protocols on the proper use of AI and big data. These need to be drawn up in advance so that such analyses can be used quickly and safely in an emergency.

In many countries, travellers are chosen for COVID-19 testing at random or according to risk categories. For example, a person coming from a region with a high rate of infections might be prioritized for testing over someone travelling from a region with a lower rate.

By contrast, Eva collected not only travel history, but also demographic data such as age and sex from the passenger information forms required for entry to Greece. It then matched those characteristics with data from previously tested passengers and used the results to estimate an individual’s risk of infection. COVID-19 tests were targeted to travellers calculated to be at highest risk. The algorithm also issued tests to allow it to fill data gaps, ensuring that it remained up to date as the situation unfolded.
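The targeting-plus-exploration logic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the published Eva algorithm: the traveller “types”, the smoothed risk estimate, and the explore/exploit split are all assumptions made for the example.

```python
from collections import defaultdict

class TargetedTester:
    """Illustrative sketch of risk-targeted testing with exploration.

    A traveller 'type' stands in for the characteristics on the entry form
    (e.g. a (country, age band, sex) tuple). Names and logic are invented
    for illustration, not taken from the Eva paper.
    """

    def __init__(self, explore_fraction=0.2):
        self.tests = defaultdict(int)      # tests given per traveller type
        self.positives = defaultdict(int)  # positives found per type
        self.explore_fraction = explore_fraction

    def estimated_risk(self, ttype):
        # Smoothed positivity estimate; the prior (1 positive per 20 tests)
        # keeps never-tested types from being scored as zero risk.
        return (self.positives[ttype] + 1) / (self.tests[ttype] + 20)

    def allocate(self, arrivals, budget):
        """Choose which arriving travellers to test given a test budget."""
        n_explore = int(budget * self.explore_fraction)
        # Exploit: test the travellers with the highest estimated risk.
        by_risk = sorted(arrivals, key=self.estimated_risk, reverse=True)
        chosen = by_risk[: budget - n_explore]
        # Explore: spend the rest on the least-tested types, filling data
        # gaps so estimates stay current as the situation unfolds.
        remaining = by_risk[budget - n_explore :]
        remaining.sort(key=lambda t: self.tests[t])
        return chosen + remaining[:n_explore]

    def record(self, ttype, positive):
        self.tests[ttype] += 1
        self.positives[ttype] += int(positive)
```

The `explore_fraction` reserves part of the testing budget for under-sampled traveller types; that is the mechanism, described above, by which the system keeps its estimates up to date rather than only chasing currently high-risk profiles.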

During the pandemic, there has been no shortage of ideas on how to deploy big data and AI to improve public health or assess the pandemic’s economic impact. However, relatively few of these ideas have made it into practice. This is partly because companies and governments that hold relevant data — such as mobile-phone records or details of financial transactions — need agreed systems to be in place before they can share the data with researchers. It’s also not clear how consent can be obtained to use such personal data, or how to ensure that these data are stored safely and securely…(More)”.

Psychology and Behavioral Economics: Applications for Public Policy

Book edited by Kai Ruggeri: “…offers an expert introduction to how psychology can be applied to a range of public policy areas. It examines the impact of psychological research for public policymaking in economic, financial, and consumer sectors; in education, healthcare, and the workplace; for energy and the environment; and in communications.

Your energy bills show you how much you use compared to the average household in your area. Your doctor sends you a text message reminder when your appointment is coming up. Your bank gives you three choices for how much to pay off on your credit card each month. Wherever you look, there has been a rapid increase in the importance we place on understanding real human behaviors in everyday decisions, and these behavioral insights are now regularly used to influence everything from how companies recruit employees through to large-scale public policy and government regulation. But what is the actual evidence behind these tactics, and how did psychology become such a major player in economics? Answering these questions and more, this team of authors, working across both academia and government, present this fully revised and updated reworking of Behavioral Insights for Public Policy.

This update covers everything from how policy was historically developed, to major research in human behavior and social psychology, to key moments that brought behavioral sciences to the forefront of public policy. Featuring over 100 empirical examples of how behavioral insights are being used to address some of the most critical challenges faced globally, the book covers key topics such as evidence-based policy, a brief history of behavioral and decision sciences, behavioral economics, and policy evaluation, all illustrated throughout with lively case studies.

Including end-of-chapter questions, a glossary, and key concept boxes to aid retention, as well as a new chapter revealing the work of the Canadian government’s behavioral insights unit, this is the perfect textbook for students of psychology, economics, public health, education, and organizational sciences, as well as public policy professionals looking for fresh insight into the underlying theory and practical applications in a range of public policy areas….(More)”.

The Rise of the Pandemic Dashboard

Article by Marie Patino: “…All of these dashboards were launched very early in the pandemic,” said Damir Ivankovic, a PhD student at the University of Amsterdam. “Some of them were developed literally overnight, or over three sleepless nights in certain countries.” With PhD researcher Erica Barbazza, Ivankovic has been leading a set of studies on Covid-19 dashboards with a network of researchers. For an as-yet-unpublished paper, the pair have talked to more than 30 government dashboard teams across Europe and Asia to better understand their dynamics and the political decisions at stake in their creation.

The dashboard craze can be traced back to Jan. 22, 2020, when graduate student Ensheng Dong and Lauren Gardner, co-director of Johns Hopkins University’s Center for Systems Science and Engineering, launched the JHU interactive Covid dashboard. It would quickly achieve international fame, and screenshots of it started popping up in newspapers and on TV. The dashboard now racks up billions of daily hits. Soon after, ESRI, the cartography software company whose tools were used to build it, spun off a variety of Covid resources and example dashboards, easy to customize and publish for those with a license. ESRI has provided about 5,000 organizations with a free license since the beginning of Covid.

That’s generated unprecedented traffic: the most-viewed public dashboards made using ESRI are all Covid-related, according to the company. The Johns Hopkins dashboard is number one. It made its data feed available for free, and multiple other dashboards built by governments and even news outlets, including Bloomberg, now rely on Johns Hopkins to update their numbers.

Public Health England’s dashboard is designed and hand-coded from scratch. But because of the pandemic’s urgency, many government agencies that lacked expertise in data analysis and visualization turned to off-the-shelf business analytics software to build their dashboards. These include ESRI’s tools, as well as Tableau and Microsoft Power BI.

The pros? They provide ready-to-use templates and modules, don’t necessitate programming knowledge, are fast and easy to publish and provide users with a technical lifeline. The cons? They don’t enable design, can look clunky and cluttered, provide little wiggle room in terms of explaining the data and are rarely mobile-friendly. Also, many don’t provide multi-language support or accessibility features, and some don’t enable users to access the raw data that powers the tool. 

Dashboards everywhere
A compilation of government dashboards….(More)”.

Old Dog, New Tricks: Retraining and the Road to Government Reform

Essay by Beth Noveck: “…To be sure, one strategy for modernizing government is hiring new people with fresh skills in the fields of technology, data science, design, and marketing. Today, only 6 percent of the federal workforce is under 30 and, if age is any proxy for mastery of these in-demand new skills, then efforts by non-profits such as the Partnership for Public Service and the Tech Talent Project to attract a younger generation to work in the public sector are crucial. But we will not reinvent government fast enough through hiring alone.

The crucial and overlooked mechanism for improving government effectiveness is, therefore, to change how people work by training public servants across departments to use data and collective intelligence at each stage of the problem-solving process to foster more informed decision-making, more innovative solutions to problems, and more agile implementation of what works. All around the world we have witnessed how, when public servants work differently, government solves problems better.

Jonathan Wachtel, the lone city planner in Lakewood, Colorado, a suburb of Denver, has been able to undertake 500 sustainability projects because he knows how to collaborate and codesign with a network of 20,000 residents. When former Mayor of New Orleans Mitch Landrieu launched an initiative to use data and resident engagement to address the city’s abysmal murder rate, the effort led to a 25 percent reduction in homicides in two years and a further decline, by 2019, to the city’s lowest levels in 50 years. Because Samir Brahmachari, former secretary of India’s Department of Scientific and Industrial Research, turned to crowdsourcing and engaged 7,900 contributors, he was able to identify six already-approved drugs that showed promise in the fight against tuberculosis….(More)”.

Where Is Everyone? The Importance of Population Density Data

Data Artefact Study by Aditi Ramesh, Stefaan Verhulst, Andrew Young and Andrew Zahuranec: “In this paper, we explore new and traditional approaches to measuring population density, and ways in which density information has frequently been used by humanitarian, private-sector and government actors to advance a range of private and public goals. We explain how new innovations are leading to fresh ways of collecting data—and fresh forms of data—and how this may open up new avenues for using density information in a variety of contexts. Section III examines one particular example: Facebook’s High-Resolution Population Density Maps (also referred to as HRSL, or high resolution settlement layer). This recent initiative, created in collaboration with a number of external organizations, shows not only the potential of mapping innovations but also the potential benefits of inter-sectoral partnerships and sharing. We examine three particular use cases of HRSL, and we follow with an assessment and some lessons learned. These lessons are applicable to HRSL in particular, but also more broadly. We conclude with some thoughts on avenues for future research….(More)”.

The search engine of 1896

The Generalist Academy: In 1896 Paul Otlet set up a bibliographic query service by mail: a 19th century search engine….The end of the 19th century was awash with the written word: books, monographs, and publications of all kinds. It was fiendishly difficult to find what you wanted in that mess. Bibliographies – compilations of references on a specific subject – were the maps to this vast informational territory. But they were expensive and time-consuming to compile.

Paul Otlet had a passion for information. More precisely, he had a passion for organising information. He and Henri La Fontaine made bibliographies on many subjects – and then turned their efforts towards creating something better. A master bibliography. A bibliography to rule them all, nothing less than a complete record of everything that had ever been published on every topic. This was their plan: the grandly named Universal Bibliographic Repertory.

This ambitious endeavour listed sources for every topic that its creators could imagine. The references were meticulously recorded on index cards that were filed in a massive series of drawers like the ones pictured above. The whole thing was arranged according to their Universal Decimal Classification, and it was enormous. In 1895 there were four hundred thousand entries. At its peak in 1934, there were nearly sixteen million.

How could you access such a mega-bibliography? Well, Otlet and La Fontaine set up a mail service. People sent in queries and received a summary of publications relating to that topic. Curious about the native religions of Sumatra? Want to explore the 19th century decipherment of Akkadian cuneiform? Send a request to the Universal Bibliographic Repertory, get a tidy list of the references you need. It was nothing less than a manual search engine, one hundred and twenty-five years ago.

Image: Encyclopedia Universalis (Paul Otlet, public domain, via Wikimedia Commons)

Otlet had many more ambitions: a world encyclopaedia of knowledge, contraptions to easily access every publication in the world (he was an early microfiche pioneer), and a whole city to serve as the bright centre of global intellect. These ambitions were mostly unrealised, due to lack of funds and the intervention of war. But today Otlet is recognised as an important figure in the history of information science…(More)”.

Gathering Strength, Gathering Storms

The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report: “In the five years since we released the first AI100 report, much has been written about the state of artificial intelligence and its influences on society. Nonetheless, AI100 remains unique in its combination of two key features. First, it is written by a Study Panel of core multi-disciplinary researchers in the field—experts who create artificial intelligence algorithms or study their influence on society as their main professional activity, and who have been doing so for many years. The authors are firmly rooted within the field of AI and provide an “insider’s” perspective. Second, it is a longitudinal study, with reports by such Study Panels planned once every five years, for at least one hundred years.

This report, the second in that planned series of studies, is being released five years after the first report. Published on September 1, 2016, the first report was covered widely in the popular press and is known to have influenced discussions on governmental advisory boards and workshops in multiple countries. It has also been used in a variety of artificial intelligence curricula.

In preparation for the second Study Panel, the Standing Committee commissioned two study-workshops held in 2019. These workshops were a response to feedback on the first AI100 report. Through them, the Standing Committee aimed to engage a broader, multidisciplinary community of scholars and stakeholders in its next study. The goal of the workshops was to draw on the expertise of computer scientists and engineers, scholars in the social sciences and humanities (including anthropologists, economists, historians, media scholars, philosophers, psychologists, and sociologists), law and public policy experts, and representatives from business management as well as the private and public sectors…(More)”.

Are citizen juries and assemblies on climate change driving democratic climate policymaking? An exploration of two case studies in the UK

Paper by Rebecca Wells, Candice Howarth & Lina I. Brand-Correa: “In light of increasing pressure to deliver climate action targets and the growing role of citizens in raising the importance of the issue, deliberative democratic processes (e.g. citizen juries and citizen assemblies) on climate change are increasingly being used to give citizens a voice in climate change decision-making. Through a comparative case study of two processes that ran in the UK in 2019 (the Leeds Climate Change Citizens’ Jury and the Oxford Citizens’ Assembly on Climate Change), this paper investigates how far citizen assemblies and juries are increasing citizen engagement on climate change and creating more citizen-centred climate policymaking. Interviews were conducted with policymakers, councillors, professional facilitators and others involved in running these processes to assess the motivations for conducting them, their structure, and their impact and influence. The findings suggest the impact of these processes is not uniform: they have an indirect impact on policymaking, by creating momentum around climate action and supporting the introduction of pre-planned or pre-existing policies, rather than a direct impact as truly citizen-centred processes conducive to new climate policy. We conclude with reflections on how these processes give elected representatives a public mandate on climate change and help to identify more nuanced and in-depth public opinions in a fair and informed way, yet how challenging it can be to embed citizen juries and assemblies in wider democratic processes….(More)”.

Expertise, ‘Publics’ and the Construction of Government Policy

Introduction to Special Issue of Discover Society about the role of expertise and professional knowledge in democracy by John Holmwood: “In the UK, the vexed nature of the issue was, perhaps, best illustrated by (then Justice Secretary) Michael Gove’s comment during the Brexit campaign that he thought, “the people of this country have had enough of experts.” The comment is oft cited, and derided, especially in the context of the Covid-19 pandemic, where the public has, or so it is argued, found a new respect for a science that can guide public policy and deliver solutions.

Yet, Michael Gove’s point was more nuanced than is usually credited. It wasn’t scientific advice that he claimed people were fed up with, but “experts with organisations with acronyms saying that they know what is best and getting it consistently wrong.” In other words, his complaint was about specific organised advocacy groups and their intervention in public debate and reporting in the media.

Michael Gove’s extended comment was disingenuous. After all, the Brexit campaign, no less than the Remain campaign, drew upon arguments from think tanks and lobby groups. Moreover, since the referendum, the Government has consistently mobilised the claimed expert opinion of organisations in justification of their policies. Indeed, as Layla Aitlhadj and John Holmwood in this special issue argue, they have deliberately ‘managed’ civil society groups and supposedly independent reviews, such as that currently underway into the Prevent counter extremism policy.

In fact, there is nothing straightforward about the relationship between expertise and democracy as Stephen Turner (2003) has observed. The development of liberal democracy involves the rise of professional and expert knowledge which underpins the everyday governance of public institutions. At the same time, wider publics are asked to trust that knowledge even where it impinges directly upon their preferences; they are not in a position to evaluate it, except through the mediation of other experts. Elected politicians and governments, in turn, are dependent on expert knowledge to guide their policy choices, which are duly constrained by what is possible on the basis of technical judgements….(More)”

Designing geospatial data portals

Guidance by The Geospatial Commission: “…for developers and designers to increase the discoverability and usefulness of geospatial data through user-focused data portals….Data portals differ by the data they provide and the audiences they serve. ‘Data portals’ described within this guidance are web-based interfaces designed to help users find and access datasets. Optimally, they should be built around metadata records that describe datasets, point to where the data can be located, and explain any restrictions or limitations on their use.
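As a hypothetical illustration of how such metadata records support discovery, the sketch below pairs a small catalogue with a simple keyword search. The field names are loosely modelled on the W3C DCAT vocabulary, not on any Geospatial Commission schema, and the dataset entry and URL are invented for the example.

```python
# Illustrative metadata records: each describes a dataset, points to where
# it can be accessed, and states its licence and limitations. Field names
# and the sample entry are assumptions for this sketch, not a real schema.
CATALOGUE = [
    {
        "title": "OS Open Rivers",
        "description": "Generalised network of watercourses in Great Britain.",
        "keywords": ["hydrology", "rivers", "water"],
        "spatial_coverage": "Great Britain",
        "access_url": "https://example.org/datasets/os-open-rivers",
        "licence": "Open Government Licence v3.0",
        "limitations": "Generalised geometry; not suitable for navigation.",
    },
]

def find_datasets(catalogue, query):
    """Return records whose title, description or keywords mention the query."""
    q = query.lower()
    return [
        rec
        for rec in catalogue
        if q in rec["title"].lower()
        or q in rec["description"].lower()
        or any(q in kw for kw in rec["keywords"])
    ]
```

Because discovery runs over the metadata rather than the (often very large) geospatial data itself, a portal built this way can answer “where to go” and “which datasets are relevant” without users ever downloading the wrong file.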

Although more and more geospatial data is being made available online, there are users who are confused about where to go, who to trust and which datasets are most relevant to answering their questions.

In 2018 user researchers and designers across the Geo6 came together to explore the needs and frustrations experienced by users of data portals containing geospatial data.

Throughout 2019 and 2020 the Geo6 have worked on solutions to address pain points identified by the user research conducted for the Data Discoverability project. This guidance provides high-level general recommendations; however, the exact requirements for any given portal will vary depending on the needs of your target audience and according to the data volumes and subject matters covered. This resource is not a replacement for portal-specific user research and design work…(More)”.