Stefaan Verhulst
Andrea Saltelli et al at Nature: “The COVID-19 pandemic illustrates perfectly how the operation of science changes when questions of urgency, stakes, values and uncertainty collide — in the ‘post-normal’ regime.
Well before the coronavirus pandemic, statisticians were debating how to prevent malpractice such as p-hacking, particularly when it could influence policy1. Now, computer modelling is in the limelight, with politicians presenting their policies as dictated by ‘science’2. Yet there is no substantial aspect of this pandemic for which any researcher can currently provide precise, reliable numbers. Known unknowns include the prevalence and fatality and reproduction rates of the virus in populations. There are few estimates of the number of asymptomatic infections, and they are highly variable. We know even less about the seasonality of infections and how immunity works, not to mention the impact of social-distancing interventions in diverse, complex societies.
Mathematical models produce highly uncertain numbers that predict future infections, hospitalizations and deaths under various scenarios. Rather than using models to inform their understanding, political rivals often brandish them to support predetermined agendas. To make sure predictions do not become adjuncts to a political cause, modellers, decision makers and citizens need to establish new social norms. Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing2,3.
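The point about model uncertainty can be made concrete with a toy example. The sketch below is not from the Nature piece: it runs a minimal discrete-time SIR model (a standard textbook construction) under a range of plausible reproduction numbers, and shows how widely the projected epidemic peak swings on a single uncertain input. All parameter values are illustrative assumptions.

```python
import random

def sir_peak(r0, days=365, gamma=0.1, n=1_000_000, i0=100):
    """Discrete-time SIR model; returns the peak number simultaneously infected."""
    beta = r0 * gamma                  # transmission rate implied by R0
    s, i, peak = n - i0, i0, i0
    for _ in range(days):
        new_inf = beta * s * i / n     # new infections this step
        new_rec = gamma * i            # new recoveries this step
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

random.seed(0)
# Sample R0 from a wide-but-plausible range and watch the projection swing.
peaks = [sir_peak(random.uniform(1.5, 3.5)) for _ in range(1000)]
print(f"projected peak: {min(peaks):,.0f} to {max(peaks):,.0f} infected")
```

Even this crude model spans a several-fold range in its headline number from one uncertain input; real epidemic models have dozens, which is why single-point predictions deserve scepticism.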
This is important because, when used appropriately, models serve society extremely well: perhaps the best known are those used in weather forecasting. These models have been honed by testing millions of forecasts against reality. So, too, have ways to communicate results to diverse users, from the Digital Marine Weather Dissemination System for ocean-going vessels to the hourly forecasts accumulated by weather.com. Picnickers, airline executives and fishers alike understand both that the modelling outputs are fundamentally uncertain, and how to factor the predictions into decisions.
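That testing-against-reality loop has a standard quantitative form. The snippet below (an illustrative sketch with invented numbers, not tied to any particular forecasting system) computes the Brier score, a common way to grade probabilistic forecasts such as "chance of rain" against what actually happened.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and observed outcomes
    (0 = perfect; 0.25 = no better than always saying 50%)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical "chance of rain" forecasts vs. observed rain (1) / no rain (0).
probs    = [0.9, 0.8, 0.1, 0.3, 0.7, 0.2]
observed = [1,   1,   0,   0,   1,   0]
print(brier_score(probs, observed))      # a sharp, well-calibrated forecaster
print(brier_score([0.5] * 6, observed))  # an uninformative 50:50 baseline
```

Scoring millions of such forecasts against outcomes, day after day, is how weather models earned the trust the manifesto describes.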
Here we present a manifesto for best practices for responsible mathematical modelling. Many groups before us have described the best ways to apply modelling insights to policies, including for diseases4 (see also Supplementary information). We distil five simple principles to help society demand the quality it needs from modelling….(More)”.
United Nations: “As structural UN reforms consolidate, we are focused on building the data, digital, technology and innovation capabilities that the UN needs to succeed in the 21st century. The Secretary General’s “Data Strategy for Action by Everyone, Everywhere” is our agenda for the data-driven transformation.
Data permeates all aspects of our work, and its power—harnessed responsibly—is critical to the global agendas we serve. The UN family’s footprint, expertise and connectedness create unique opportunities to advance global “data action” with insight, impact and integrity. To help unlock more potential, 50 UN entities jointly designed this Strategy as a comprehensive playbook for data-driven change based on global best practice…
Our strategy pursues a simple idea: we focus not on process, but on learning, iteratively, to deliver data use cases that add value for stakeholders based on our vision, outcomes and principles. Use cases – purposes for which data is used – already permeate our organization. We will systematically identify and deliver them through dedicated data action portfolios. While new capabilities will in part emerge through “learning by doing”, we will also strengthen organizational enablers to deliver on our vision, including shifts in people and culture, partnerships, data governance and technology….(More)”.

The Economist: “In 1993 this newspaper told the world to watch the skies. At the time, humanity’s knowledge of asteroids that might hit the Earth was woefully inadequate. Like nuclear wars and large volcanic eruptions, the impacts of large asteroids can knock seven bells out of the climate; if one thereby devastated a few years’ worth of harvests around the globe it would kill an appreciable fraction of the population. Such an eventuality was admittedly highly unlikely. But given the consequences, it made actuarial sense to see if any impact was on the cards, and at the time no one was troubling themselves to look.
Asteroid strikes were an extreme example of the world’s wilful ignorance, perhaps—but not an atypical one. Low-probability, high-impact events are a fact of life. Individual humans look for protection from them to governments and, if they can afford it, insurers. Humanity, at least as represented by the world’s governments, reveals instead a preference to ignore them until forced to react—even when foresight’s price-tag is small. It is an abdication of responsibility and a betrayal of the future.
Covid-19 offers a tragic example. Virologists, epidemiologists and ecologists have warned for decades of the dangers of a flu-like disease spilling over from wild animals. But when SARS-CoV-2 began to spread very few countries had the winning combination of practical plans, the kit those plans required in place and the bureaucratic capacity to enact them. Those that did benefited greatly. Taiwan has, to date, seen just seven Covid-19 deaths; its economy has suffered correspondingly less.
Pandemics are disasters that governments have experience of. What therefore of truly novel threats? The blazing-hot corona which envelops the Sun—seen to spectacular effect during solar eclipses—intermittently throws vast sheets of charged particles out into space. These cause the Northern and Southern Lights and can mess up electric grids and communications. But over the century or so in which electricity has become crucial to much of human life, the Earth has never been hit by the largest of these solar eructations. If a coronal mass ejection (CME) were to hit, all sorts of satellite systems needed for navigation, communications and warnings of missile attacks would be at risk. Large parts of the planet could face months or even years without reliable grid electricity (see Briefing). The chances of such a disaster this century are put by some at better than 50:50. Even if they are not that high, they are still higher than the chances of a national leader knowing who in their government is charged with thinking about such things.
The fact that no governments have ever seen a really big cme, or a volcanic eruption large enough to affect harvests around the world—the most recent was Tambora, in 1815—may explain their lack of forethought. It does not excuse it. Keeping an eye on the future is part of what governments are for. Scientists have provided them with the tools for such efforts, but few academics will undertake the work unbidden, unfunded and unsung. Private business may take some steps when it perceives specific risks, but it will not put together plans for society at large….(More)”.
Report for the European Parliament: “A vast range of AI applications are being implemented by European industry, which can be broadly grouped into two categories: i) applications that enhance the performance and efficiency of processes through mechanisms such as intelligent monitoring, optimisation and control; and ii) applications that enhance human-machine collaboration.
At present, such applications are being implemented across a broad range of European industrial sectors. However, some sectors (e.g. automotive, telecommunications, healthcare) are more advanced in AI deployment than others (e.g. paper and pulp, pumps, chemicals). The types of AI applications implemented also differ across industries. In less digitally mature sectors, clear barriers to adoption have been identified, including both internal (e.g. cultural resistance, lack of skills, financial considerations) and external (e.g. lack of venture capital) barriers. For the most part, and especially for SMEs, barriers to the adoption of AI are similar to those hindering digitalisation.

The adoption of such AI applications is anticipated to deliver a wide range of positive impacts for individual firms, across value chains, and at the societal and macroeconomic levels. AI applications can bring efficiency, environmental and economic benefits related to increased production output and quality, reduced maintenance costs, improved energy efficiency, better use of raw materials and reduced waste. In addition, AI applications can add value through product personalisation, improve customer service and contribute to the development of new product classes, business models and even sectors. Workforce benefits (e.g. improved workplace safety) are also being delivered by AI applications.
Alongside these firm-level benefits and opportunities, significant positive societal and economy-wide impacts are envisaged. More specifically, substantial increases in productivity, innovation, growth and job creation have been forecast. For example, one estimate anticipates labour productivity increases of 11-37% by 2035. In addition, AI is expected to contribute positively to the UN Sustainable Development Goals, and the capabilities of AI and machine learning to address major health challenges, such as the current COVID-19 pandemic, are also noteworthy. For instance, AI systems have the potential to shorten the lead times for the development of vaccines and drugs.
However, AI adoption brings a range of challenges…(More)”.
Article by Jon Simonsson, Chair of the Committee for Technological Innovation and Ethics (Komet) in Sweden: “People have said that in the present – the fourth industrial revolution – everything is possible. The ingredients are there – 5G, IoT, AI, drones and self-driving vehicles, as well as advanced knowledge about diagnosis and medication – and they are all rapidly evolving. Only the innovator sets the limits on how these ingredients can be mixed and baked together.
And right now, when the threat of the coronavirus has almost shock-digitized both business and the public sector, interest in new technology solutions has skyrocketed. Working remotely, moving things without human presence, or – most importantly – virus vaccines and medical treatment methods have all become self-evident areas for intensified research and experimentation. But the laws and regulations surrounding these areas were often created for a completely different setting.
Rules are good, and there are usually very good reasons why an area is regulated. Some rules are intended to safeguard democratic rights or individual rights to privacy; others to steer developments in a certain direction. Rules are required, especially now, when not only the development of technology but also its uptake in society is accelerating. Yet developing laws and regulations takes time, and the process is not keeping pace with the rapid development of technology. This creates risks in society – for example, risks related to the individual’s right to privacy, the economy or the environment. At the same time, gaps in regulation may be revealed, gaps that could allow the introduction of new and perhaps undesirable solutions.

Would it be possible to find a middle ground, a more future-oriented way to work with regulation? With rules that are clear, future-proof and developed through legally sound methods, yet encourage and facilitate ethical and sustainable innovation?
Responsible development and use of new technology
The Government wants Sweden to be a leader in the responsible development and use of new technologies. The Swedish Committee for Technological Innovation and Ethics (Komet) works with policy development to create good conditions for innovation and competitiveness, while ensuring that the development and dissemination of new technology is safe and secure. The Committee helps the Swedish government proactively address the improvements technology could create for citizens, business and society, but also highlights the conflicting goals that may arise.
This includes raising ethical issues related to rapid technological development. When almost everything is possible, we need to place particularly high demands on our compass: how do we responsibly navigate the technology landscape? Not least during the corona pandemic, when we have seen ethical boundaries shift around the use of surveillance technology.
An important objective of Komet’s work is to instil courage in the public sector. Although innovators often come from the private sector, at the end of the day it is the public sector that must enable, be willing and dare to meet the demands of both business and society. It is the public sector’s role to ensure that the proper regulations are on the table: balanced, future-oriented regulation that will be required to rapidly create a sustainable world….(More)”.
Book by David Stasavage: “Historical accounts of democracy’s rise tend to focus on ancient Greece and pre-Renaissance Europe. The Decline and Rise of Democracy draws from global evidence to show that the story is much richer—democratic practices were present in many places, at many other times, from the Americas before European conquest, to ancient Mesopotamia, to precolonial Africa. Delving into the prevalence of early democracy throughout the world, David Stasavage makes the case that understanding how and where these democracies flourished—and when and why they declined—can provide crucial information not just about the history of governance, but also about the ways modern democracies work and where they could manifest in the future.
Drawing from examples spanning several millennia, Stasavage first considers why states developed either democratic or autocratic styles of governance and argues that early democracy tended to develop in small places with a weak state and, counterintuitively, simple technologies. When central state institutions (such as a tax bureaucracy) were absent—as in medieval Europe—rulers needed consent from their populace to govern. When central institutions were strong—as in China or the Middle East—consent was less necessary and autocracy more likely. He then explores the transition from early to modern democracy, which first took shape in England and then the United States, illustrating that modern democracy arose as an effort to combine popular control with a strong state over a large territory. Democracy has been an experiment that has unfolded over time and across the world—and its transformation is ongoing.
Amidst rising democratic anxieties, The Decline and Rise of Democracy widens the historical lens on the growth of political institutions and offers surprising lessons for all who care about governance….(More)”.
Article by Simine Vazire: “THE RUSH FOR scientific cures and treatments for Covid-19 has opened the floodgates of direct communication between scientists and the public. Instead of waiting for their work to go through the slow process of peer review at scientific journals, scientists are now often going straight to print themselves, posting write-ups of their work to public servers as soon as they’re complete. This disregard for the traditional gatekeepers has led to grave concerns among both scientists and commentators: Might not shoddy science—and dangerous scientific errors—make its way into the media, and spread before an author’s fellow experts can correct it? As two journalism professors suggested in an op-ed last month for The New York Times, it’s possible the recent spread of so-called preprints has only “sown confusion and discord with a general public not accustomed to the high level of uncertainty inherent in science.”
There’s another way to think about this development, however. Instead of showing (once again) that formal peer review is vital for good science, the last few months could just as well suggest the opposite. To me, at least—someone who’s served as an editor at seven different journals, and editor in chief at two—the recent spate of decisions to bypass traditional peer review gives the lie to a pair of myths that researchers have encouraged the public to believe for years: First, that peer-reviewed journals publish only trustworthy science; and second, that trustworthy science is published only in peer-reviewed journals.
Scientists allowed these myths to spread because it was convenient for us. Peer-reviewed journals came into existence largely to keep government regulators off our backs. Scientists believe that we are the best judges of the validity of each other’s work. That’s very likely true, but it’s a huge leap from that to “peer-reviewed journals publish only good science.” The most selective journals still allow flawed studies—even really terribly flawed ones—to be published all the time. Earlier this month, for instance, the journal Proceedings of the National Academy of Sciences put out a paper claiming that mandated face coverings are “the determinant in shaping the trends of the pandemic.” PNAS is a very prestigious journal, and their website claims that they are an “authoritative source” that works “to publish only the highest quality scientific research.” However, this paper was quickly and thoroughly criticized on social media; by last Thursday, 45 researchers had signed a letter formally calling for its retraction.
Now the jig is up. Scientists are writing papers that they want to share as quickly as possible, without waiting the months or sometimes years it takes to go through journal peer review. So they’re ditching the pretense that journals are a sure-fire quality control filter, and sharing their papers as self-published PDFs. This might be just the shakeup that peer review needs….(More)”.
Federica Carugati at Wired: “…A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like.
We don’t know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules in order to optimize for a given outcome. Public policy is a matter of trade-offs: optimizing for some groups in society necessarily makes others worse off.
Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: Citizens should have a say. We don’t know how to regulate algorithms, because we have become shockingly bad at citizen governance.
Is citizen governance feasible today? Sure, it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to make decisions on very tough policy issues, including climate change, and even to shape constitutions. Finally, we can draw from the past for inspiration on how to actually build citizen-run institutions.
The ancient Athenians—the citizens of the world’s first large-scale experiment in democracy—built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve. Selection occurred among those who had not served the year before and had not already served twice.
These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because terms were limited and no one could serve more than twice, over time a broad section of the population—rich and poor, educated and not—participated in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw upon the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with their families, friends, and coworkers, some of whom had already served and some of whom soon would. Certainly, the Athenians did not follow through on their commitment to inclusion. As a result, many people’s voices went unheard, including those of women, foreigners, and slaves. But we don’t need to follow the Athenian example on this front.
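The selection rules described above translate almost directly into code. A minimal sketch, with invented data and simplified eligibility (the real Athenian procedure had further requirements, such as a minimum age):

```python
import random

def select_council(citizens, last_year, service_counts, seats_per_tribe=50, seed=None):
    """Fill the council by lot: for each tribe, draw from citizens who did not
    serve last year and have served fewer than two terms already."""
    rng = random.Random(seed)
    by_tribe = {}
    for c in citizens:
        by_tribe.setdefault(c["tribe"], []).append(c)
    council = []
    for tribe, members in sorted(by_tribe.items()):
        eligible = [m for m in members
                    if m["name"] not in last_year
                    and service_counts.get(m["name"], 0) < 2]
        council.extend(rng.sample(eligible, seats_per_tribe))  # the lot
    return council

# Invented example: 200 citizens across 10 tribes, 5 seats per tribe.
citizens = [{"name": f"citizen{i}", "tribe": i % 10} for i in range(200)]
council = select_council(citizens, last_year={"citizen0"},
                         service_counts={"citizen1": 2},
                         seats_per_tribe=5, seed=1)
print(len(council))  # 10 tribes x 5 seats = 50
```

The eligibility filter is what spreads service across the population over time: each draw excludes recent and repeat councillors, so new citizens keep rotating in.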
A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although it is possible that, when decisions affect a specific constituency, a better fit with the actual polity might be required). Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, and internal auditing reports for industry, as well as reports from investigative journalists and civil society activists, whenever available. Ideally, the council would act as an authoritative body or as an advisory board to an existing regulatory agency….(More)”.
Book edited by Frank Ridzi, Chantal Stevens and Melanie Davern: “This book offers critical insights into the thriving international field of community indicators, incorporating the experiences of government leaders, philanthropic professionals, community planners and a wide range of academic disciplines. It illuminates the important role of community indicators in diverse settings and the rationale for the development and implementation of these innovative projects. This book details many of the practical “how to” aspects of the field as well as lessons learned from implementing indicators in practice.
The case studies included here also demonstrate how, using a variety of data applications, leaders of today are monitoring and measuring progress and communities are empowered to make sustainable improvements in their wellbeing. With examples related to the environment, economy, planning, community engagement and health, among others, this book epitomizes the constant innovation, collaborative partnerships and the consummate interdisciplinarity of the community indicators field of today….(More)”.
Report by the Select Committee on Democracy and Digital Technologies (UK Parliament): “Democracy faces a daunting new challenge. The age when electoral activity was conducted through traditional print media, canvassing and door-knocking is rapidly vanishing. Instead it is dominated by digital and social media, which are now the source from which voters get most of their information and political messaging.
The digital and social media landscape is dominated by two behemoths – Facebook and Google. They largely pass under the radar, operating outside the rules that govern electoral politics. This has become acutely obvious in the current COVID-19 pandemic, where online misinformation poses a real and present danger not only to our democracy but also to our lives. Governments have been dilatory in adjusting regulatory regimes to capture these new realities. The result is a crisis of trust.
Yet our profound belief is that this can change. Technology is not a force of nature. Online platforms are not inherently ungovernable. They can and should be bound by the same restraints that we apply to the rest of society. If this is done well, in the ways we spell out in this Report, technology can become a servant of democracy rather than its enemy. There is a need for Government leadership and regulatory capacity to match the scale and pace of challenges and opportunities that the online world presents.
The Government’s Online Harms programme presents a significant first step towards this goal. It needs to happen; it needs to happen fast; and the necessary draft legislation must be laid before Parliament for scrutiny without delay. The Government must not flinch in the face of the inevitable and powerful lobbying of Big Tech and others that benefit from the current situation.
Well-drafted Online Harms legislation can do much to protect our democracy. Issues such as misinformation and disinformation must be included in the Bill. The Government must make sure that online platforms bear ultimate responsibility for the content that their algorithms promote. Where harmful content spreads virally on their service or where it is posted by users with a large audience, they should face sanctions over their output as other broadcasters do.
Individual users need greater protection. They must have redress against large platforms through an ombudsman tasked with safeguarding the rights of citizens.
Transparency of online platforms is essential if democracy is to flourish. Platforms like Facebook and Google seek to hide behind ‘black box’ algorithms which choose what content users are shown. They take the position that they are not responsible for harms that may result from online activity. This is plain wrong. The decisions platforms make in designing and training these algorithmic systems shape the conversations that happen online. For this reason, we recommend that platforms be mandated to conduct audits to show how, in creating these algorithms, they have ensured, for example, that they are not discriminating against certain groups. Regulators must have the powers to oversee these decisions, with the right to acquire from platforms the information they need to exercise those powers….(More)”.
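The kind of audit the report calls for could start from very simple statistics. The sketch below uses invented data and is not a description of any regulator's actual method: it computes per-group selection rates from an audit log and the disparate-impact ratio sometimes used as a first screening test for discrimination.

```python
def selection_rates(decisions):
    """Per-group rate of favourable outcomes from (group, outcome) pairs."""
    totals, favourable = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + outcome
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Min rate / max rate across groups; the informal 'four-fifths rule'
    flags ratios below 0.8 for closer review."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit log: (group label, 1 = content promoted / approved).
log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
       ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
print(selection_rates(log))         # prints {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(log))  # 0.25 / 0.75, roughly 0.33: worth review
```

A real audit would go much further (confounders, sample sizes, the choice of fairness metric itself), but even a screening statistic like this only works if regulators can compel access to the underlying logs, which is exactly the power the report recommends.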