From Poisons to Antidotes: Algorithms as Democracy Boosters


Paper by Paolo Cavaliere and Graziella Romeo: “Under what conditions can artificial intelligence contribute to political processes without undermining their legitimacy? Thanks to the ever-growing availability of data and the increasing power of decision-making algorithms, the future of political institutions is unlikely to be anything similar to what we have known throughout the last century, possibly with Parliaments deprived of their traditional authority and public decision-making processes largely unaccountable. This paper discusses and challenges these concerns by suggesting a theoretical framework under which algorithmic decision-making is compatible with democracy and, most relevantly, can offer a viable solution to counter the rise of populist rhetoric in the governance arena. Such a framework is based on three pillars: a. understanding the civic issues that are subjected to automated decision-making; b. controlling the issues that are assigned to AI; and c. evaluating and challenging the outputs of algorithmic decision-making….(More)”.

What Works? Developing a global evidence base for public engagement


Report by Reema Patel and Stephen Yeo: “…the Wellcome Trust commissioned OTT Consulting to recommend the best approach for enabling public engagement communities to share and gather evidence on public engagement practice globally, and in particular to assess the suitability of an approach adapted from the UK ‘What Works Centres’. This report is the output from that commission. It draws from a desk-based literature review, workshops in India, Peru and the UK, and a series of stakeholder interviews with international organisations.

The key themes that emerged from stakeholder interviews and workshops were that, in order for evidence about public engagement to help inform and shape public engagement practice, and for public engagement to be used and deployed effectively, there has to be an approach that can: understand the audiences, broaden out how ‘evidence’ is understood and generated, think strategically about how evidence affects and informs practice, and understand the complexity of the system dynamics within which public engagement (and evidence about public engagement) operates….(More)”.

Trove of unique health data sets could help AI predict medical conditions earlier


Madhumita Murgia at the Financial Times: “…Ziad Obermeyer, a physician and machine learning scientist at the University of California, Berkeley, launched Nightingale Open Science last month — a treasure trove of unique medical data sets, each curated around an unsolved medical mystery that artificial intelligence could help to solve.

The data sets, released after the project received $2m of funding from former Google chief executive Eric Schmidt, could help to train computer algorithms to predict medical conditions earlier, triage better and save lives.

The data include 40 terabytes of medical imagery, such as X-rays, electrocardiogram waveforms and pathology specimens, from patients with a range of conditions, including high-risk breast cancer, sudden cardiac arrest, fractures and Covid-19. Each image is labelled with the patient’s medical outcomes, such as the stage of breast cancer and whether it resulted in death, or whether a Covid patient needed a ventilator.

Obermeyer has made the data sets free to use and mainly worked with hospitals in the US and Taiwan to build them over two years. He plans to expand this to Kenya and Lebanon in the coming months to reflect as much medical diversity as possible.

“Nothing exists like it,” said Obermeyer, who announced the new project in December alongside colleagues at NeurIPS, the global academic conference for artificial intelligence. “What sets this apart from anything available online is the data sets are labelled with the ‘ground truth’, which means with what really happened to a patient and not just a doctor’s opinion.”…
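
As a purely illustrative aside, the “ground truth” labelling described here can be pictured as records that pair an image with the patient’s recorded outcomes. The sketch below is an assumption-laden mock-up in Python, not the actual Nightingale schema; every field name and file path is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class LabelledImage:
    """One hypothetical record: an image tied to what actually happened to the patient."""
    image_path: str     # e.g. an X-ray, ECG waveform, or pathology slide file (invented path)
    modality: str       # "xray", "ecg", "pathology", ...
    condition: str      # e.g. "breast_cancer", "cardiac_arrest", "covid19"
    ground_truth: dict  # recorded outcomes, not a clinician's opinion

records = [
    LabelledImage("imgs/patient_0001_slide.png", "pathology", "breast_cancer",
                  {"stage": "III", "died": True}),
    LabelledImage("imgs/patient_0002_cxr.png", "xray", "covid19",
                  {"needed_ventilator": False, "died": False}),
]

# A model trained on such records learns to predict recorded outcomes
# rather than reproducing a doctor's judgment.
for r in records:
    print(r.condition, r.ground_truth)
```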

The Nightingale data sets were among dozens proposed this year at NeurIPS.

Other projects included a speech data set of Mandarin and eight subdialects recorded by 27,000 speakers in 34 cities in China; the largest audio data set of Covid respiratory sounds, such as breathing, coughing and voice recordings, from more than 36,000 participants to help screen for the disease; and a data set of satellite images covering the entire country of South Africa from 2006 to 2017, divided and labelled by neighbourhood, to study the social effects of spatial apartheid.

Elaine Nsoesie, a computational epidemiologist at the Boston University School of Public Health, said new types of data could also help with studying the spread of diseases in diverse locations, as people from different cultures react differently to illnesses.

She said her grandmother in Cameroon, for example, might think differently than Americans do about health. “If someone had an influenza-like illness in Cameroon, they may be looking for traditional, herbal treatments or home remedies, compared to drugs or different home remedies in the US.”

Computer scientists Serena Yeung and Joaquin Vanschoren, who proposed that research to build new data sets should be exchanged at NeurIPS, pointed out that the vast majority of the AI community still cannot find good data sets to evaluate their algorithms. This meant that AI researchers were still turning to data that were potentially “plagued with bias”, they said. “There are no good models without good data.”…(More)”.

Cities and the Climate-Data Gap


Article by Robert Muggah and Carlo Ratti: “With cities facing disastrous climate stresses and shocks in the coming years, one would think they would be rushing to implement mitigation and adaptation strategies. Yet most urban residents are only dimly aware of the risks, because their cities’ mayors, managers, and councils are not collecting or analyzing the right kinds of information.

With more governments adopting strategies to reduce greenhouse-gas (GHG) emissions, cities everywhere need to get better at collecting and interpreting climate data. More than 11,000 cities have already signed up to a global covenant to tackle climate change and manage the transition to clean energy, and many aim to achieve net-zero emissions before their national counterparts do. Yet virtually all of them still lack the basic tools for measuring progress.

Closing this gap has become urgent, because climate change is already disrupting cities around the world. Cities on almost every continent are being ravaged by heat waves, fires, typhoons, and hurricanes. Coastal cities are being battered by severe flooding connected to sea-level rise. And some megacities and their sprawling peripheries are being reconsidered altogether, as in the case of Indonesia’s $34 billion plan to move its capital from Jakarta to Borneo by 2024.

Worse, while many subnational governments are setting ambitious new green targets, over 40% of cities (home to some 400 million people) still have no meaningful climate-preparedness strategy. And the share of cities that do have one is even lower in Africa and Asia – where an estimated 90% of all future urbanization is expected to occur over the next three decades.

We know that climate-preparedness plans are closely correlated with investment in climate action including nature-based solutions and systematic resilience. But strategies alone are not enough. We also need to scale up data-driven monitoring platforms. Powered by satellites and sensors, these systems can track temperatures inside and outside buildings, alert city dwellers to air-quality issues, and provide high-resolution information on concentrations of specific GHGs (carbon dioxide and nitrogen dioxide) and particulate matter…(More)”.
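
As a purely illustrative sketch of the kind of rule such a monitoring platform might run (the thresholds, field names, and district names below are assumptions, not details from the article), a sensor feed could be screened for air-quality alerts like this:

```python
# Assumed, WHO-guideline-style thresholds for illustration only; not regulatory limits.
LIMITS = {"pm2_5": 15.0, "no2": 25.0}  # micrograms per cubic metre

def air_quality_alerts(readings):
    """Return districts whose sensor readings exceed any assumed threshold."""
    alerts = []
    for sensor in readings:
        exceeded = {pollutant: value
                    for pollutant, value in sensor["levels"].items()
                    if value > LIMITS.get(pollutant, float("inf"))}
        if exceeded:
            alerts.append({"district": sensor["district"], "exceeded": exceeded})
    return alerts

# Hypothetical readings from two districts; only the first breaches a threshold.
sample = [
    {"district": "riverside", "levels": {"pm2_5": 42.0, "no2": 18.0}},
    {"district": "old_town",  "levels": {"pm2_5": 9.0,  "no2": 12.0}},
]
print(air_quality_alerts(sample))
```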

Empowering AI Leadership: AI C-Suite Toolkit


Toolkit by the World Economic Forum: “Artificial intelligence (AI) is one of the most important technologies for business, the economy and society and a driving force behind the Fourth Industrial Revolution. C-suite executives need to understand its possibilities and risks. This requires a multifaceted approach and holistic grasp of AI, spanning technical, organizational, regulatory, societal and also philosophical aspects. This toolkit provides a one-stop place for corporate executives to identify and understand the multiple and complex issues that AI raises for their business and society. It provides a practical set of tools to help them comprehend AI’s impact on their roles, ask the right questions, identify the key trade-offs and make informed decisions on AI strategy, projects and implementations…(More)”.

Surveillance Publishing


Working paper by Jefferson D. Pooley: “…This essay lingers on a prediction too: Clarivate’s business model is coming for scholarly publishing. Google is one peer, but the company’s real competitors are Elsevier, Springer Nature, Wiley, Taylor & Francis, and SAGE. Elsevier, in particular, has been moving into predictive analytics for years now. Of course the publishing giants have long profited off of academics and our university employers—by packaging scholars’ unpaid writing-and-editing labor only to sell it back to us as usuriously priced subscriptions or APCs. That’s a lucrative business that Elsevier and the others won’t give up. But they’re layering another business on top of their legacy publishing operations, in the Clarivate mold. The data trove that publishers are sitting on is, if anything, far richer than the citation graph alone.

Why worry about surveillance publishing? One reason is the balance sheet, since the companies’ trading in academic futures will further pad profits at the expense of taxpayers and students. The bigger reason is that our behavior—once alienated from us and abstracted into predictive metrics—will double back onto our work lives. Existing biases, like male academics’ propensity for self-citation, will receive a fresh coat of algorithmic legitimacy. More broadly, the academic reward system is already distorted by metrics. To the extent that publishers’ tallies and indices get folded into grant-making, tenure-and-promotion, and other evaluative decisions, the metric tide will gain power. The biggest risk is that scholars will internalize an analytics mindset, one already encouraged by citation counts and impact factors….(More)”.

Technology and the Global Struggle for Democracy


Essay by Manuel Muniz: “The commemoration of the first anniversary of the January 6, 2021, attack on the US Capitol by supporters of former President Donald Trump showed that the extreme political polarization that fueled the riot also frames Americans’ interpretations of it. It would, however, be gravely mistaken to view what happened as a uniquely American phenomenon with uniquely American causes. The disruption of the peaceful transfer of power that day was part of something much bigger.

As part of the commemoration, President Joe Biden said that a battle is being fought over “the soul of America.” What is becoming increasingly clear is that this is also true of the international order: its very soul is at stake. China is rising and asserting itself. Populism is widespread in the West and major emerging economies. And chauvinistic nationalism has re-emerged in parts of Europe. All signs point to increasing illiberalism and anti-democratic sentiment around the world.

Against this backdrop, the US hosted in December a (virtual) “Summit for Democracy” that was attended by hundreds of national and civil-society leaders. The message of the gathering was clear: democracies must assert themselves firmly and proactively. To that end, the summit devoted numerous sessions to studying the digital revolution and its potentially harmful implications for our political systems.

Emerging technologies pose at least three major risks for democracies. The first concerns how they structure public debate. Social networks balkanize public discourse by segmenting users into ever smaller like-minded communities. Algorithmically-driven information echo chambers make it difficult to build social consensus. Worse, social networks are not liable for the content they distribute, which means they can allow misinformation to spread on their platforms with impunity…(More)”.

Are we witnessing the dawn of post-theory science?


Essay by Laura Spinney: “Does the advent of machine learning mean the classic methodology of hypothesise, predict and test has had its day?…

Isaac Newton apocryphally discovered his second law after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship – one that could be expressed as an equation, F=ma – and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).

Contrast how science is increasingly done today. Facebook’s machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can’t lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that – no theory, in a word. They just work and do so well. We witness the social effects of Facebook’s predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were – oversimplifications of reality. Soon, the old scientific method – hypothesise, predict, test – would be relegated to the dustbin of history. We’d stop looking for the causes of things and be satisfied with correlations.

With the benefit of hindsight, we can say that what Anderson saw is true (he wasn’t alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. “We have leapfrogged over our ability to even write the theories that are going to be useful for description,” says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. “We don’t even know what they would look like.”

But Anderson’s prediction of the end of theory looks to have been premature – or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what’s the best way to acquire knowledge and where does science go from here?…(More)”

Our Common AI Future – A Geopolitical Analysis and Road Map, for AI Driven Sustainable Development, Science and Data Diplomacy


(Open Access) Book by Francesco Lapenta: “The premise of this concise but thorough book is that the future, while uncertain and open, is not arbitrary, but the result of a complex series of competing decisions, actors, and events that began in the past, have reached a certain configuration in the present, and will continue to develop into the future. These past and present conditions constitute the basis and origin of future developments that have the potential to take shape as a variety of different possible, probable, undesirable or desirable future scenarios. The realisation that these future scenarios cannot be totally arbitrary gives scope to the study of the past, which is indispensable to fully understand the facts, actors, and forces that contributed to the formation of the present, and how certain systems, or dominant models, came to be established (I). The relative openness of future scenarios gives scope to the study of what competing forces and models might exist, their early formation, actors, and initiatives (II), and how they may act as catalysts for alternative theories, models (III and IV) and actions that can influence our future and change its path (V)…

The analyses in the book, which are loosely divided into three phases, move from the past to the present, and begin by identifying best practices and some of the key initiatives that have attempted to achieve these global collaborative goals over the last few decades. Then, moving forward, they describe a roadmap to a possible future based on already existing and developing theories, initiatives, and tools that could underpin these global collaborative efforts in the specific areas of AI and Sustainable Development. In the Road Map for AI Driven Sustainable Development, the analyses identify and stand on the shoulders of a number of past and current global initiatives that have worked for decades to lay the groundwork for this alternative evolutionary and collaborative model. The title of this book acknowledges, and encourages readers to engage with, one of these pivotal efforts: the “Our Common Future” report, published in 1987 by the Brundtland Commission (the World Commission on Environment and Development, WCED). Building on the report’s ambitious humanistic and socioeconomic vision, the analyses investigate a variety of existing and developing best practices that could lead to, or inspire, a shared scientific collaborative model for AI development. They rest on the understanding that, despite political rivalry and competition, governments should collaborate on at least two fundamental issues. One, establish a set of global “Red Lines” to prohibit the development and use of AIs in specific applications that might pose an ethical or existential threat to humanity and the planet. And two, create a set of “Green Zones” for scientific diplomacy and cooperation in order to capitalize on the opportunities that the impending AIs era may represent in confronting major collective challenges such as the health and climate crises, the energy crisis, and the sustainable development goals identified in the report and developed by other subsequent global initiatives…(More)”.

A time for humble governments


Essay by Juha Leppänen: “Let’s face it. During the last decade, liberal democracies have not been especially successful in steering societies through our urgent, collective problems. This is reflected in the 2021 Edelman Trust Barometer Spring Update: A World in Trauma: Democratic governments are less trusted in general by their own citizens. While some governments have fared better than others, the trend is clear…

Humility entails both a willingness to listen to different opinions, and a capacity to review one’s own actions in light of new insights. True humility does not need to be deferential. But embracing humility legitimises leadership by cultivating stronger relationships and greater trust among other political and societal stakeholders — particularly with those with different perspectives. In doing so, it can facilitate long-term action and ensure policies are much more resilient in the face of uncertainty.

There are several core steps to establishing humble governance:

  • Some common ground is better than none, so strike a thin consensus with the opposition around a broad framework goal. For example, consider carbon neutrality targets. To begin with, forging consensus does not require locking down the details of how and what. Take emissions in agriculture: in this case, all that is needed is general agreement that significant cuts in CO2 emissions in this sector are necessary in order to hit our national net zero goal. While this can be harder in extremely polarised environments, a thin consensus of some sort can usually be built on any problem that is already widely recognised — no matter how small. This is even the case in political environments dominated by populist leaders.
  • Devolve problem-solving systemically. First, set aside hammering out blueprints and focus on issuing a broad launch plan, backed by a robust process for governmental decision-making. Look for intelligent incentives to prompt collaboration. In the carbon neutrality example, this would begin by identifying where the most critical potential tensions or jurisdictional disputes lie. Since local stakeholders tend to want to resolve tensions locally, give them a clear role in the planning. Divide up responsibility for achieving goals across sectors of the economy, identify key stakeholders needed at the table in each sector, and create a procedure for reviewing progress. Collaboration can be incentivised by offering those who participate the ability, say, to influence future regulations, or by penalising those who refuse to take part.
  • Revise framework goals through robust feedback mechanisms. A truly humble government’s steering documents should be seen as living documents, rather than definitive blueprints. There should be regular consultation with stakeholders on progress, and elected representatives should review the progress on the original problem statement and how success is defined. Where needed, the government in power can use this process to decide whether to reopen discussions with the opposition about how to revise the current goals…(More)”.