Commit to transparent COVID data until the WHO declares the pandemic is over


Edouard Mathieu at Nature: “…There are huge inequalities in data reporting around the world. Most of my time over the past two years has been spent digging through official websites and social-media accounts of hundreds of governments and health authorities. Some governments still report official statistics in low-resolution images on Facebook or infrequent press conferences on YouTube — often because they lack resources to do better. Some countries, including China and Iran, have provided no files at all.

Sometimes, it’s a lack of awareness: government officials might think that a topline figure somewhere in a press release is sufficient. Sometimes, the problem is reluctance: publishing the first file would mean a flood of requests for more data that authorities can’t or won’t publish.

Some governments rushed to launch pandemic dashboards, often built as one-off jobs by hired contractors. Civil servants couldn’t upgrade them as the pandemic shifted and new metrics and charts became more relevant. I started building our global data set on COVID-19 vaccinations in 2021, but many governments didn’t supply data for weeks — sometimes months — after roll-outs because their dashboards couldn’t accommodate the data. Worse, they rarely supplied underlying data essential for others to download and produce their own analyses. (My team asked repeatedly.)

Over and over, I’ve seen governments emphasize making dashboards look good when the priority should be making data available. A simple text file would do. After all, research groups like mine and citizens with expertise in data-visualization tools are more than willing to create a useful website or mobile app. But to do so, we need the raw material in a machine-readable format….(More)”.
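Mathieu's point about machine-readable raw material is easy to make concrete. A minimal sketch, with invented figures: a plain CSV file like the one below can be parsed by any standard tooling, whereas a chart embedded in a Facebook image cannot.

```python
import csv
import io

# A hypothetical minimal daily-report file (invented numbers): this is
# all a statistics office needs to publish for others to build on.
raw = """date,region,new_cases,new_deaths,vaccinations
2021-03-01,North,120,3,4500
2021-03-02,North,95,1,5200
"""

# Any downstream user can consume it with standard tooling.
rows = list(csv.DictReader(io.StringIO(raw)))
total_cases = sum(int(r["new_cases"]) for r in rows)
print(total_cases)  # 215
```

A file this simple is enough raw material for research groups or citizen developers to build the dashboards themselves.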

OECD Framework for the Classification of AI systems


OECD Digital Economy Paper: “As artificial intelligence (AI) permeates all sectors at a rapid pace, different AI systems bring different benefits and risks. In comparing virtual assistants, self-driving vehicles and video recommendations for children, it is easy to see that the benefits and risks of each are very different. Their specificities will require different approaches to policy making and governance. To help policy makers, regulators, legislators and others characterise AI systems deployed in specific contexts, the OECD has developed a user-friendly tool to evaluate AI systems from a policy perspective. It can be applied to the widest range of AI systems across the following dimensions: People & Planet; Economic Context; Data & Input; AI Model; and Task & Output. Each of the framework’s dimensions has a subset of properties and attributes to define and assess policy implications and to guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles….(More)”.
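The framework's shape can be sketched as a data structure. The five dimension names come from the excerpt above; the attribute keys and the example system below are illustrative assumptions, not the OECD's actual property list.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemProfile:
    """One profile per deployed AI system, classified along the five
    OECD dimensions. Attribute contents here are placeholders."""
    name: str
    people_and_planet: dict = field(default_factory=dict)
    economic_context: dict = field(default_factory=dict)
    data_and_input: dict = field(default_factory=dict)
    ai_model: dict = field(default_factory=dict)
    task_and_output: dict = field(default_factory=dict)

# A hypothetical classification of a video-recommendation system.
recommender = AISystemProfile(
    name="video recommender for children",
    people_and_planet={"users": "children", "wellbeing_impact": "high"},
    economic_context={"sector": "media", "deployment": "consumer"},
    data_and_input={"source": "behavioural logs", "personal_data": True},
    ai_model={"type": "ranking model", "learning": "continuous"},
    task_and_output={"task": "recommendation", "action_autonomy": "full"},
)
print(recommender.people_and_planet["users"])  # children
```

Filling in such a profile per system is the kind of exercise the framework is meant to support: two systems with the same model type can land in very different policy categories once Data & Input and People & Planet are considered.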

Data Dissemination in the Digital Age


Report by PARIS21 and Open Data Watch: “…a first account of the state of data portals in national statistical offices.

Data portals form a critical link between producers and users of data, and they facilitate the use of data in evidence-based decision making. National statistical offices, supported by their international partners, have embraced data portals as a way to disseminate official data.

However, data portals need to be designed and implemented in a sustainable manner to benefit end users. Currently, institutions lack common principles and guidelines for how these data portals should be set up and managed.

This report proposes a holistic method to evaluate data portals and offers recommendations to improve their use and function…(More)”.

The World Uncertainty Index


Paper by Hites Ahir, Nicholas Bloom & Davide Furceri: “We construct the World Uncertainty Index (WUI) for an unbalanced panel of 143 individual countries on a quarterly basis from 1952 onward. The index is the frequency of the word “uncertainty” in the quarterly Economist Intelligence Unit country reports. Globally, the index spikes around major events like the Gulf War, the euro debt crisis, the Brexit vote and the COVID pandemic. The level of uncertainty is higher in developing countries but is more synchronized across advanced economies, with their tighter trade and financial linkages. In a panel vector autoregressive setting, we find that innovations in the WUI foreshadow significant declines in output. This effect is larger and more persistent in countries with lower institutional quality, and in sectors with greater financial constraints…(More)”.
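The construction is simple enough to sketch. A minimal version, counting occurrences of "uncertainty" in a report: the per-thousand-words normalization and the sample text below are assumptions for illustration, not the authors' actual code or data.

```python
import re

def wui_score(report_text: str, per: int = 1000) -> float:
    """Occurrences of the word 'uncertainty' per `per` words of a
    country report. Illustrative sketch only; the published index's
    exact normalization may differ."""
    words = re.findall(r"[a-z']+", report_text.lower())
    if not words:
        return 0.0
    return per * words.count("uncertainty") / len(words)

# An invented two-sentence "country report" excerpt.
sample = ("Political uncertainty rose sharply this quarter. "
          "Firms remain uncertain about trade policy.")
print(round(wui_score(sample), 1))  # 83.3
```

Computed quarterly per country, such scores form exactly the kind of panel the paper feeds into its vector autoregressions.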

The committeefication of collective action in Africa


Paper by Caroline Archambault and David Ehrhardt: “Over the last century, Africa has witnessed considerable committeefication, a process by which committees have become increasingly important to organise collective action. Throughout the continent, committees have come to preside over everything from natural resource management to cultural life, and from peacebuilding to community consultation. What has been the impact of this dramatic institutional change on the nature and quality of collective action? Drawing on decades of anthropological research and development work in East Africa – studying, working with and working in committees of various kinds – this article presents an approach to addressing this question.

We show how committees have surface features as well as deep functions, and that the impact of committeefication depends not only on their features and functions but also on the pathways through which they proliferate. On the surface, committees aim for inclusive and deliberative decision making, even if they vary in the specifics of their missions, membership, decision-making rules, and level of autonomy. But their deep functions can be quite different: a façade for accessing recognition or resources; a classroom for learning leadership skills; or a club for elites to pursue their shared interests. The impact of these features and functions depends on the pathways through which they grow: autonomous from existing forms of collective action; in synergistic cooperation; or in competition, possibly weakening or even destroying existing local institutions.

Community-based development interventions often rely heavily on committeefied collective action. This paper identifies the benefits that this strategy can have, but also shows its potential to weaken or even destroy existing forms of collective action. On that basis, we suggest that it is imperative to turn more systematic analytical attention to committees, and assess the extent to which they are delivering development or crippling collective action in the guise of democracy and deliberation…(More)”.

Effective and Trustworthy Implementation of AI Soft Law Governance


Introduction by Carlos Ignacio Gutierrez, Gary E. Marchant and Katina Michael: “This double special issue (together with the IEEE Technology and Society Magazine, Dec 2021) is dedicated to examining the governance of artificial intelligence (AI) through soft law. This kind of law is considered “soft” as opposed to “hard” because it comes in the form of governance programs whose goal is to create substantive expectations that are not directly enforceable by government [1], [2]. Soft law materializes out of necessity to enable a technological innovation to thrive and not be hampered by disparate practices that may negatively impact its trajectory, causing a premature “valley of death” exit scenario [3]. Soft laws are meant to be “just in time” to grant industry fundamental guidance when dealing with complex socio-technical assemblages that may have significant socio-legal implications upon diffusion into the market. Anticipatory governance is closely connected with soft law, in that intended and unintended consequences of a new technology may well be anticipated and proactively addressed [4].

Soft law’s role in governance is to influence the implementation of new technologies whose inception into society has outpaced hard law. Its usage is not meant to diminish the need for regulations, but rather to serve as an interim solution when the roll-out of a new technology is happening rapidly, resisting the urge to create reactive and premature laws that may well take too long to enter legislation in a given state. Mutual agreement and conformance toward common goals and technical protocols through soft law among industry representatives, associated government agencies, auxiliary service providers, and other stakeholders can lead to positive gains, including the potential for societal acceptance of a new technology, especially where there are adequate provisions to safeguard the customer and the general public…(More)”.

Metrics at Work: Journalism and the Contested Meaning of Algorithms


Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.

Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.

Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide…(More)”.

Global Cooperation on Digital Governance and the Geoeconomics of New Technologies in a Multi-polar World


A special collection of papers by the Centre for International Governance Innovation (CIGI) and King’s College London (KCL) resulting from: “… a virtual conference as part of KCL’s Project for Peaceful Competition. It brought together an intellectually and geographically diverse group of experts to discuss the geoeconomics of new digital technologies and the prospects for governance of the technologies in a multi-polar world. The papers prepared for discussion at the conference are collected in this series. An introduction summarizes (in heavily abbreviated form) the principal analytical conclusions emerging from the conference, together with the main policy recommendations put forward by participants….(More)”.

Technology is revolutionizing how intelligence is gathered and analyzed – and opening a window onto Russian military activity around Ukraine


Craig Nazareth at The Conversation: “…Through information captured by commercial companies and individuals, the realities of Russia’s military posturing are accessible to anyone via internet search or news feed. Commercial imaging companies are posting up-to-the-minute, geographically precise images of Russia’s military forces. Several news agencies are regularly monitoring and reporting on the situation. TikTok users are posting video of Russian military equipment on rail cars allegedly on their way to augment forces already in position around Ukraine. And internet sleuths are tracking this flow of information.

This democratization of intelligence collection in most cases is a boon for intelligence professionals. Government analysts are filling the need for intelligence assessments using information sourced from across the internet instead of primarily relying on classified systems or expensive sensors high in the sky or arrayed on the planet.

However, sifting through terabytes of publicly available data for relevant information is difficult. Knowing that much of the data could be intentionally manipulated to deceive complicates the task.

Enter the practice of open-source intelligence. The U.S. director of national intelligence defines Open-Source Intelligence, or OSINT, as the collection, evaluation and analysis of publicly available information. The information sources include news reports, social media posts, YouTube videos and satellite imagery from commercial satellite operators.

OSINT communities and government agencies have developed best practices for OSINT, and there are numerous free tools. Analysts can use the tools to develop network charts of, for example, criminal organizations by scouring publicly available financial records for criminal activity.
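The network-charting step described above can be sketched in plain Python: build an adjacency structure from transaction records and read off who connects to whom. All entities and records below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical publicly available transaction records: (payer, payee, amount).
records = [
    ("Acme Holdings", "Shell Co A", 50_000),
    ("Shell Co A", "Offshore Trust", 48_000),
    ("Acme Holdings", "Shell Co B", 20_000),
    ("Shell Co B", "Offshore Trust", 19_500),
    ("Offshore Trust", "Bank X", 60_000),
]

# Build an undirected adjacency list -- the skeleton of a network chart.
graph = defaultdict(set)
for payer, payee, _amount in records:
    graph[payer].add(payee)
    graph[payee].add(payer)

# Entities with the most connections are natural starting points for analysts.
hub = max(graph, key=lambda entity: len(graph[entity]))
print(hub, len(graph[hub]))  # Offshore Trust 3
```

Real OSINT tools add visualization, entity resolution, and much larger data, but the underlying structure is this kind of graph.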

Private investigators are using OSINT methods to support law enforcement, corporate and government needs. Armchair sleuths have used OSINT to expose corruption and criminal activity to authorities. In short, the majority of intelligence needs can be met through OSINT…

Even with OSINT best practices and tools, OSINT contributes to the information overload intelligence analysts have to contend with. The intelligence analyst is typically in a reactive mode trying to make sense of a constant stream of ambiguous raw data and information.

Machine learning, a set of techniques that allows computers to identify patterns in large amounts of data, is proving invaluable for processing OSINT information, particularly photos and videos. Computers are much faster at sifting through large datasets, so adopting machine learning tools and techniques to optimize the OSINT process is a necessity.

Identifying patterns makes it possible for computers to evaluate information for deception and credibility and predict future trends. For example, machine learning can be used to help determine whether information was produced by a human or by a bot or other computer program and whether a piece of data is authentic or fraudulent…(More)”.
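As a heavily simplified sketch of the pattern-identification idea: the toy detector below flags accounts whose posting intervals are suspiciously regular. This is a hand-written heuristic, not a learned model, and the cutoff and timelines are invented; production systems would combine many such signals in an actual machine-learning pipeline.

```python
from statistics import mean, pstdev

def looks_automated(post_times: list[float], cv_threshold: float = 0.1) -> bool:
    """Flag a timeline whose inter-post intervals are near-uniform.
    A coefficient of variation below `cv_threshold` (an arbitrary,
    illustrative cutoff) suggests scheduled, bot-like posting."""
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(intervals) < 2:
        return False  # too little data to judge
    m = mean(intervals)
    if m == 0:
        return True
    return pstdev(intervals) / m < cv_threshold

# Hypothetical timelines, in hours since each account's first post.
bot_like = [0, 6.0, 12.0, 18.0, 24.0, 30.0]   # posts every 6 hours exactly
human_like = [0, 1.5, 9.0, 9.4, 26.0, 31.2]   # bursty and irregular
print(looks_automated(bot_like), looks_automated(human_like))  # True False
```

A single feature like this is easy to evade; the value of machine learning is in weighing many weak signals (timing, text, network position) at a scale no analyst could match by hand.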

Society won’t trust A.I. until business earns that trust


Article by François Candelon, Rodolphe Charme di Carlo and Steven D. Mills: “…The concept of a social license—which was born when the mining industry, and other resource extractors, faced opposition to projects worldwide—differs from the other rules governing A.I.’s use. Academics such as Leeora Black and John Morrison, in the book The Social License: How to Keep Your Organization Legitimate, define the social license as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.”

The social license isn’t a document like a government permit; it’s a form of acceptance that companies must gain through consistent and trustworthy behavior as well as stakeholder interactions. Thus, a social license for A.I. will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates. 

Companies cannot award themselves social licenses; they will have to win them by proving they can be trusted. As Morrison argued in 2014, akin to the capability to dig a mine, the fact that an A.I.-powered solution is technologically feasible doesn’t mean that society will find its use morally and ethically acceptable. And losing the social license will have dire consequences, as natural resource companies, such as Shell and BP, have learned in the past…(More)”