Principles for effective beneficial ownership disclosure


Open Ownership: “The Open Ownership Principles (OO Principles) are a framework for considering the elements that influence whether the implementation of reforms to improve the transparency of the beneficial ownership of corporate vehicles will lead to effective beneficial ownership disclosure, that is, it generates high-quality and reliable data, maximising usability for users.

The OO Principles are intended to support governments implementing effective beneficial ownership transparency reforms and guide international institutions, civil society, and private sector actors in understanding and supporting reforms. They are a tool to identify and separate issues affecting implementation, and they provide a framework for assessing and improving existing disclosure regimes. If implemented together, the OO Principles enable disclosure systems to generate actionable and usable data across the widest range of policy applications of beneficial ownership data.

The nine principles are interdependent, but can be broadly grouped by the three main ways they improve data. The Definition, Coverage, and Detail principles enable data disclosure and collection. The Central register, Access, and Structured data principles facilitate data storage and auditability. Finally, the Verification, Up-to-date and historical records, and Sanctions and enforcement principles improve data quality and reliability….Download January 2023 version (translated versions are forthcoming)”

Whole of government innovation


Report by Geoff Mulgan: ‘Whole of government’ approaches – that aim to mobilise and align many ministries and agencies around a common challenge – have a long history. There have been notable examples during major wars, and around attempts to digitize societies, to cut energy use and to respond to the COVID-19 pandemic.

This paper has been prepared as part of a European Commission programme I’m chairing on ‘whole of government innovation’, which works with national governments to help them better align their actions.

My paper – linked below – looks at the lessons of history. It outlines the many tools governments can use to achieve cross-cutting goals, linking R&D to law, regulation and procurement, and collaborating with business, universities and civil society. It argues that it is unwise to rely only on committees and boards. It shows how these choices link to innovation strategy and funding, including the relevance of half a century of experiment with moon-shots and missions.

The paper describes how the organisational challenges vary depending on the nature of the task; why governments need to avoid the common technology, or ‘STI’, trap of focusing only on hardware and not on social arrangements or business models; why constellations and flotillas of coordination are usually more realistic than true ‘whole of government’ approaches; and the importance of mobilising hearts and minds as well as money and command.

Finally, it addresses the relevance of different approaches to current tasks such as the achievement of a net zero economy and society. The paper is shared as a working document – I’m keen to find new examples and approaches…(More)”.

How the Digital Transformation Changed Geopolitics


Paper by Dan Ciuriak: “In the late 2000s, a set of connected technological developments – introduction of the iPhone, deep learning through stacked neural nets, and application of GPUs to neural nets – resulted in the generation of truly astronomical amounts of data and provided the tools to exploit it. As the world emerged from the Great Financial Crisis of 2008-2009, data was decisively transformed from a mostly valueless by-product – “data exhaust” – to the “new oil”, the essential capital asset of the data-driven economy, and the “new plutonium” when deployed in social and political applications. This economy featured steep economies of scale, powerful economies of scope, network externalities in many applications, and pervasive information asymmetry. Strategic commercial policies at the firm and national levels were incentivized by the newfound scope to capture economic rents, destabilizing the rules-based system for trade and investment. At the same time, the new disruptive general-purpose technologies built on the nexus of Big Data, machine learning and artificial intelligence reconfigured geopolitical rivalry in several ways: by shifting great power rivalry onto new and critical grounds on which none had a decisive established advantage; by creating new vulnerabilities to information warfare in societies, especially open societies; and by enhancing the tools for social manipulation and the promotion of political personality cults. Machine learning, which essentially industrialized the very process of learning, drove an acceleration in the pace of innovation, which precipitated industrial policies driven by the desire to capture first mover advantage and by the fear of falling behind.

These developments provide a unifying framework to understand the progressive unravelling of the US-led global system as the decade unfolded, despite the fact that all the major innovations that drove the transition were within the US sphere and the US enjoyed first mover advantages. This is in stark contrast to the previous major economic transition to the knowledge-based economy, in which US leadership on the key innovations extended its dominance for decades and indeed powered its rise to its unipolar moment. The world did not respond well to the changed technological and economic conditions, and hence we are at war: hot war, cold war, technological war, trade war, social war, and internecine political war. This paper focuses on the role of technological and economic conditions in shaping geopolitics, which is critical to understand if we are to respond to the current world disorder and to prepare to handle the coming transition in technological and economic conditions to yet another new economic era based on machine knowledge capital…(More)”.

Urban AI Guide


Guide by Popelka, S., Narvaez Zertuche, L., Beroche, H.: “The idea for this guide arose from conversations with city leaders, who were confronted with new technologies, like artificial intelligence, as a means of solving complex urban problems, but who felt they lacked the background knowledge to properly engage with and evaluate the solutions. In some instances, this knowledge gap produced a barrier to project implementation or led to unintended project outcomes.

The guide begins with a literature review, presenting the state of the art in research on urban artificial intelligence. It then diagrams and describes an “urban AI anatomy,” outlining and explaining the components that make up an urban AI system. Insights from experts in the Urban AI community enrich this section, illuminating considerations involved in each component. Finally, the guide concludes with an in-depth examination of three case studies: water meter lifecycle in Winnipeg, Canada; curb digitization and planning in Los Angeles, USA; and air quality monitoring in Vilnius, Lithuania. Collectively, the case studies highlight the diversity of ways in which artificial intelligence can be operationalized in urban contexts, as well as the steps and requirements necessary to implement an urban AI project.

Since the field of urban AI is constantly evolving, we anticipate updating the guide annually. Please consider filling out the contribution form, if you have an urban AI use case that has been operationalized. We may contact you to include the use case as a case study in a future edition of the guide.

As a continuation of the guide, we offer customized workshops on urban AI, oriented toward municipalities and other urban stakeholders, who are interested in learning more about how artificial intelligence interacts in urban environments. Please contact us if you would like more information on this program…(More)”.

Democracy Report 2023: Defiance in the Face of Autocratization


New report by Varieties of Democracy (V-Dem): “…the largest global dataset on democracy with over 31 million data points for 202 countries from 1789 to 2022. Involving almost 4,000 scholars and other country experts, V-Dem measures hundreds of different attributes of democracy. V-Dem enables new ways to study the nature, causes, and consequences of democracy embracing its multiple meanings. The first section of the report shows global levels of democracy sliding back and advances made over the past 35 years diminishing. Most of the drastic changes have taken place within the last ten years, while there are large regional variations in relation to the levels of democracy people experience. The second section offers analyses on the geographies and population sizes of democratizing and autocratizing countries. In the third section we focus on the countries undergoing autocratization, and on the indicators deteriorating the most, including in relation to media censorship, repression of civil society organizations, and academic freedom. While disinformation, polarization, and autocratization reinforce each other, democracies reduce the spread of disinformation. This is a sign of hope, of better times ahead. And this is precisely the message carried forward in the fourth section, where we switch our focus to examples of countries that managed to push back and where democracy resurfaces again. Scattered over the world, these success stories share common elements that may bear implications for international democracy support and protection efforts. The final section of this year’s report offers a new perspective on shifting global balances of economic and trade power as a result of autocratization…(More)”.

The Future of Compute


Independent Review by a UK Expert Panel: “…Compute is a material part of modern life. It is among the critical technologies lying behind innovation, economic growth and scientific discoveries. Compute improves our everyday lives. It underpins all the tools, services and information we hold on our handheld devices – from search engines and social media, to streaming services and accurate weather forecasts. This technology may be invisible to the public, but life today would be very different without it.

Sectors across the UK economy, both new and old, are increasingly reliant upon compute. By leveraging the capability that compute provides, businesses of all sizes can extract value from the enormous quantity of data created every day; reduce the cost and time required for research and development (R&D); improve product design; accelerate decision making processes; and increase overall efficiency. Compute also enables advancements in transformative technologies, such as AI, which themselves lead to the creation of value and innovation across the economy. This all translates into higher productivity and profitability for businesses and robust economic growth for the UK as a whole.

Compute powers modelling, simulations, data analysis and scenario planning, and thereby enables researchers to develop new drugs; find new energy sources; discover new materials; mitigate the effects of climate change; and model the spread of pandemics. Compute is required to tackle many of today’s global challenges and brings invaluable benefits to our society.

Compute’s effects on society and the economy have already been and, crucially, will continue to be transformative. The scale of compute capabilities keeps accelerating at pace. The performance of the world’s fastest compute has grown by a factor of 626 since 2010. The compute requirements of the largest machine learning models have grown 10 billion times over the last 10 years. We expect compute demand to grow significantly as compute capability continues to increase. Technology today operates very differently to 10 years ago and, in a decade’s time, it will have changed once again.

Yet, despite compute’s value to the economy and society, the UK lacks a long-term vision for compute…(More)”.

Suspicion Machines


Lighthouse Reports: “Governments all over the world are experimenting with predictive algorithms in ways that are largely invisible to the public. What limited reporting there has been on this topic has largely focused on predictive policing and risk assessments in criminal justice systems. But there is an area where even more far-reaching experiments are underway on vulnerable populations with almost no scrutiny.

Fraud detection systems are widely deployed in welfare states, ranging from complex machine learning models to crude spreadsheets. The scores they generate have potentially life-changing consequences for millions of people. Until now, public authorities have typically resisted calls for transparency, either by claiming that disclosure would increase the risk of fraud or by citing the need to protect proprietary technology.

The sales pitch for these systems promises that they will recover millions of euros defrauded from the public purse. The caricature of the benefit cheat is a modern take on the classic trope of the undeserving poor, and much of the public debate in Europe — which has the most generous welfare states — is intensely politically charged.

The true extent of welfare fraud is routinely exaggerated by consulting firms, which are often also the algorithm vendors, talking it up to nearly 5 percent of benefits spending, while some national auditors’ offices estimate it at between 0.2 and 0.4 percent of spending. Distinguishing between honest mistakes and deliberate fraud in complex public systems is messy and hard.

When opaque technologies are deployed in search of political scapegoats, the potential for harm among some of the poorest and most marginalised communities is significant.

Hundreds of thousands of people are being scored by these systems based on data mining operations where there has been scant public consultation. The consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out…(More)”.

The Expanding Use of Technology to Manage Migration


Report by Marti Flacks, Erol Yayboke, Lauren Burke, and Anastasia Strouboulis: “Seeking to manage growing flows of migrants, the United States and European Union have dramatically expanded their engagement with migration origin and transit countries. This increasingly includes supporting the deployment of sophisticated technology to understand, monitor, and influence the movement of people across borders, expanding the spheres of interest to include the movement of people long before they reach U.S. and European borders.

This report from the CSIS Human Rights Initiative and CSIS Project on Fragility and Mobility examines two case studies of migration—one from Central America toward the United States and one from West and North Africa toward Europe—to map the use and export of migration management technologies and the associated human rights risks. Authors Marti Flacks, Erol Yayboke, Lauren Burke, and Anastasia Strouboulis provide recommendations for origin, transit, and destination governments on how to incorporate human rights considerations into their decisionmaking on the use of technology to manage migration…(More)”.

Toward a 21st Century National Data Infrastructure: Mobilizing Information for the Common Good


Report by National Academies of Sciences, Engineering, and Medicine: “Historically, the U.S. national data infrastructure has relied on the operations of the federal statistical system and the data assets that it holds. Throughout the 20th century, federal statistical agencies aggregated survey responses of households and businesses to produce information about the nation and diverse subpopulations. The statistics created from such surveys provide most of what people know about the well-being of society, including health, education, employment, safety, housing, and food security. The surveys also contribute to an infrastructure for empirical social- and economic-sciences research. Research using survey-response data, with strict privacy protections, led to important discoveries about the causes and consequences of societal challenges and also informed policymakers. Like other infrastructure, these essential statistics are easy to take for granted. Only when they are threatened do people recognize the need to protect them…(More)”.

Americans Can’t Consent to Companies’ Use of Their Data


A Report from the Annenberg School for Communication: “Consent has always been a central part of Americans’ interactions with the commercial internet. Federal and state laws, as well as decisions from the Federal Trade Commission (FTC), require either implicit (“opt out”) or explicit (“opt in”) permission from individuals for companies to take and use data about them. Genuine opt out and opt in consent requires that people have knowledge about commercial data-extraction practices as well as a belief they can do something about them. As we approach the 30th anniversary of the commercial internet, the latest Annenberg national survey finds that Americans have neither. High percentages of Americans don’t know, admit they don’t know, and believe they can’t do anything about basic practices and policies around companies’ use of people’s data…
High levels of frustration, concern, and fear compound Americans’ confusion: 80% say they have little control over how marketers can learn about them online; 80% agree that what companies know about them from their online behaviors can harm them. These and related discoveries from our survey paint a picture of an unschooled and admittedly incapable society that rejects the internet industry’s insistence that people will accept tradeoffs for benefits and despairs of its inability to predictably control its digital life in the face of powerful corporate forces. At a time when individual consent lies at the core of key legal frameworks governing the collection and use of personal information, our findings describe an environment where genuine consent may not be possible….

The aim of this report is to chart the particulars of Americans’ lack of knowledge about the commercial use of their data and their “dark resignation” in connection to it. Our goal is also to raise questions and suggest solutions about public policies that allow companies to gather, analyze, trade, and otherwise benefit from information they extract from large populations of people who are uninformed about how that information will be used and deeply concerned about the consequences of its use. In short, we find that informed consent at scale is a myth, and we urge policymakers to act with that in mind…(More)”.