Knightian Uncertainty


Paper by Cass R. Sunstein: “In 1921, John Maynard Keynes and Frank Knight independently insisted on the importance of making a distinction between uncertainty and risk. Keynes referred to matters about which “there is no scientific basis on which to form any calculable probability whatever.” Knight claimed that “Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated.” Knightian uncertainty exists when people cannot assign probabilities to imaginable outcomes. People might know that a course of action might produce bad outcomes A, B, C, D, and E, without knowing much or anything about the probability of each. Contrary to a standard view in economics, Knightian uncertainty is real. Dogs face Knightian uncertainty; horses and elephants face it; human beings face it; in particular, human beings who make policy, or develop regulations, sometimes face it. Knightian uncertainty poses challenging and unresolved issues for decision theory and regulatory practice. It bears on many problems, potentially including those raised by artificial intelligence. It is tempting to seek to eliminate the worst-case scenario (and thus to adopt the maximin rule), but serious problems arise if eliminating the worst-case scenario would (1) impose high risks and costs, (2) eliminate large benefits or potential “miracles,” or (3) create uncertain risks…(More)”.

Uses and Purposes of Beneficial Ownership Data


Report by Andres Knobel: “This report describes more than 30 uses and purposes of beneficial ownership data (beyond anti-money laundering) for a whole-government approach. It covers 5 cases for exposing corruption, 6 cases for protecting democracy, the rule of law and national assets, 9 cases for exposing tax abuse, 4 cases for exposing fraud and administrative violations, 3 cases for protecting the environment and mitigating climate change, 5 cases for ensuring fair market conditions and 4 cases for creating fairer societies…(More)”.

The Transferability Question


Report by Geoff Mulgan: “How should we think about the transferability of ideas and methods? If something works in one place and one time, how do we know if it, or some variant of it, will work in another place or another time?

This – the transferability question – is one that many organisations face: businesses, from retailers and taxi firms to restaurants and accountants wanting to expand to other regions or countries; governments wanting to adopt and adapt policies from elsewhere; and professions like doctors, wanting to know whether a kind of surgery, or a smoking cessation programme, will work in another context…

Here I draw on this literature to suggest not so much a generalisable method but rather an approach that starts by asking four basic questions of any promising idea:  

  • SPREAD: has the idea already spread to diverse contexts and been shown to work?  
  • ESSENTIALS: do we know what the essentials are, the crucial ingredients that make it effective?  
  • EASE: how easy is it to adapt or adopt (in other words, how many other things need to change for it to be implemented successfully)? 
  • RELEVANCE: how relevant is the evidence (or how similar is the context of evidence to the context of action)? 

Asking these questions is a protection against the vice of hoping that you can just ‘cut and paste’ an idea from elsewhere, but also an encouragement to be hungry for good ideas that can be adopted or adapted.    

I conclude by arguing that it is healthy for any society or government to assume that there are good ideas that could be adopted or adapted; it’s healthy to cultivate a hunger to learn; healthy to understand methods for analysing what aspects of an idea or model could be transferable; and there is great value in having institutions that are good at promoting and spreading ideas, at adoption and adaptation as well as innovation…(More)”.

The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now


Book by Hilke Schellmann: “Based on exclusive information from whistleblowers, internal documents, and real-world test results, Emmy award-winning Wall Street Journal contributor Hilke Schellmann delivers a shocking and illuminating exposé on the next civil rights issue of our time: how AI has already taken over the workplace and shapes our future.
 

Hilke Schellmann is an Emmy award-winning investigative reporter, Wall Street Journal and Guardian contributor, and Journalism Professor at NYU. In The Algorithm, she investigates the rise of artificial intelligence (AI) in the world of work. AI is now being used to decide who has access to an education, who gets hired, who gets fired, and who receives a promotion. Drawing on exclusive information from whistleblowers, internal documents and real-world tests, Schellmann discovers that many of the algorithms making high-stakes decisions are biased, racist, and do more harm than good. Algorithms are on the brink of dominating our lives and threaten our human future—if we don’t fight back. 
 
Schellmann takes readers on a journalistic detective story testing algorithms that have secretly analyzed job candidates’ facial expressions and tone of voice. She investigates algorithms that scan our online activity including Twitter and LinkedIn to construct personality profiles à la Cambridge Analytica. Her reporting reveals how employers track the location of their employees, the keystrokes they make, access everything on their screens and, during meetings, analyze group discussions to diagnose problems in a team. Even universities are now using predictive analytics for admission offers and financial aid…(More)”

Foundational Research Gaps and Future Directions for Digital Twins


Report by the National Academy of Engineering; National Academies of Sciences, Engineering, and Medicine: “Across multiple domains of science, engineering, and medicine, excitement is growing about the potential of digital twins to transform scientific research, industrial practices, and many aspects of daily life. A digital twin couples computational models with a physical counterpart to create a system that is dynamically updated through bidirectional data flows as conditions change. Going beyond traditional simulation and modeling, digital twins could enable improved medical decision-making at the individual patient level, predictions of future weather and climate conditions over longer timescales, and safer, more efficient engineering processes. However, many challenges remain before these applications can be realized.

This report identifies the foundational research and resources needed to support the development of digital twin technologies. The report presents critical future research priorities and an interdisciplinary research agenda for the field, including how federal agencies and researchers across domains can best collaborate…(More)”.

Charting the Emerging Geography of AI


Article by Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi: “Given the high stakes of this race, which countries are in the lead? Which are gaining on the leaders? How might this hierarchy shape the future of AI? Identifying AI-leading countries is not straightforward, as data, knowledge, algorithms, and models can, in principle, cross borders. Even the U.S.–China rivalry is complicated by the fact that AI researchers from the two countries cooperate — and more so than researchers from any other pair of countries. Open-source models are out there for everyone to use, with licensing accessible even for cutting-edge models. Nonetheless, AI development benefits from scale economies and, as a result, is geographically clustered as many significant inputs are concentrated and don’t cross borders that easily….

Rapidly accumulating pools of data in digital economies around the world are clearly one of the critical drivers of AI development. In 2019, we introduced the idea of “gross data product” of countries determined by the volume, complexity, and accessibility of data consumed alongside the number of active internet users in the country. For this analysis, we recognized that gross data product is an essential asset for AI development — especially for generative AI, which requires massive, diverse datasets — and updated the 2019 analyses as a foundation, adding drivers that are critical for AI development overall. That essential data layer makes the index introduced here distinct from other indicators of AI “vibrancy” or measures of global investments, innovations, and implementation of AI…(More)”.

Measuring Global Migration: Towards Better Data for All


Book by Frank Laczko, Elisa Mosler Vidal, Marzia Rango: “This book focuses on how to improve the collection, analysis and responsible use of data on global migration and international mobility. While migration remains a topic of great policy interest for governments around the world, there is a serious lack of reliable, timely, disaggregated and comparable data on it, and often insufficient safeguards to protect migrants’ information. Meanwhile, vast amounts of data about the movement of people are being generated in real time due to new technologies, but these have not yet been fully captured and utilized by migration policymakers, who often do not have enough data to inform their policies and programmes. The lack of migration data has been internationally recognized; the Global Compact for Safe, Orderly and Regular Migration urges all countries to improve data on migration to ensure that policies and programmes are “evidence-based”, but does not spell out how this could be done.

This book examines both the technical issues associated with improving data on migration and the wider political challenges of how countries manage the collection and use of migration data. The first part of the book discusses how much we really know about international migration based on existing data, and key concepts and approaches which are often used to measure migration. The second part of the book examines what measures could be taken to improve migration data, highlighting examples of good practice from around the world in recent years, across a range of different policy areas, such as health, climate change and sustainable development more broadly.

Written by leading experts on international migration data, this book is the perfect guide for students, policymakers and practitioners looking to understand more about the existing evidence base on migration and what can be done to improve it…(More)”. (See also: Big Data For Migration Alliance).

New group aims to professionalize AI auditing


Article by Louise Matsakis: “The newly formed International Association of Algorithmic Auditors (IAAA) is hoping to professionalize the sector by creating a code of conduct for AI auditors, training curriculums, and eventually, a certification program.

Over the last few years, lawmakers and researchers have repeatedly proposed the same solution for regulating artificial intelligence: require independent audits. But the industry remains a wild west; there are only a handful of reputable AI auditing firms and no established guardrails for how they should conduct their work.

Yet several jurisdictions have passed laws requiring tech firms to commission independent audits, including New York City. The idea is that AI firms should have to demonstrate their algorithms work as advertised, the same way companies need to prove they haven’t fudged their finances.

Since ChatGPT was released last year, a troubling norm has been established in the AI industry, which is that it’s perfectly acceptable to evaluate your own models in-house.

Leading startups like OpenAI and Anthropic regularly publish research about the AI systems they’re developing, including the potential risks. But they rarely commission independent audits, let alone publish the results, making it difficult for anyone to know what’s really happening under the hood…(More)”

Conversing with Congress: An Experiment in AI-Enabled Communication


Blog by Beth Noveck: “Each Member of the US House of Representatives speaks for 747,184 people – a staggering increase from 50 years ago. In the Senate, this disproportion is even more pronounced: on average each Senator represents 1.6 million more constituents than her predecessor a generation ago. That’s a lower level of representation than any other industrialized democracy.  

As the population grows (over 60% since 1970), so, too, does the volume of constituent communications. 

But that communication is not working well. According to the Congressional Management Foundation, this overwhelming communication volume leads to dissatisfaction among voters who feel their views are not adequately considered by their representatives…A pioneering and important new study published in Government Information Quarterly entitled “Can AI communication tools increase legislative responsiveness and trust in democratic institutions?” (Volume 40, Issue 3, June 2023, 101829) from two Cornell researchers is shedding new light on the practical potential for AI to create more meaningful constituent communication….Depending on their treatment group, participants either were or were not told when replies were AI-drafted.

Their findings are telling. Standard, generic responses fare poorly in gaining trust. In contrast, all AI-assisted responses, particularly those with human involvement, significantly boost trust. “Legislative correspondence generated by AI with human oversight may be received favorably.” 


While the study found AI-assisted replies to be more trustworthy, it also explored how the quality of these replies impacts perception. When they conducted this study, ChatGPT was still in its infancy and more prone to linguistic hallucinations, so in a second experiment they also tested how people perceived higher-quality, more relevant and responsive replies against lower-quality, irrelevant replies drafted with AI…(More)”.

A synthesis of evidence for policy from behavioral science during COVID-19


Paper by Kai Ruggeri et al: “Scientific evidence regularly guides policy decisions, with behavioural science increasingly part of this process. In April 2020, an influential paper proposed 19 policy recommendations (‘claims’) detailing how evidence from behavioural science could contribute to efforts to reduce impacts and end the COVID-19 pandemic. Here we assess 747 pandemic-related research articles that empirically investigated those claims. We report the scale of evidence and whether evidence supports them to indicate applicability for policymaking. Two independent teams, involving 72 reviewers, found evidence for 18 of 19 claims, with both teams finding evidence supporting 16 (89%) of those 18 claims. The strongest evidence supported claims that anticipated culture, polarization and misinformation would be associated with policy effectiveness. Claims suggesting trusted leaders and positive social norms increased adherence to behavioural interventions also had strong empirical support, as did appealing to social consensus or bipartisan agreement. Targeted language in messaging yielded mixed effects and there were no effects for highlighting individual benefits or protecting others. No available evidence existed to assess any distinct differences in effects between using the terms ‘physical distancing’ and ‘social distancing’. Analysis of 463 papers containing data showed generally large samples; 418 involved human participants with a mean of 16,848 (median of 1,699). That statistical power underscored improved suitability of behavioural science research for informing policy decisions. Furthermore, by implementing a standardized approach to evidence selection and synthesis, we amplify broader implications for advancing scientific evidence in policy formulation and prioritization…(More)”