The Power and Perils of the “Artificial Hand”: Considering AI Through the Ideas of Adam Smith


Speech by Gita Gopinath: “…Nowadays, it’s almost impossible to talk about economics without invoking Adam Smith. We take for granted many of his concepts, such as the division of labor and the invisible hand. Yet, at the time when he was writing, these ideas went against the grain. He wasn’t afraid to push boundaries and question established thinking.

Smith grappled with how to advance well-being and prosperity at a time of great change. The Industrial Revolution was ushering in new technologies that would revolutionize the nature of work, create winners and losers, and potentially transform society. But their impact wasn’t yet clear. The Wealth of Nations, for example, was published the same year James Watt unveiled his steam engine.

Today, we find ourselves at a similar inflection point, where a new technology, generative artificial intelligence, could change our lives in spectacular—and possibly existential—ways. It could even redefine what it means to be human.

Given the parallels between Adam Smith’s time and ours, I’d like to propose a thought experiment: If he were alive today, how would Adam Smith have responded to the emergence of this new “artificial hand”?…(More)”.

OECD Recommendation on Digital Identity


OECD Recommendation: “…Recommends that Adherents prioritise inclusion and minimise barriers to access to and the use of digital identity. To this effect, Adherents should: 

 1. Promote accessibility, affordability, usability, and equity across the digital identity lifecycle in order to increase access to a secure and trusted digital identity solution, including by vulnerable groups and minorities in accordance with their needs; 

2. Take steps to ensure that access to essential services, including those in the public and private sector, is not restricted or denied to natural persons who do not want to, or cannot, access or use a digital identity solution; 

3. Facilitate inclusive and collaborative stakeholder engagement throughout the design, development, and implementation of digital identity systems, to promote transparency, accountability, and alignment with user needs and expectations; 

4. Raise awareness of the benefits and secure uses of digital identity and the way in which the digital identity system protects users while acknowledging risks and demonstrating the mitigation of potential harms; 

5. Take steps to ensure that support is provided through appropriate channel(s), for those who face challenges in accessing and using digital identity solutions, and identify opportunities to build the skills and capabilities of users; 

6. Monitor, evaluate and publicly report on the effectiveness of the digital identity system, with a focus on inclusiveness and minimising the barriers to the access and use of digital identity…

Recommends that Adherents take a strategic approach to digital identity and define roles and responsibilities across the digital identity ecosystem…(More)”.

Reimagining Digital ID


Report by the World Economic Forum: “For centuries, ID, a way for people to prove attributes about themselves, has played a pivotal role in society. Yet today roughly 850 million people still lack legal ID, making it difficult or impossible for them to fully engage with society. Simultaneously, many of those with ID do not have privacy and control over how their data is shared.

Innovative approaches to digital ID are now being developed that could help expand access while offering individuals control. Decentralized ID, one such approach, could offer a secure way of managing data without depending on intermediaries. While decentralized ID presents opportunities, it also poses risks and faces challenges. Without fit-for-purpose policy, regulation and technology, the potential for these systems to have a socially beneficial impact will be limited.

The result of an international collaboration involving more than 100 experts spanning the public and private sectors, this report provides tools, frameworks and recommendations for government officials, regulators and executives seeking to engage with decentralized ID…(More)”

Digital Freedoms in French-Speaking African Countries


Report by AFD: “As digital penetration increases in countries across the African continent, their citizens face growing risks and challenges. Indeed, beyond facilitating access to knowledge (such as the online encyclopedia Wikipedia), to leisure-related tools (such as YouTube), and to sociability (such as social networks), digital technology offers an unprecedented space for democratic expression. 

However, these online civic spaces are under threat. Several governments have enacted vaguely worded laws that allow for arbitrary arrests.

Several countries have implemented repressive practices restricting freedom of expression and access to information. This is what is known as “digital authoritarianism”, which is on the rise in many countries.

This report takes stock of digital freedoms in 26 French-speaking African countries, and proposes concrete actions to improve citizen participation and democracy…(More)”

Artificial Intelligence in the COVID-19 Response


Report by Sean Mann, Carl Berdahl, Lawrence Baker, and Federico Girosi: “We conducted a scoping review to identify AI applications used in the clinical and public health response to COVID-19. Interviews with stakeholders early in the research process helped inform our research questions and guide our study design. We conducted a systematic search, screening, and full text review of both academic and gray literature…

  • AI is still an emerging technology in health care, with growing but modest rates of adoption in real-world clinical and public health practice. The COVID-19 pandemic showcased the wide range of clinical and public health functions performed by AI as well as the limited evidence available on most AI products that have entered use.
  • We identified 66 AI applications (full list in Appendix A) used to perform a wide range of diagnostic, prognostic, and treatment functions in the clinical response to COVID-19. This included applications used to analyze lung images, evaluate user-reported symptoms, monitor vital signs, predict infections, and aid in breathing tube placement. Some applications were used by health systems to help allocate scarce resources to patients.
  • Many clinical applications were deployed early in the pandemic, and most were used in the United States, other high-income countries, or China. A few applications were used to care for hundreds of thousands or even millions of patients, although most were used to an unknown or limited extent.
  • We identified 54 AI-based public health applications used in the pandemic response. These included AI-enabled cameras used to monitor health-related behavior and health messaging chatbots used to answer questions about COVID-19. Other applications were used to curate public health information, produce epidemiologic forecasts, or help prioritize communities for vaccine allocation and outreach efforts.
  • We found studies supporting the use of 39 clinical applications and 8 public health applications, although few of these were independent evaluations, and we found no clinical trials evaluating any application’s impact on patient health. We found little evidence available on entire classes of applications, including some used to inform care decisions such as patient deterioration monitors.
  • Further research is needed, particularly independent evaluations on application performance and health impacts in real-world care settings. New guidance may be needed to overcome the unique challenges to evaluating AI application impacts on patient- and population-level health outcomes….(More)” – See also: The #Data4Covid19 Review

How We Ruined The Internet


Paper by Micah Beck and Terry Moore: “At the end of the 19th century the logician C.S. Peirce coined the term “fallibilism” for “… the doctrine that our knowledge is never absolute but always swims, as it were, in a continuum of uncertainty and of indeterminacy”. In terms of scientific practice, this means we are obliged to reexamine the assumptions, the evidence, and the arguments for conclusions that subsequent experience has cast into doubt. In this paper we examine an assumption that underpinned the development of the Internet architecture, namely that a loosely synchronous point-to-point datagram delivery service could adequately meet the needs of all network applications, including those which deliver content and services to a mass audience at global scale. We examine how the inability of the Networking community to provide a public and affordable mechanism to support such asynchronous point-to-multipoint applications led to the development of private overlay infrastructure, namely CDNs and Cloud networks, whose architecture stands at odds with the Open Data Networking goals of the early Internet advocates. We argue that the contradiction between those initial goals and the monopolistic commercial imperatives of hypergiant overlay infrastructure operators is an important reason for the apparent contradiction posed by the negative impact of their most profitable applications (e.g., social media) and strategies (e.g., targeted advertisement). We propose that, following the prescription of Peirce, we can only resolve this contradiction by reconsidering some of our deeply held assumptions…(More)”.

The History of Rules


Interview with Lorraine Daston: “The rules book began with an everyday observation of the dazzling variety and ubiquity of rules. Every culture has rules, but they’re all different.

I eventually settled on three major meanings of rules: rules as laws, rules as algorithms, and finally, rules as models. The last of these meanings was predominant in the Western tradition until the end of the 18th century, and I set out to trace what happened to rules as models, but also the rise of algorithmic rules. It’s hard to imagine now, but the word “algorithm” didn’t even have an entry in the most comprehensive mathematical encyclopedias of the late 19th century.

To get at these changes over time, I cast my nets very wide. I looked at cookbooks, I looked at the rules of warfare. I looked at rules of games. I looked at rules of monastic orders and traffic regulations, sumptuary regulations, spelling rules, and of course algorithms for how to calculate. And if there’s one take-home message from the book, it is a distinction between thick and thin rules.

Thick rules are rules that come upholstered with all manner of qualifications, examples, caveats, and exceptions. They are rules that are braced to confront a world in which recalcitrant particulars refuse to conform to universals—as opposed to thin rules, of which algorithms are perhaps the best prototype: thin rules are formulated without attention to circumstances. Thin rules brook no quarter; they offer no sense of a variable world. Many bureaucratic rules, especially bureaucratic rules in their Kafkaesque exaggeration, also fit this description.

The arc of the book is not to describe how thick rules became thin rules (because we still have thick and thin rules around us all the time), but rather to determine the point at which thick rules become necessary—when you must anticipate high variability and therefore must tweak your rule to fit circumstances—as opposed to the stable, predictable settings in which we turn to thin rules.

In some historically exceptional cases, thin rules can actually get a job done because the context can be standardized and stabilized…(More)”.

The Metaverse and Homeland Security


Report by Timothy Marler, Zara Fatima Abdurahaman, Benjamin Boudreaux, and Timothy R. Gulden: “The metaverse is an emerging concept and capability supported by multiple underlying emerging technologies, but its meaning and key characteristics can be unclear and will likely change over time. Thus, its relevance to some organizations, such as the U.S. Department of Homeland Security (DHS), can be unclear. This lack of clarity can lead to unmitigated threats and missed opportunities. It can also inhibit healthy public discourse and effective technology management generally. To help address these issues, this Perspective provides an initial review of the metaverse concept and how it might be relevant to DHS. As a critical first step in the analysis of any emerging technology, the authors review current definitions and identify key practical characteristics. Often, regardless of a precise definition, it is the fundamental capabilities that are central to discussion and management. Then, given a foundational understanding of what a metaverse entails, the authors summarize primary goals and relevant needs for DHS. Ultimately, in order to be relevant, technologies must align with the actual needs of various organizations or users. By cross-walking exemplary DHS needs that stem from a variety of mission sets with pervasive characteristics of metaverses, the authors demonstrate that metaverses are, in fact, relevant to DHS. Finally, the authors identify specific threats and opportunities that DHS could proactively manage. Although this work focuses the discussion of threats and opportunities on DHS, it has broad implications. This work provides a foundation on which further discussions and research can build, minimizing disparities and discoordination in development and policy…(More)”.

Yes, No, Maybe? Legal & Ethical Considerations for Informed Consent in Data Sharing and Integration


Report by Deja Kemp, Amy Hawn Nelson, & Della Jenkins: “Data sharing and integration are increasingly commonplace at every level of government, as cross-program and cross-sector data provide valuable insights to inform resource allocation, guide program implementation, and evaluate policies. Data sharing, while routine, is not without risks, and clear legal frameworks for data sharing are essential to mitigate those risks, protect privacy, and guide responsible data use. In some cases, federal privacy laws offer clear consent requirements and outline explicit exceptions where consent is not required to share data. In other cases, the law is unclear or silent regarding whether consent is needed for data sharing. Importantly, consent can present both ethical and logistical challenges, particularly when integrating cross-sector data. This brief will frame out key concepts related to consent; explore major federal laws governing the sharing of administrative data, including individually identifiable information; and examine important ethical implications of consent, particularly in cases when the law is silent or unclear. Finally, this brief will outline the foundational role of strong governance and consent frameworks in ensuring ethical data use and offer technical alternatives to consent that may be appropriate for certain data uses….(More)”.

Generative Artificial Intelligence and Data Privacy: A Primer


Report by Congressional Research Service: “Since the public release of OpenAI’s ChatGPT, Google’s Bard, and other similar systems, some Members of Congress have expressed interest in the risks associated with “generative artificial intelligence (AI).” Although exact definitions vary, generative AI is a type of AI that can generate new content—such as text, images, and videos—through learning patterns from pre-existing data. It is a broad term that may include various technologies and techniques from AI and machine learning (ML). Generative AI models have received significant attention and scrutiny due to their potential harms, such as risks involving privacy, misinformation, copyright, and non-consensual sexual imagery. This report focuses on privacy issues and relevant policy considerations for Congress. Some policymakers and stakeholders have raised privacy concerns about how individual data may be used to develop and deploy generative models. These concerns are not new or unique to generative AI, but the scale, scope, and capacity of such technologies may present new privacy challenges for Congress…(More)”.