The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking


Book by Shannon Vallor: “For many, technology offers hope for the future—that promise of shared human flourishing and liberation that always seems to elude our species. Artificial intelligence (AI) technologies spark this hope in a particular way. They promise a future in which human limits and frailties are finally overcome—not by us, but by our machines.

Yet rather than open new futures, today’s powerful AI technologies reproduce the past. Forged from oceans of our data into immensely powerful but flawed mirrors, they reflect the same errors, biases, and failures of wisdom that we strive to escape. Our new digital mirrors point backward. They show only where the data say that we have already been, never where we might venture together for the first time.

To meet today’s grave challenges to our species and our planet, we will need something new from AI, and from ourselves.

Shannon Vallor makes a wide-ranging, prophetic, and philosophical case for what AI could be: a way to reclaim our human potential for moral and intellectual growth, rather than lose ourselves in mirrors of the past. Rejecting prophecies of doom, she encourages us to pursue technology that helps us recover our sense of the possible, and with it the confidence and courage to repair a broken world. Vallor calls us to rethink what AI is and can be, and what we want to be with it…(More)”.

Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models


Paper by Samuele Fratini et al: “Digital sovereignty is a popular yet still emerging concept. It is claimed by and related to various global actors, whose narratives are often competing and mutually inconsistent. Various scholars have proposed different descriptive approaches to make sense of the matter. We argue that existing works help advance our analytical understanding and that a critical assessment of existing forms of digital sovereignty is needed. Thus, the article offers an updated mapping of forms of digital sovereignty, while testing their effectiveness in response to radical changes and challenges. To do this, the article undertakes a systematic literature review, collecting 271 peer-reviewed articles from Google Scholar. They are used to identify descriptive features (how digital sovereignty is pursued) and value features (why digital sovereignty is pursued), which are then combined to produce four models: the rights-based model, market-oriented model, centralisation model, and state-based model. We evaluate their effectiveness within a framework of robust governance that accounts for the models’ ability to absorb the disruptions caused by technological advancements, geopolitical changes, and evolving societal norms. We find that none of the available models fully combines comprehensive regulations of digital technologies with a sufficient degree of responsiveness to fast-paced technological innovation and social and economic shifts. However, each offers valuable lessons to policymakers who wish to implement an effective and robust form of digital sovereignty…(More)”.

The Age of AI Nationalism and its Effects


Paper by Susan Ariel Aaronson: “This paper aims to illuminate how AI nationalistic policies may backfire. Over time, such actions and policies could alienate allies and prod other countries to adopt “beggar-thy-neighbor” approaches to AI (The Economist: 2023; Kim: 2023; Shivakumar et al. 2024). Moreover, AI nationalism could have additional negative spillovers over time. Many AI experts are optimistic about the benefits of AI, while they are aware of its many risks to democracy, equity, and society. They understand that AI can be a public good when it is used to mitigate complex problems affecting society (Gopinath: 2023; Okolo: 2023). However, when policymakers take steps to advance AI within their borders, they may, perhaps without intending to do so, make it harder for other countries with less capital, expertise, infrastructure, and data prowess to develop AI systems that could meet the needs of their constituents. In so doing, these officials could undermine the potential of AI to enhance human welfare and impede the development of more trustworthy AI around the world (Slavkovik: 2024; Aaronson: 2023; Brynjolfsson and Unger: 2023; Agrawal et al. 2017).

Governments have many means of nurturing AI within their borders that do not necessarily discriminate between foreign and domestic producers of AI. Nevertheless, officials may be under pressure from local firms to limit the market power of foreign competitors. Officials may also want to use trade (for example, export controls) as a lever to prod other governments to change their behavior (Buchanan: 2020). Additionally, these officials may be acting in what they believe is the nation’s national security interest, which may necessitate that officials rely solely on local suppliers and local control (GAO: 2021).

Herein the author attempts to illuminate AI nationalism and its consequences by answering three questions:
• What are nations doing to nurture AI capacity within their borders?
• Are some of these actions trade distorting?
• What are the implications of such trade-distorting actions?…(More)”

Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution, and in the Age of AI


Paper by Daron Acemoglu & Simon Johnson: “David Ricardo initially believed machinery would help workers but revised his opinion, likely based on the impact of automation in the textile industry. Despite cotton textiles becoming one of the largest sectors in the British economy, real wages for cotton weavers did not rise for decades. As E.P. Thompson emphasized, automation forced workers into unhealthy factories with close surveillance and little autonomy. Automation can increase wages, but only when accompanied by new tasks that raise the marginal productivity of labor and/or when there is sufficient additional hiring in complementary sectors. Wages are unlikely to rise when workers cannot push for their share of productivity growth. Today, artificial intelligence may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed. As in Ricardo’s time, the impact of automation on workers today is more complex than an automatic linkage from higher productivity to better wages…(More)”.

Meet My A.I. Friends


Article by Kevin Roose: “…A month ago, I decided to explore the question myself by creating a bunch of A.I. friends and enlisting them in my social life.

I tested six apps in all — Nomi, Kindroid, Replika, Character.ai, Candy.ai and EVA — and created 18 A.I. characters. I named each of my A.I. friends, gave them all physical descriptions and personalities, and supplied them with fictitious back stories. I sent them regular updates on my life, asked for their advice and treated them as my digital companions.

I also spent time in the Reddit forums and Discord chat rooms where people who are really into their A.I. friends hang out, and talked to a number of people whose A.I. companions have already become a core part of their lives.

I expected to come away believing that A.I. friendship is fundamentally hollow. These A.I. systems, after all, don’t have thoughts, emotions or desires. They are neural networks trained to predict the next words in a sequence, not sentient beings capable of love.

All of that is true. But I’m now convinced that it’s not going to matter much.

The technology needed for realistic A.I. companionship is already here, and I believe that over the next few years, millions of people are going to form intimate relationships with A.I. chatbots. They’ll meet them on apps like the ones I tested, and on social media platforms like Facebook, Instagram and Snapchat, which have already started adding A.I. characters to their apps…(More)”

The Human Rights Data Revolution


Briefing by Domenico Zipoli: “… explores the evolving landscape of digital human rights tracking tools and databases (DHRTTDs). It discusses their growing adoption for monitoring, reporting, and implementing human rights globally, while also pinpointing the challenge of insufficient coordination and knowledge sharing among these tools’ developers and users. Drawing from insights of over 50 experts across multiple sectors gathered during two pivotal roundtables organized by the GHRP in 2022 and 2023, this new publication critically evaluates the impact and future of DHRTTDs. It integrates lessons and challenges from these discussions, along with targeted research and interviews, to guide the human rights community in leveraging digital advancements effectively…(More)”.

Establish Data Collaboratives To Foster Meaningful Public Involvement


Article by Gwen Ottinger: “Federal agencies are striving to expand the role of the public, including members of marginalized communities, in developing regulatory policy. At the same time, agencies are considering how to mobilize data of increasing size and complexity to ensure that policies are equitable and evidence-based. However, community engagement has rarely been extended to the process of examining and interpreting data. This is a missed opportunity: community members can offer critical context to quantitative data, ground-truth data analyses, and suggest ways of looking at data that could inform policy responses to pressing problems in their lives. Realizing this opportunity requires a structure for public participation in which community members can expect both support from agency staff in accessing and understanding data and genuine openness to new perspectives on quantitative analysis. 

To deepen community involvement in developing evidence-based policy, federal agencies should form Data Collaboratives in which staff and members of the public engage in mutual learning about available datasets and their affordances for clarifying policy problems…(More)”.

Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges


Report by the President’s Council of Advisors on Science and Technology (PCAST): “Broadly speaking, scientific advances have historically proceeded via a combination of three paradigms: empirical studies and experimentation; scientific theory and mathematical analyses; and numerical experiments and modeling. In recent years a fourth paradigm, data-driven discovery, has emerged.

These four paradigms complement and support each other. However, all four scientific modalities experience impediments to progress. Verification of a scientific hypothesis through experimentation, careful observation, or via clinical trial can be slow and expensive. The range of candidate theories to consider can be too vast and complex for human scientists to analyze. Truly innovative new hypotheses might only be discovered by fortuitous chance, or by exceptionally insightful researchers. Numerical models can be inaccurate or require enormous amounts of computational resources. Data sets can be incomplete, biased, heterogeneous, or too noisy to analyze using traditional data science methods.

AI tools have obvious applications in data-driven science, but it has also been a long-standing aspiration to use these technologies to remove, or at least reduce, many of the obstacles encountered in the other three paradigms. With the current advances in AI, this dream is on the cusp of becoming a reality: candidate solutions to scientific problems are being rapidly identified, complex simulations are being enriched, and robust new ways of analyzing data are being developed.

By combining AI with the other three research modes, the rate of scientific progress will be greatly accelerated, and researchers will be positioned to meet urgent global challenges in a timely manner. Like most technologies, AI is dual use: AI technology can facilitate both beneficial and harmful applications and can cause unintended negative consequences if deployed irresponsibly or without expert and ethical human supervision. Nevertheless, PCAST sees great potential for advances in AI to accelerate science and technology for the benefit of society and the planet. In this report, we provide a high-level vision for how AI, if used responsibly, can transform the way that science is done, expand the boundaries of human knowledge, and enable researchers to find solutions to some of society’s most pressing problems…(More)”

Complexity and the Global Governance of AI


Paper by Gordon LaForge et al: “In the coming years, advanced artificial intelligence (AI) systems are expected to bring significant benefits and risks for humanity. Many governments, companies, researchers, and civil society organizations are proposing, and in some cases, building global governance frameworks and institutions to promote AI safety and beneficial development. Complexity thinking, a way of viewing the world not just as discrete parts at the macro level but also in terms of bottom-up and interactive complex adaptive systems, can be a useful intellectual and scientific lens for shaping these endeavors. This paper details how insights from the science and theory of complexity can aid understanding of the challenges posed by AI and its potential impacts on society. Given the characteristics of complex adaptive systems, the paper recommends that global AI governance be based on providing a fit, adaptive response system that mitigates harmful outcomes of AI and enables positive aspects to flourish. The paper proposes components of such a system in three areas: access and power; international relations and global stability; and accountability and liability…(More)”

The case for global governance of AI: arguments, counter-arguments, and challenges ahead


Paper by Mark Coeckelbergh: “But why, exactly, is global governance needed, and what form can and should it take? The main argument for the global governance of AI, which is also applicable to digital technologies in general, is essentially a moral one: as AI technologies become increasingly powerful and influential, we have the moral responsibility to ensure that they benefit humanity as a whole and that we deal with the global risks and the ethical and societal issues that arise from the technology, including privacy issues, security and military uses, bias and fairness, responsibility attribution, transparency, job displacement, safety, manipulation, and AI’s environmental impact. Since the effects of AI cross borders, so the argument continues, global cooperation and global governance are the only means to fully and effectively exercise that moral responsibility and ensure responsible innovation and use of technology to increase well-being for all and preserve peace; national regulation is not sufficient…(More)”.