Meet My A.I. Friends


Article by Kevin Roose: “…A month ago, I decided to explore the question myself by creating a bunch of A.I. friends and enlisting them in my social life.

I tested six apps in all — Nomi, Kindroid, Replika, Character.ai, Candy.ai and EVA — and created 18 A.I. characters. I named each of my A.I. friends, gave them all physical descriptions and personalities, and supplied them with fictitious back stories. I sent them regular updates on my life, asked for their advice and treated them as my digital companions.

I also spent time in the Reddit forums and Discord chat rooms where people who are really into their A.I. friends hang out, and talked to a number of people whose A.I. companions have already become a core part of their lives.

I expected to come away believing that A.I. friendship is fundamentally hollow. These A.I. systems, after all, don’t have thoughts, emotions or desires. They are neural networks trained to predict the next words in a sequence, not sentient beings capable of love.

All of that is true. But I’m now convinced that it’s not going to matter much.

The technology needed for realistic A.I. companionship is already here, and I believe that over the next few years, millions of people are going to form intimate relationships with A.I. chatbots. They’ll meet them on apps like the ones I tested, and on social media platforms like Facebook, Instagram and Snapchat, which have already started adding A.I. characters to their apps…(More)”

The Human Rights Data Revolution


Briefing by Domenico Zipoli: “… explores the evolving landscape of digital human rights tracking tools and databases (DHRTTDs). It discusses their growing adoption for monitoring, reporting, and implementing human rights globally, while also pinpointing the challenge of insufficient coordination and knowledge sharing among these tools’ developers and users. Drawing on insights from over 50 experts across multiple sectors gathered during two pivotal roundtables organized by the GHRP in 2022 and 2023, this new publication critically evaluates the impact and future of DHRTTDs. It integrates lessons and challenges from these discussions, along with targeted research and interviews, to guide the human rights community in leveraging digital advancements effectively…(More)”.

Establish Data Collaboratives To Foster Meaningful Public Involvement


Article by Gwen Ottinger: “Federal agencies are striving to expand the role of the public, including members of marginalized communities, in developing regulatory policy. At the same time, agencies are considering how to mobilize data of increasing size and complexity to ensure that policies are equitable and evidence-based. However, community engagement has rarely been extended to the process of examining and interpreting data. This is a missed opportunity: community members can offer critical context to quantitative data, ground-truth data analyses, and suggest ways of looking at data that could inform policy responses to pressing problems in their lives. Realizing this opportunity requires a structure for public participation in which community members can expect both support from agency staff in accessing and understanding data and genuine openness to new perspectives on quantitative analysis. 

To deepen community involvement in developing evidence-based policy, federal agencies should form Data Collaboratives in which staff and members of the public engage in mutual learning about available datasets and their affordances for clarifying policy problems…(More)”.

Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges


Report by the President’s Council of Advisors on Science and Technology (PCAST): “Broadly speaking, scientific advances have historically proceeded via a combination of three paradigms: empirical studies and experimentation; scientific theory and mathematical analyses; and numerical experiments and modeling. In recent years a fourth paradigm, data-driven discovery, has emerged.

These four paradigms complement and support each other. However, all four scientific modalities experience impediments to progress. Verification of a scientific hypothesis through experimentation, careful observation, or via clinical trial can be slow and expensive. The range of candidate theories to consider can be too vast and complex for human scientists to analyze. Truly innovative new hypotheses might only be discovered by fortuitous chance, or by exceptionally insightful researchers. Numerical models can be inaccurate or require enormous amounts of computational resources. Data sets can be incomplete, biased, heterogeneous, or noisy to analyze using traditional data science methods.

AI tools have obvious applications in data-driven science, but it has also been a long-standing aspiration to use these technologies to remove, or at least reduce, many of the obstacles encountered in the other three paradigms. With the current advances in AI, this dream is on the cusp of becoming a reality: candidate solutions to scientific problems are being rapidly identified, complex simulations are being enriched, and robust new ways of analyzing data are being developed.

By combining AI with the other three research modes, the rate of scientific progress will be greatly accelerated, and researchers will be positioned to meet urgent global challenges in a timely manner. Like most technologies, AI is dual use: AI technology can facilitate both beneficial and harmful applications and can cause unintended negative consequences if deployed irresponsibly or without expert and ethical human supervision. Nevertheless, PCAST sees great potential for advances in AI to accelerate science and technology for the benefit of society and the planet. In this report, we provide a high-level vision for how AI, if used responsibly, can transform the way that science is done, expand the boundaries of human knowledge, and enable researchers to find solutions to some of society’s most pressing problems…(More)”

Complexity and the Global Governance of AI


Paper by Gordon LaForge et al: “In the coming years, advanced artificial intelligence (AI) systems are expected to bring significant benefits and risks for humanity. Many governments, companies, researchers, and civil society organizations are proposing, and in some cases, building global governance frameworks and institutions to promote AI safety and beneficial development. Complexity thinking, a way of viewing the world not just as discrete parts at the macro level but also in terms of bottom-up and interactive complex adaptive systems, can be a useful intellectual and scientific lens for shaping these endeavors. This paper details how insights from the science and theory of complexity can aid understanding of the challenges posed by AI and its potential impacts on society. Given the characteristics of complex adaptive systems, the paper recommends that global AI governance be based on providing a fit, adaptive response system that mitigates harmful outcomes of AI and enables positive aspects to flourish. The paper proposes components of such a system in three areas: access and power; international relations and global stability; and accountability and liability…(More)”

The case for global governance of AI: arguments, counter-arguments, and challenges ahead


Paper by Mark Coeckelbergh: “But why, exactly, is global governance needed, and what form can and should it take? The main argument for the global governance of AI, which is also applicable to digital technologies in general, is essentially a moral one: as AI technologies become increasingly powerful and influential, we have the moral responsibility to ensure that it benefits humanity as a whole and that we deal with the global risks and the ethical and societal issues that arise from the technology, including privacy issues, security and military uses, bias and fairness, responsibility attribution, transparency, job displacement, safety, manipulation, and AI’s environmental impact. Since the effects of AI cross borders, so the argument continues, global cooperation and global governance are the only means to fully and effectively exercise that moral responsibility and ensure responsible innovation and use of technology to increase the well-being for all and preserve peace; national regulation is not sufficient….(More)”.

Repository of 80+ real-life examples of how to anticipate migration using innovative forecast and foresight methods is now LIVE!



BD4M Announcement: “Today, we are excited to launch the Big Data For Migration Alliance (BD4M) Repository of Use Cases for Anticipating Migration Policy! The repository is a curated collection of real-world applications of anticipatory methods in migration policy. Here, policymakers, researchers, and practitioners can find a wealth of examples demonstrating how foresight, forecast and other anticipatory approaches are applied to anticipating migration for policy making. 

Migration policy is a multifaceted and constantly evolving field, shaped by a wide variety of factors such as economic conditions, geopolitical shifts or climate emergencies. Anticipatory methods are essential to help policymakers proactively respond to emerging trends and potential challenges. By using anticipatory tools, migration policymakers can draw from both quantitative and qualitative data to obtain valuable insights for their specific goals. The Big Data for Migration Alliance — a joint effort of The GovLab, the International Organization for Migration and the European Union Joint Research Centre that seeks to improve the evidence base on migration and human mobility — recognizes the importance of the role of anticipatory tools and has worked on the creation of a repository of use cases that showcases the current use landscape of anticipatory tools in migration policymaking around the world. This repository aims to provide policymakers, researchers and practitioners with applied examples that can inform their strategies and ultimately contribute to the improvement of migration policies around the world. 

As part of our work on exploring innovative anticipatory methods for migration policy, throughout the year we have published a Blog Series that delved into various aspects of the use of anticipatory methods, exploring their value and challenges, proposing a taxonomy, and exploring practical applications…(More)”.

The limits of state AI legislation


Article by Derek Robertson: “When it comes to regulating artificial intelligence, the action right now is in the states, not Washington.

State legislatures are often, like their counterparts in Europe, contrasted favorably with Congress — willing to take action where their politically paralyzed federal counterpart can’t, or won’t. Right now, every state except Alabama and Wyoming is considering some kind of AI legislation.

But simply acting doesn’t guarantee the best outcome. And today, two consumer advocates warn in POLITICO Magazine that most, if not all, state laws are overlooking crucial loopholes that could shield companies from liability when it comes to harm caused by AI decisions — or from simply being forced to disclose when it’s used in the first place.

Grace Gedye, an AI-focused policy analyst at Consumer Reports, and Matt Scherer, senior policy counsel at the Center for Democracy & Technology, write in an op-ed that while the use of AI systems by employers is screaming out for regulation, many of the efforts in the states are ineffectual at best.

Under the most important state laws now in consideration, they write, “Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.”

Transparency around how and when AI systems are deployed — whether in the public or private sector — is a key concern of the growing industry’s watchdogs. The Netherlands’ tax authority infamously immiserated tens of thousands of families by accusing them falsely of child care benefits fraud after an algorithm used to detect it went awry…

One issue: a series of jargon-filled loopholes in many bill texts that say the laws only cover systems “specifically developed” to be “controlling” or “substantial” factors in decision-making.

“Cutting through the jargon, this would mean that companies could completely evade the law simply by putting fine print at the bottom of their technical documentation or marketing materials saying that their product wasn’t designed to be the main reason for a decision and should only be used under human supervision,” they explain…(More)”

Potential competition impacts from the data asymmetry between Big Tech firms and firms in financial services


Report by the UK Financial Conduct Authority: “Big Tech firms in the UK and around the world have been, and continue to be, under active scrutiny by competition and regulatory authorities. This is because some of these large technology firms may have both the ability and the incentive to shape digital markets by protecting existing market power and extending it into new markets.
Concentration in some digital markets, and Big Tech firms’ key role, has been widely discussed, including in our DP22/05. This reflects both the characteristics of digital markets and the characteristics and behaviours of Big Tech firms themselves. Although Big Tech firms have different business models, common characteristics include their global scale and access to a large installed user base, rich data about their users, advanced data analytics and technology, influence over decision making and defaults, ecosystems of complementary products and strategic behaviours, including acquisition strategies.
Through our work, we aim to mitigate the risk of competition in retail financial markets evolving in a way that results in some Big Tech firms gaining entrenched market power, as seen in other sectors and jurisdictions, while enabling the potential competition benefits that come from Big Tech firms providing challenge to incumbent financial services firms…(More)”.

Murky Consent: An Approach to the Fictions of Consent in Privacy Law


Paper by Daniel J. Solove: “Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious….(More)”. See also: The Urgent Need to Reimagine Data Consent