Shaping the Future: Indigenous Voices Reshaping Artificial Intelligence in Latin America


Blog by Enzo Maria Le Fevre Cervini: “In a groundbreaking move toward inclusivity and respect for diversity, a comprehensive report, “Inteligencia artificial centrada en los pueblos indígenas: perspectivas desde América Latina y el Caribe”, authored by Cristina Martinez and Luz Elena Gonzalez, has been released by UNESCO, outlining the pivotal role of Indigenous perspectives in shaping the trajectory of Artificial Intelligence (AI) in Latin America. The report, a collaborative effort involving Indigenous communities, researchers, and various stakeholders, emphasizes the need for a fundamental shift in the development of AI technologies, ensuring they align with the values, needs, and priorities of Indigenous peoples.

The core theme of the report revolves around the idea that for AI to be truly respectful of human rights, it must incorporate the perspectives of Indigenous communities in Latin America, the Caribbean, and beyond. Recognizing the UNESCO Recommendation on the Ethics of Artificial Intelligence, the report highlights the urgency of developing a framework of shared responsibility among different actors, urging them to leverage their influence for the collective public interest.

While acknowledging the immense potential of AI in preserving Indigenous identities, conserving cultural heritage, and revitalizing languages, the report notes a critical gap: many such initiatives are conceived externally, prompting a call to reevaluate these projects to ensure Indigenous leadership, development, and implementation…(More)”.

A Manifesto on Enforcing Law in the Age of ‘Artificial Intelligence’


Manifesto by the Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of ‘Artificial Intelligence’: “… calls for the effective and legitimate enforcement of laws concerning AI systems. In doing so, we recognise the important and complementary role of standards and compliance practices. Whereas the first manifesto focused on the relationship between democratic law-making and technology, this second manifesto shifts focus from the design of law in the age of AI to the enforcement of law. Concretely, we offer 10 recommendations for addressing the key enforcement challenges shared across transatlantic stakeholders. We call on those who support these recommendations to sign this manifesto…(More)”.

Using AI to support people with disability in the labour market


OECD Report: “People with disability face persisting difficulties in the labour market. There are concerns that AI, if managed poorly, could further exacerbate these challenges. Yet, AI also has the potential to create more inclusive and accommodating environments and might help remove some of the barriers faced by people with disability in the labour market. Building on interviews with more than 70 stakeholders, this report explores the potential of AI to foster employment for people with disability, accounting for both the transformative possibilities of AI-powered solutions and the risks attached to the increased use of AI for people with disability. It also identifies obstacles hindering the use of AI and discusses what governments could do to avoid the risks and seize the opportunities of using AI to support people with disability in the labour market…(More)”.

AI and Democracy’s Digital Identity Crisis


Paper by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”.
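
The web-of-trust design the authors favor is easiest to see in miniature: identities attest to one another, and credibility is a property of the network rather than of any central registry. The sketch below is a toy illustration of that topology, not code from the paper; the `WebOfTrust` class, the PageRank-style scoring rule, and all names are assumptions chosen to make the idea concrete.

```python
# Toy web-of-trust graph: identities attest to one another, and credibility
# is computed from the network rather than granted by a central registry.
from collections import defaultdict

class WebOfTrust:
    def __init__(self):
        self.vouches_for = defaultdict(set)  # attester -> identities it vouches for
        self.vouched_by = defaultdict(set)   # subject  -> identities vouching for it

    def attest(self, attester: str, subject: str) -> None:
        """Record that `attester` vouches for `subject`."""
        if attester == subject:
            return  # self-attestations carry no weight
        self.vouches_for[attester].add(subject)
        self.vouched_by[subject].add(attester)

    def credibility(self, rounds: int = 25, damping: float = 0.85) -> dict:
        """PageRank-style propagation: each identity splits its current
        credibility among the identities it attests to, so an identity is
        credible when credible identities vouch for it."""
        ids = set(self.vouches_for) | set(self.vouched_by)
        n = len(ids)
        score = {i: 1.0 / n for i in ids}
        for _ in range(rounds):
            score = {
                i: (1 - damping) / n + damping * sum(
                    score[a] / len(self.vouches_for[a])
                    for a in self.vouched_by[i]
                )
                for i in ids
            }
        return score

wot = WebOfTrust()
for attester, subject in [("alice", "bob"), ("bob", "carol"),
                          ("carol", "alice"), ("alice", "carol"),
                          ("mallory", "sybil")]:  # isolated pair
    wot.attest(attester, subject)

for identity, s in sorted(wot.credibility().items(), key=lambda kv: -kv[1]):
    print(f"{identity:8s} {s:.3f}")
```

The specific scoring rule matters less than the structure: because every attestation comes from another evolving identity, an attacker must subvert many interlinked identities at once, whereas a compromised centralized registry is a single point of failure.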

Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet)


Paper by Eunice Yiu, Eliza Kosoy, and Alison Gopnik: “Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skills, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce…(More)”.

Can AI solve medical mysteries? It’s worth finding out


Article by Bina Venkataraman: “Since finding a primary care doctor these days takes longer than finding a decent used car, it’s little wonder that people turn to Google to probe what ails them. Be skeptical of anyone who claims to be above it. Though I was raised by scientists and routinely read medical journals out of curiosity, in recent months I’ve gone online to investigate causes of a lingering cough, ask how to get rid of wrist pain and look for ways to treat a bad jellyfish sting. (No, you don’t ask someone to urinate on it.)

Dabbling in self-diagnosis is becoming more robust now that people can go to chatbots powered by large language models scouring mountains of medical literature to yield answers in plain language — in multiple languages. What might an elevated inflammation marker in a blood test combined with pain in your left heel mean? The AI chatbots have some ideas. And researchers are finding that, when fed the right information, they’re often not wrong. Recently, one frustrated mother, whose son had seen 17 doctors for chronic pain, put his medical information into ChatGPT, which accurately suggested tethered cord syndrome; that suggestion led a Michigan neurosurgeon to confirm an underlying diagnosis of spina bifida that could be helped by an operation.

The promise of this trend is that patients might be able to get to the bottom of mysterious ailments and undiagnosed illnesses by generating possible causes for their doctors to consider. The peril is that people may come to rely too much on these tools, trusting them more than medical professionals, and that our AI friends will fabricate medical evidence that misleads people about, say, the safety of vaccines or the benefits of bogus treatments. A question looming over the future of medicine is how to get the best of what artificial intelligence can offer us without the worst.

It’s in the diagnosis of rare diseases — which afflict an estimated 30 million Americans and hundreds of millions of people worldwide — that AI could almost certainly make things better. “Doctors are very good at dealing with the common things,” says Isaac Kohane, chair of the department of biomedical informatics at Harvard Medical School. “But there are literally thousands of diseases that most clinicians will have never seen or even have ever heard of.”…(More)”.

Interwoven Realms: Data Governance as the Bedrock for AI Governance


Essay by Stefaan G. Verhulst and Friederike Schüür: “In a world increasingly captivated by the opportunities and challenges of artificial intelligence (AI), there has been a surge in the establishment of committees, forums, and summits dedicated to AI governance. These platforms, while crucial, often overlook a fundamental pillar: the role of data governance. As we navigate through a plethora of discussions and debates on AI, this essay seeks to illuminate the often-ignored yet indispensable link between AI governance and robust data governance.

The current focus on AI governance, with its myriad ethical, legal, and societal implications, tends to sidestep the fact that effective AI governance is, at its core, reliant on the principles and practices of data governance. This oversight has resulted in a fragmented approach, leading to a scenario where the data and AI communities operate in isolation, often unaware of the essential synergy that should exist between them.

This essay delves into the intertwined nature of these two realms. It provides six reasons why AI governance is unattainable without a comprehensive and robust framework of data governance. In addressing this intersection, the essay aims to shed light on the necessity of integrating data governance more prominently into the conversation on AI, thereby fostering a more cohesive and effective approach to the governance of this transformative technology.

Six reasons why Data Governance is the bedrock for AI Governance…(More)”.

New York City Takes Aim at AI


Article by Samuel Greengard: “As concerns over artificial intelligence (AI) grow and angst about its potential impact increases, political leaders and government agencies are taking notice. In November, U.S. President Joe Biden issued an executive order designed to build guardrails around the technology. Meanwhile, the European Union (EU) is currently developing a legal framework around responsible AI.

Yet, what is often overlooked about artificial intelligence is that it’s more likely to impact people on a local level. AI touches housing, transportation, healthcare, policing and numerous other areas relating to business and daily life. It increasingly affects citizens, government employees, and businesses in both obvious and unintended ways.

One city attempting to position itself at the vanguard of AI is New York. In October 2023, New York City announced a blueprint for developing, managing, and using the technology responsibly. The New York City Artificial Intelligence Action Plan—the first of its kind in the U.S.—is designed to help officials and the public navigate the AI space.

“It’s a fairly comprehensive plan that addresses both the use of AI within city government and the responsible use of the technology,” says Clifford S. Stein, Wai T. Chang Professor of Industrial Engineering and Operations Research and Interim Director of the Data Science Institute at Columbia University.

Adds Stefaan Verhulst, co-founder and chief research and development officer at The GovLab and Senior Fellow at the Center for Democracy and Technology (CDT), “AI localism focuses on the idea that cities are where most of the action is in regard to AI.”…(More)”.

Updates to the OECD’s definition of an AI system explained


Article by Stuart Russell: “Obtaining consensus on a definition for an AI system in any sector or group of experts has proven to be a complicated task. However, if governments are to legislate and regulate AI, they need a definition to act as a foundation. Given the global nature of AI, if all governments can agree on the same definition, it allows for interoperability across jurisdictions.

Recently, OECD member countries approved a revised version of the Organisation’s definition of an AI system. We published the definition on LinkedIn, which, to our surprise, received an unprecedented number of comments.

We want to respond to the interest our community has shown in the definition with a short explanation of the rationale behind the update and the definition itself. Later this year, we can share even more details once they are finalised.

How OECD countries updated the definition

Here are the revisions to the current text of the definition of “AI System” in detail, with additions set out in bold (marked **…**) and subtractions in strikethrough (marked ~~…~~):

An AI system is a machine-based system that ~~can~~, for ~~a given set of human-defined~~ **explicit or implicit** objectives, **infers, from the input it receives, how to generate outputs such as** ~~makes~~ predictions, **content,** recommendations, or decisions that **can** influence~~ing~~ **physical** ~~real~~ or virtual environments. **Different** AI systems ~~are designed to operate with~~ vary~~ing~~ **in their** levels of autonomy **and adaptiveness after deployment**…(More)”
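
Read as a specification, the revised definition has a handful of load-bearing elements: machine-based, explicit or implicit objectives, inference from input to outputs, and influence on physical or virtual environments, with autonomy and adaptiveness as dimensions of variation rather than gating criteria. The toy checklist below is one way to picture that structure; it is purely illustrative, not an OECD instrument, and every field and function name is an assumption.

```python
# Purely illustrative mapping of the revised definition's elements to a
# checklist (not an OECD instrument; all names are assumptions).
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool            # "a machine-based system"
    has_objectives: bool           # explicit or implicit objectives
    infers_from_input: bool        # infers, from input, how to generate outputs
    output_types: list             # predictions, content, recommendations, decisions
    influences_environment: bool   # physical or virtual environments
    autonomy: str = "unknown"      # systems *vary* in autonomy...
    adapts_after_deployment: bool = False  # ...and adaptiveness after deployment

def falls_under_definition(p: SystemProfile) -> bool:
    """All definitional elements must hold; autonomy and adaptiveness describe
    variation among AI systems rather than acting as gating criteria."""
    return (p.machine_based and p.has_objectives and p.infers_from_input
            and bool(p.output_types) and p.influences_environment)

spam_filter = SystemProfile(
    machine_based=True,
    has_objectives=True,            # implicit objective: minimize spam shown
    infers_from_input=True,         # learns from message features
    output_types=["predictions"],
    influences_environment=True,    # a virtual environment: the inbox
)
print(falls_under_definition(spam_filter))  # True
```

Note how the move from “human-defined” to “explicit or implicit” objectives broadens coverage to systems whose goals are learned rather than written down anywhere.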

Hypotheses devised by AI could find ‘blind spots’ in research


Article by Matthew Hutson: “One approach is to use AI to help scientists brainstorm. This is a task that large language models — AI systems trained on large amounts of text to produce new text — are well suited for, says Yolanda Gil, a computer scientist at the University of Southern California in Los Angeles who has worked on AI scientists. Language models can produce inaccurate information and present it as real, but this ‘hallucination’ isn’t necessarily bad, says Sendhil Mullainathan. It signifies, he says, “‘here’s a kind of thing that looks true’. That’s exactly what a hypothesis is.”

Blind spots are where AI might prove most useful. James Evans, a sociologist at the University of Chicago, has pushed AI to make ‘alien’ hypotheses — those that a human would be unlikely to make. In a paper published earlier this year in Nature Human Behaviour, he and his colleague Jamshid Sourati built knowledge graphs containing not just materials and properties, but also researchers. Evans and Sourati’s algorithm traversed these networks, looking for hidden shortcuts between materials and properties. The aim was to maximize the plausibility of AI-devised hypotheses being true while minimizing the chances that researchers would hit on them naturally. For instance, if scientists who are studying a particular drug are only distantly connected to those studying a disease that it might cure, then the drug’s potential would ordinarily take much longer to discover.
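
One way to picture the approach, as a loose sketch rather than the authors’ actual algorithm or data: put materials, properties, and researchers in one graph, then score candidate links by combining a short path between the entities (plausible) with a long path between the researchers who study them (unlikely to be found soon). Every node name and the scoring rule below are illustrative assumptions.

```python
# Loose sketch of 'alien hypothesis' search over a mixed knowledge graph
# (inspired by the description above, not Sourati and Evans's code or data).
from collections import defaultdict, deque

def hops(graph, src, dst):
    """Breadth-first search for the hop distance between two nodes."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return float("inf")

# Toy graph mixing a drug, a disease, and the researchers studying each.
edges = [
    ("drug:A", "protein:X"), ("protein:X", "disease:D"),  # material-level bridge
    ("researcher:r1", "drug:A"), ("researcher:r2", "disease:D"),
    ("researcher:r1", "researcher:r3"), ("researcher:r3", "researcher:r4"),
    ("researcher:r4", "researcher:r2"),  # the two communities sit far apart
]
graph = defaultdict(set)
for u, v in edges:
    graph[u].add(v)
    graph[v].add(u)

def alienness(drug, disease):
    """Favor hypotheses that are plausible (short path between the entities)
    yet unlikely to be found soon (long path between their researchers)."""
    plausibility_cost = hops(graph, drug, disease)
    social_distance = min(
        hops(graph, r1, r2)
        for r1 in graph[drug] if r1.startswith("researcher:")
        for r2 in graph[disease] if r2.startswith("researcher:")
    )
    return social_distance - plausibility_cost

print(alienness("drug:A", "disease:D"))  # higher = plausible but socially distant
```

The tuning Evans describes then amounts to reweighting the two terms: emphasize plausibility and the predictions get safer but less surprising; emphasize the social distance and they get more ‘alien’.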

When Evans and Sourati fed their AI data published up to 2001, they found that about 30% of its predictions about drug repurposing and the electrical properties of materials had been uncovered by researchers roughly six to ten years later. The system can be tuned to make predictions that are more likely to be correct but also less of a leap, on the basis of concurrent findings and collaborations, Evans says. But “if we’re predicting what people are going to do next year, that just feels like a scoop machine”, he adds. He’s more interested in how the technology can take science in entirely new directions…(More)”