The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis


Article by David Gilbert: “…The application of artificial intelligence technologies to conflict situations has been around since at least 1996, with machine learning being used to predict where conflicts may occur. The use of AI in this area has expanded in the intervening years, being used to improve logistics, training, and other aspects of peacekeeping missions. Lane and Shults believe they could use artificial intelligence to dig deeper and find the root causes of conflicts.

Their idea for an AI program that models the belief systems that drive human behavior first began when Lane moved to Northern Ireland a decade ago to study whether computational modeling and cognitive science could be used to understand issues around religious violence.

In Belfast, Lane figured out that by modeling aspects of identity and social cohesion, and identifying the factors that make people motivated to fight and die for a particular cause, he could accurately predict what was going to happen next.

“We set out to try and come up with something that could help us better understand what it is about human nature that sometimes results in conflict, and then how can we use that tool to try and get a better handle or understanding on these deeper, more psychological issues at really large scales,” Lane says.

The result of their work was a study published in 2018 in The Journal for Artificial Societies and Social Simulation, which found that people are typically peaceful but will engage in violence when an outside group threatens the core principles of their religious identity.

A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.

The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDELT, a project that monitors “the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research…(More)”.
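
The article stops short of reproducing the model, but its headline dynamic (agents who stay peaceful until an outside group appears to threaten core identity principles) can be sketched as a toy agent-based simulation. Everything below is an illustrative assumption rather than the published model: the decay rule, the 0.5 mobilization threshold, and the event rate standing in for GDELT-derived threat signals.

```python
import random

class Agent:
    """One member of a modeled community (illustrative, not the JASSS model)."""

    def __init__(self, rng: random.Random):
        # How central group identity is to this agent (0..1).
        self.identity_salience = rng.random()
        # Perceived outgroup threat, accumulated over time.
        self.perceived_threat = 0.0
        self.mobilized = False

    def step(self, threat_events: float) -> None:
        # Threat perception decays, then rises with observed "threat events"
        # (a stand-in for signals distilled from news data such as GDELT).
        self.perceived_threat = 0.9 * self.perceived_threat + 0.1 * threat_events
        # The study's headline finding rendered as a rule: agents stay
        # peaceful unless core identity principles appear threatened.
        self.mobilized = self.perceived_threat * self.identity_salience > 0.5

def run(n_agents: int = 1000, steps: int = 100,
        event_rate: float = 1.0, seed: int = 42) -> float:
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    for _ in range(steps):
        # Intensity of threatening events this tick (noisy around event_rate).
        events = max(0.0, rng.gauss(event_rate, 0.5))
        for agent in agents:
            agent.step(events)
    return sum(a.mobilized for a in agents) / n_agents

if __name__ == "__main__":
    for rate in (0.5, 1.0, 2.0):
        print(f"event rate {rate}: {run(event_rate=rate):.1%} mobilized")
```

Raising the event rate past the point where accumulated threat outweighs the threshold flips much of the population from peaceful to mobilized, the kind of tipping behavior the study associates with outgroup threat.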

AI Globalism and AI Localism: Governing AI at the Local Level for Global Benefit


Article by Stefaan G. Verhulst: “With the UK Summit in full swing, 2023 will likely be seen as a pivotal year for AI governance, with governments promoting a global governance model: AI Globalism. For it to be relevant, flexible, and effective, any global approach will need to be informed by and complemented with local experimentation and leadership, ensuring local responsiveness: AI Localism.

Even as consumers and businesses extend their use of AI (generative AI in particular), governments are also taking notice. Determined not to be caught on the back foot, as they were with social media, regulators and policymakers around the world are exploring frameworks and institutional structures that could help maximize the benefits while minimizing the potential harms of AI. This week, the UK is hosting a high-profile AI Safety Summit, attended by political and business leaders from around the world, including Kamala Harris and Elon Musk. Similarly, US President Biden recently signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which he hailed as a “landmark executive order” to ensure “safety, security, trust, openness, and American leadership.”


Amid the various policy and regulatory proposals swirling around, there has been a notable emphasis on what we might call AI globalism. The UK summit has explicitly endorsed a global approach to AI safety, with coordination between the US, EU, and China core to its vision of more responsible and safe AI. This global perspective follows similar recent calls for “an AI equivalent of the IPCC” or the International Atomic Energy Agency (IAEA). Notably, such calls are emerging both from the private sector and from civil society leaders.

In many ways, a global approach makes sense. Like most technology, AI is transnational in scope, and its governance will require cross-jurisdictional coordination and harmonization. At the same time, we believe that AI globalism should be accompanied by a recognition that some of the most innovative AI initiatives are taking place in cities and municipalities and being regulated at those levels too.

We call this AI Localism. In what follows, we outline a vision of a more decentralized approach to AI governance, one that would allow cities and local jurisdictions — including states — to develop and iterate governance frameworks tailored to their specific needs and challenges. This decentralized, local approach would need to take place alongside global efforts. The two would not be mutually exclusive but instead necessarily complementary…(More)”.

AI in public services will require empathy, accountability


Article by Yogesh Hirdaramani: “The Australian Government Department of the Prime Minister and Cabinet has released the first of its Long Term Insights Briefings, which focuses on how the Government can integrate artificial intelligence (AI) into public services while maintaining the trustworthiness of public service delivery.

Public servants need to remain accountable and transparent with their use of AI, continue to demonstrate empathy for the people they serve, use AI to better meet people’s needs, and build AI literacy amongst the Australian public, the report stated.

The report also cited a forthcoming study that found that Australian residents with a deeper understanding of AI are more likely to trust the Government’s use of AI in service delivery. However, more than half of survey respondents reported having little knowledge of AI.

Key takeaways

The report aims to supplement current policy work on how AI can be best governed in the public service to realise its benefits while maintaining public trust.

In the longer term, the Australian Government aims to use AI to deliver personalised services to its citizens, deliver services more efficiently and conveniently, and achieve a higher standard of care for its ageing population.

AI can help public servants achieve these goals through automating processes, improving service processing and response time, and providing AI-enabled interfaces which users can engage with, such as chatbots and virtual assistants.

However, AI can also lead to unfair or unintended outcomes due to bias in training data or hallucinations, the report noted.

According to the report, the trustworthy use of AI will require public servants to:

  1. Demonstrate integrity by remaining accountable for AI outcomes and transparent about AI use
  2. Demonstrate empathy by offering face-to-face services for those with greater vulnerabilities 
  3. Use AI in ways that improve service delivery for end-users
  4. Build internal skills and systems to implement AI, while educating the public on the impact of AI

The Australian Taxation Office currently uses AI to identify high-risk business activity statements to determine whether refunds can be issued or if further review is required, noted the report. Taxpayers can appeal the decision if staff decide to deny refunds…(More)”
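
The pattern the report attributes to the ATO maps onto a simple triage loop: a model scores risk, a human officer makes and owns the final decision, and that decision remains appealable. Below is a minimal sketch of that human-in-the-loop pattern; the features, scoring function, and 0.6 threshold are hypothetical, not the ATO's actual system.

```python
from dataclasses import dataclass

@dataclass
class ActivityStatement:
    statement_id: str
    claimed_refund: float
    prior_amendments: int   # hypothetical feature
    new_registrant: bool    # hypothetical feature

def risk_score(s: ActivityStatement) -> float:
    """Toy stand-in for a trained model's probability of non-compliance."""
    score = min(s.claimed_refund / 100_000, 1.0) * 0.5
    score += min(s.prior_amendments / 5, 1.0) * 0.3
    score += 0.2 if s.new_registrant else 0.0
    return score

def triage(s: ActivityStatement, threshold: float = 0.6) -> str:
    """Route the statement; the model never denies a refund on its own."""
    if risk_score(s) >= threshold:
        return "hold_for_officer_review"  # human decides, outcome appealable
    return "issue_refund"

print(triage(ActivityStatement("BAS-001", 250_000, 3, True)))   # held for review
print(triage(ActivityStatement("BAS-002", 4_000, 0, False)))    # refund issued
```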

The Tragedy of AI Governance


Paper by Simon Chesterman: “Despite hundreds of guides, frameworks, and principles intended to make AI “ethical” or “responsible”, ever more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.

This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in deployment of AI products but in fundamental research. The second is the wariness of most states about regulating the sector too aggressively, for fear that it might drive innovation elsewhere. The third is the dysfunction of global processes to manage collective action problems, epitomized by the climate crisis and now frustrating efforts to govern a technology that does not respect borders. The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.

Resolving these challenges requires either rethinking the incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus…(More)”

Your Face Belongs to Us


Book by Kashmir Hill: “…[Hill] was skeptical when she got a tip about a mysterious app called Clearview AI that claimed it could, with 99 percent accuracy, identify anyone based on just one snapshot of their face. The app could supposedly scan a face and, in just seconds, surface every detail of a person’s online life: their name, social media profiles, friends and family members, home address, and photos that they might not have even known existed. If it was everything it claimed to be, it would be the ultimate surveillance tool, and it would open the door to everything from stalking to totalitarian state control. Could it be true?

In this riveting account, Hill tracks the improbable rise of Clearview AI, helmed by Hoan Ton-That, an Australian computer engineer, and Richard Schwartz, a former Rudy Giuliani advisor, and its astounding collection of billions of faces from the internet. The company was boosted by a cast of controversial characters, including conservative provocateur Charles C. Johnson and billionaire Donald Trump backer Peter Thiel—who all seemed eager to release this society-altering technology on the public. Google and Facebook decided that a tool to identify strangers was too radical to release, but Clearview forged ahead, sharing the app with private investors, pitching it to businesses, and offering it to thousands of law enforcement agencies around the world.
      
Facial recognition technology has been quietly growing more powerful for decades. This technology has already been used in wrongful arrests in the United States. Unregulated, it could expand the reach of policing, as it has in China and Russia, to a terrifying, dystopian level.
     
Your Face Belongs to Us is a gripping true story about the rise of a technological superpower and an urgent warning that, in the absence of vigilance and government regulation, Clearview AI is one of many new technologies that challenge what Supreme Court Justice Louis Brandeis once called “the right to be let alone.”…(More)”.
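
Mechanically, face-search engines of the kind described here generally map each photo to a numeric embedding vector and answer a query with a nearest-neighbor search over an index of scraped images. The sketch below is a generic illustration of that lookup, with fabricated vectors and URLs; it makes no claim about Clearview's actual architecture.

```python
import math

def cosine(u, v):
    """Similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# In a real system these vectors come from a neural network applied to photos;
# here they are made up, as are the URLs.
index = {
    "https://example.com/profile/alice": [0.90, 0.10, 0.30],
    "https://example.com/forum/bob":     [0.20, 0.80, 0.50],
    "https://example.com/news/carol":    [0.85, 0.15, 0.35],
}

def search(query_embedding, k=2):
    """Return the k scraped photos whose embeddings best match the query."""
    ranked = sorted(index.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return ranked[:k]

# One snapshot of a face -> its embedding -> the closest scraped photos,
# each of which links back to the page it was collected from.
query = [0.88, 0.12, 0.32]
for url, vec in search(query):
    print(url, round(cosine(query, vec), 3))
```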

Choosing AI’s Impact on the Future of Work 


Article by Daron Acemoglu & Simon Johnson: “…Too many commentators see the path of technology as inevitable. But the historical record is clear: technologies develop according to the vision and choices of those in positions of power. As we document in Power and Progress: Our 1,000-Year Struggle over Technology and Prosperity, when these choices are left entirely in the hands of a small elite, you should expect that group to receive most of the benefits, while everyone else bears the costs—potentially for a long time.

Rapid advances in AI threaten to eliminate many jobs, and not just those of writers and actors. Jobs with routine elements, such as in regulatory compliance or clerical work, and those that involve simple data collection, data summary, and writing tasks are likely to disappear.

But there are still two distinct paths that this AI revolution could take. One is the path of automation, based on the idea that AI’s role is to perform tasks as well as or better than people. Currently, this vision dominates in the US tech sector, where Microsoft and Google (and their ecosystems) are cranking hard to create new AI applications that can take over as many human tasks as possible.

The negative impact on people along the “just automate” path is easy to predict from prior waves of digital technologies and robotics. It was these earlier forms of automation that contributed to the decline of American manufacturing employment and the huge increase in inequality over the last four decades. If AI intensifies automation, we are very likely to get more of the same—a gap between capital and labor, more inequality between the professional class and the rest of the workers, and fewer good jobs in the economy….(More)”

Automating Empathy 


Open Access Book by Andrew McStay: “We live in a world where artificial intelligence and the intensive use of personal data have become normalized. Companies across the world are developing and launching technologies to infer and interact with emotions, mental states, and human conditions. However, the methods and means of mediating information about people and their emotional states are incomplete and problematic.

Automating Empathy offers a critical exploration of technologies that sense intimate dimensions of human life and the modern ethical questions raised by attempts to perform and simulate empathy. It traces the ascendance of empathic technologies from their origins in physiognomy and pathognomy to the modern day and explores technologies in nations with non-Western ethical histories and approaches to emotion, such as Japan. The book examines applications of empathic technologies across sectors such as education, policing, and transportation, and considers key questions of everyday use such as the integration of human-state sensing in mixed reality, the use of neurotechnologies, and the moral limits of using data gleaned through automated empathy. Ultimately, Automating Empathy outlines the key principles necessary to usher in a future where automated empathy can serve and do good…(More)”

Data Equity: Foundational Concepts for Generative AI


WEF Report: “This briefing paper focuses on data equity within foundation models, both in terms of the impact of Generative AI (genAI) on society and on the further development of genAI tools.

GenAI promises immense potential to drive digital and social innovation, such as improving efficiency, enhancing creativity and augmenting existing data. GenAI has the potential to democratize access to and usage of technologies. However, left unchecked, it could deepen inequities. With the advent of genAI significantly increasing the rate at which AI is deployed and developed, exploring frameworks for data equity is more urgent than ever.

The goals of the briefing paper are threefold: to establish a shared vocabulary to facilitate collaboration and dialogue; to scope initial concerns to establish a framework for inquiry on which stakeholders can focus; and to shape future development of promising technologies.

The paper represents a first step in exploring and promoting data equity in the context of genAI. The proposed definitions, framework and recommendations are intended to proactively shape the development of promising genAI technologies…(More)”.

Artificial intelligence in government: Concepts, standards, and a unified framework


Paper by Vincent J. Straub, Deborah Morgan, Jonathan Bright, Helen Margetts: “Recent advances in artificial intelligence (AI), especially in generative language modelling, hold the promise of transforming government. Given the advanced capabilities of new AI systems, it is critical that these are embedded using standard operational procedures, clear epistemic criteria, and behave in alignment with the normative expectations of society. Scholars in multiple domains have subsequently begun to conceptualize the different forms that AI applications may take, highlighting both their potential benefits and pitfalls. However, the literature remains fragmented, with researchers in social science disciplines like public administration and political science, and the fast-moving fields of AI, ML, and robotics, all developing concepts in relative isolation. Although there are calls to formalize the emerging study of AI in government, a balanced account that captures the full depth of theoretical perspectives needed to understand the consequences of embedding AI into a public sector context is lacking. Here, we unify efforts across social and technical disciplines by first conducting an integrative literature review to identify and cluster 69 key terms that frequently co-occur in the multidisciplinary study of AI. We then build on the results of this bibliometric analysis to propose three new multifaceted concepts for understanding and analysing AI-based systems for government (AI-GOV) in a more unified way: (1) operational fitness, (2) epistemic alignment, and (3) normative divergence. Finally, we put these concepts to work by using them as dimensions in a conceptual typology of AI-GOV and connecting each with emerging AI technical measurement standards to encourage operationalization, foster cross-disciplinary dialogue, and stimulate debate among those aiming to rethink government with AI…(More)”.
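
For readers curious about the bibliometric step, term clustering by co-occurrence can be sketched in a few lines: count how often key terms appear together across documents, then group terms whose co-occurrence profiles are similar. The toy corpus, cosine similarity, and greedy single-link rule with a 0.4 threshold below are illustrative assumptions, not the authors' exact method.

```python
from collections import Counter
from itertools import combinations
import math

# Each entry stands in for the key terms extracted from one paper.
docs = [
    {"machine learning", "public administration", "accountability"},
    {"machine learning", "robotics", "automation"},
    {"public administration", "accountability", "transparency"},
    {"robotics", "automation", "safety"},
    {"transparency", "accountability", "governance"},
]

# Count how often each pair of terms appears in the same document.
terms = sorted(set().union(*docs))
cooc = Counter()
for d in docs:
    for a, b in combinations(sorted(d), 2):
        cooc[(a, b)] += 1

def vector(t):
    """A term's co-occurrence profile across all terms."""
    return [cooc.get(tuple(sorted((t, u))), 0) for u in terms]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Greedy single-link clustering: a term joins a cluster if it is similar
# enough to any existing member (the 0.4 threshold is an arbitrary choice).
clusters = []
for t in terms:
    for c in clusters:
        if any(cosine(vector(t), vector(m)) > 0.4 for m in c):
            c.append(t)
            break
    else:
        clusters.append([t])

for c in clusters:
    print(c)
```

On this toy corpus the rule recovers two intuitive groups, a governance-flavored cluster and a technology-flavored one, which is the kind of structure the paper's 69-term analysis surfaces at scale.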

Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence


The White House: “Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI…(More)”.