AI in public services will require empathy, accountability


Article by Yogesh Hirdaramani: “The Australian Government Department of the Prime Minister and Cabinet has released the first of its Long Term Insights Briefings, which focuses on how the Government can integrate artificial intelligence (AI) into public services while maintaining the trustworthiness of public service delivery.

Public servants need to remain accountable and transparent with their use of AI, continue to demonstrate empathy for the people they serve, use AI to better meet people’s needs, and build AI literacy amongst the Australian public, the report stated.

The report also cited a forthcoming study which found that Australian residents with a deeper understanding of AI are more likely to trust the Government’s use of AI in service delivery. However, more than half of survey respondents reported having little knowledge of AI.

Key takeaways

The report aims to supplement current policy work on how AI can be best governed in the public service to realise its benefits while maintaining public trust.

In the longer term, the Australian Government aims to use AI to deliver personalised services to its citizens, deliver services more efficiently and conveniently, and achieve a higher standard of care for its ageing population.

AI can help public servants achieve these goals through automating processes, improving service processing and response time, and providing AI-enabled interfaces which users can engage with, such as chatbots and virtual assistants.

However, AI can also lead to unfair or unintended outcomes due to bias in training data or hallucinations, the report noted.

According to the report, the trustworthy use of AI will require public servants to:

  1. Demonstrate integrity by remaining accountable for AI outcomes and transparent about AI use
  2. Demonstrate empathy by offering face-to-face services for those with greater vulnerabilities 
  3. Use AI in ways that improve service delivery for end-users
  4. Build internal skills and systems to implement AI, while educating the public on the impact of AI

The Australian Taxation Office currently uses AI to identify high-risk business activity statements to determine whether refunds can be issued or if further review is required, noted the report. Taxpayers can appeal the decision if staff decide to deny refunds…(More)”

The Open Sky


Essay by Lars Erik Schönander: “Any time you walk outside, satellites may be watching you from space. There are currently more than 8,000 active satellites in orbit, including over a thousand designed to observe the Earth.

Satellite technology has come a long way since its secretive inception during the Cold War, when a country’s ability to successfully operate satellites meant not only that it was capable of launching rockets into Earth orbit but that it had eyes in the sky. Today not only governments across the world but private enterprises too launch satellites, collect and analyze satellite imagery, and sell it to a range of customers, from government agencies to the person on the street. SpaceX’s Starlink satellites bring the Internet to places where conventional coverage is spotty or compromised. Satellite data allows the United States to track rogue ships and North Korean missile launches, while scientists track wildfires, floods, and changes in forest cover.

The industry’s biggest technical challenge, aside from acquiring the satellite imagery itself, has always been to analyze and interpret it. This is why new AI tools are set to drastically change how satellite imagery is used — and who uses it. For instance, Meta’s Segment Anything Model, a machine-learning tool designed to “cut out” discrete objects from images, is proving highly effective at identifying objects in satellite images.

But the biggest breakthrough will likely come from large language models — tools like OpenAI’s ChatGPT — that may soon allow ordinary people to query the Earth’s surface the way data scientists query databases. Achieving this goal is the ambition of companies like Planet Labs, which has launched hundreds of satellites into space and is working with Microsoft to build what it calls a “queryable Earth.” At this point, it is still easy to dismiss their early attempt as a mere toy. But as the computer scientist Paul Graham once noted, if people like a new invention that others dismiss as a toy, this is probably a good sign of its future success.

This means that satellite intelligence capabilities that were once restricted to classified government agencies, and even now belong only to those with bountiful money or expertise, are about to be open to anyone with an Internet connection…(More)”.

The Tragedy of AI Governance


Paper by Simon Chesterman: “Despite hundreds of guides, frameworks, and principles intended to make AI “ethical” or “responsible”, ever more powerful applications continue to be released ever more quickly. Safety and security teams are being downsized or sidelined to bring AI products to market. And a significant portion of AI developers apparently believe there is a real risk that their work poses an existential threat to humanity.

This contradiction between statements and action can be attributed to three factors that undermine the prospects for meaningful governance of AI. The first is the shift of power from public to private hands, not only in deployment of AI products but in fundamental research. The second is the wariness of most states about regulating the sector too aggressively, for fear that it might drive innovation elsewhere. The third is the dysfunction of global processes to manage collective action problems, epitomized by the climate crisis and now frustrating efforts to govern a technology that does not respect borders. The tragedy of AI governance is that those with the greatest leverage to regulate AI have the least interest in doing so, while those with the greatest interest have the least leverage.

Resolving these challenges requires either rethinking the incentive structures or waiting for a crisis that brings the need for regulation and coordination into sharper focus…(More)”

Enhancing the European Administrative Space (ComPAct)


European Commission: “Efficient national public administrations are critical to transform EU and national policies into reality, to implement reforms to the benefit of people and business alike, and to channel investments towards the achievement of the green and digital transition, and greater competitiveness. At the same time, national public administrations are also under increasing pressure to deal with polycrisis and with many competing priorities.

For the first time, with the ComPAct, the Commission is proposing a strategic set of actions not only to support the public administrations in the Member States to become more resilient, innovative and skilled, but also to strengthen the administrative cooperation between them, thereby helping to close existing gaps in policies and services at European level.

With the ComPAct, the Commission aims to enhance the European Administrative Space by promoting a common set of overarching principles underpinning the quality of public administration and reinforcing its support for the administrative modernisation of the Member States. The ComPAct will help Member States address the EU Skills Agenda and the actions under the European Year of Skills, deliver on the targets of the Digital Decade to have 100% of key public services accessible online by 2030, and shape the conditions for the economies and societies to deliver on the ambitious 2030 climate and energy targets. The ComPAct will also help EU enlargement countries on their path to building better public administrations…(More)”.

Learning Like a State: Statecraft in the Digital Age


Paper by Marion Fourcade and Jeff Gordon: “What does it mean to sense, see, and act like a state in the digital age? We examine the changing phenomenology, governance, and capacity of the state in the era of big data and machine learning. Our argument is threefold. First, what we call the dataist state may be less accountable than its predecessor, despite its promise of enhanced transparency and accessibility. Second, a rapid expansion of the data collection mandate is fueling a transformation in political rationality, in which data affordances increasingly drive policy strategies. Third, the turn to dataist statecraft facilitates a corporate reconstruction of the state. On the one hand, digital firms attempt to access and capitalize on data “minted” by the state. On the other hand, firms compete with the state in an effort to reinvent traditional public functions. Finally, we explore what it would mean for this dataist state to “see like a citizen” instead…(More)”.

Open-access reformers launch next bold publishing plan


Article by Layal Liverpool: “The group behind the radical open-access initiative Plan S has announced its next big plan to shake up research publishing — and this one could be bolder than the first. It wants all versions of an article and its associated peer-review reports to be published openly from the outset, without authors paying any fees, and for authors, rather than publishers, to decide when and where to first publish their work.

The group of influential funding agencies, called cOAlition S, has over the past five years already caused upheaval in the scholarly publishing world by pressuring more journals to allow immediate open-access publishing. Its new proposal, prepared by a working group of publishing specialists and released on 31 October, puts forward an even broader transformation in the dissemination of research.

It outlines a future “community-based” and “scholar-led” open-research communication system (see go.nature.com/45zyjh) in which publishers are no longer gatekeepers that reject submitted work or determine first publication dates. Instead, authors would decide when and where to publish the initial accounts of their findings, both before and after peer review. Publishers would become service providers, paid to conduct processes such as copy-editing, typesetting and handling manuscript submissions…(More)”.

Your Face Belongs to Us


Book by Kashmir Hill: “… was skeptical when she got a tip about a mysterious app called Clearview AI that claimed it could, with 99 percent accuracy, identify anyone based on just one snapshot of their face. The app could supposedly scan a face and, in just seconds, surface every detail of a person’s online life: their name, social media profiles, friends and family members, home address, and photos that they might not have even known existed. If it was everything it claimed to be, it would be the ultimate surveillance tool, and it would open the door to everything from stalking to totalitarian state control. Could it be true?

In this riveting account, Hill tracks the improbable rise of Clearview AI, helmed by Hoan Ton-That, an Australian computer engineer, and Richard Schwartz, a former Rudy Giuliani advisor, and its astounding collection of billions of faces from the internet. The company was boosted by a cast of controversial characters, including conservative provocateur Charles C. Johnson and billionaire Donald Trump backer Peter Thiel—who all seemed eager to release this society-altering technology on the public. Google and Facebook decided that a tool to identify strangers was too radical to release, but Clearview forged ahead, sharing the app with private investors, pitching it to businesses, and offering it to thousands of law enforcement agencies around the world.
      
Facial recognition technology has been quietly growing more powerful for decades. This technology has already been used in wrongful arrests in the United States. Unregulated, it could expand the reach of policing, as it has in China and Russia, to a terrifying, dystopian level.
     
Your Face Belongs to Us is a gripping true story about the rise of a technological superpower and an urgent warning that, in the absence of vigilance and government regulation, Clearview AI is one of many new technologies that challenge what Supreme Court Justice Louis Brandeis once called “the right to be let alone.”…(More)”.

Shifting policy systems – a framework for what to do and how to do it


Blog by UK Policy Lab: “Systems change is hard work, and it takes time. The reality is that no single system map or tool is enough to get you from point A to point B, from system now to system next. Over the last year, we have explored the latest in systems change theory and applied it to policymaking. In this four-part blog series, we share our reflections on the wealth of knowledge we’ve gained working on intractable issues surrounding how support is delivered for people experiencing multiple disadvantage. Along the way, we realised that we need to make new tools to support policy teams to do this deep work in the future, and to see afresh the limitations of existing mental models for change and transformation.

Policy Lab has previously written about systems mapping as a useful process for understanding the interconnected nature of factors and actors that make up policy ecosystems. Here, we share our latest experimentation on how we can generate practical ideas for long-lasting and systemic change.

This blog includes:

  • An overview of what we did on our latest project – including the policy context, systems change frameworks we experimented with, and the bespoke project framework we created;
  • Our reflections on how we carried out the project;
  • A matrix which provides a practical guide for you to use this approach in your own work…(More)”.

Future Law, Ethics, and Smart Technologies


Book edited by John-Stewart Gordon: “This interdisciplinary textbook serves as a solid introduction to the future of legal education against the background of the widespread use of AI, written by colleagues from different disciplines, e.g. law, philosophy/ethics, economics, and computer science, whose common interest concerns AI and its impact on legal and ethical issues. The book provides, first, a general overview of the effects of AI on major disciplines such as ethics, law, economics, political science, and healthcare. Secondly, it offers a comprehensive analysis of major key issues concerning law: (a) AI decision-making, (b) rights, status, and responsibility, (c) regulation and standardisation, and (d) education…(More)”.

Choosing AI’s Impact on the Future of Work 


Article by Daron Acemoglu & Simon Johnson: “…Too many commentators see the path of technology as inevitable. But the historical record is clear: technologies develop according to the vision and choices of those in positions of power. As we document in Power and Progress: Our 1,000-Year Struggle over Technology and Prosperity, when these choices are left entirely in the hands of a small elite, you should expect that group to receive most of the benefits, while everyone else bears the costs—potentially for a long time.

Rapid advances in AI threaten to eliminate many jobs, and not just those of writers and actors. Jobs with routine elements, such as in regulatory compliance or clerical work, and those that involve simple data collection, data summary, and writing tasks are likely to disappear.

But there are still two distinct paths that this AI revolution could take. One is the path of automation, based on the idea that AI’s role is to perform tasks as well as or better than people. Currently, this vision dominates in the US tech sector, where Microsoft and Google (and their ecosystems) are cranking hard to create new AI applications that can take over as many human tasks as possible.

The negative impact on people along the “just automate” path is easy to predict from prior waves of digital technologies and robotics. It was these earlier forms of automation that contributed to the decline of American manufacturing employment and the huge increase in inequality over the last four decades. If AI intensifies automation, we are very likely to get more of the same—a gap between capital and labor, more inequality between the professional class and the rest of the workers, and fewer good jobs in the economy….(More)”