Assessing and Suing an Algorithm


Report by Elina Treyger, Jirka Taylor, Daniel Kim, and Maynard A. Holliday: “Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?

Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to become successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report shares the results of a survey experiment designed to contribute to researchers’ understanding of how U.S. public perceptions are evolving in these respects in one high-stakes setting: decisions related to employment and unemployment…(More)”.

AI and Democracy’s Digital Identity Crisis


Essay by Shrey Jain, Connor Spelliscy, Samuel Vance-Law and Scott Moore: “AI-enabled tools have become sophisticated enough to allow a small number of individuals to run disinformation campaigns of an unprecedented scale. Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easy to identify and potentially hinder. By understanding how identity attestations are positioned across the spectrum of decentralization, we can gain a better understanding of the costs and benefits of various attestations. In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based, and include examples such as e-Estonia, China’s social credit system, Worldcoin, OAuth, X (formerly Twitter), Gitcoin Passport, and EAS. We believe that the most resilient systems create an identity that evolves and is connected to a network of similarly evolving identities that verify one another. In this type of system, each entity contributes its respective credibility to the attestation process, creating a larger, more comprehensive set of attestations. We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors. However, governments will likely attempt to mitigate these risks by implementing centralized identity authentication systems; these centralized systems could themselves pose risks to the democratic processes they are built to defend. We therefore recommend that policymakers support the development of standards-setting organizations for identity, provide legal clarity for builders of decentralized tooling, and fund research critical to effective identity authentication systems…(More)”
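
To make the web-of-trust idea concrete, a toy sketch is shown below: identities attest to one another in a graph, and each identity’s credibility is built up from its attesters’ (damped) credibility, so trust accrues from a network of mutually verifying identities rather than from a single central issuer. The `AttestationGraph` class, scoring rule, and damping factor are illustrative assumptions only, not how Gitcoin Passport, EAS, Worldcoin, or any other system named in the essay actually works.

```python
from collections import defaultdict

class AttestationGraph:
    """Toy web-of-trust model: identities attest to one another, and an
    identity's credibility is the damped sum of its attesters' credibility."""

    def __init__(self):
        self.attesters = defaultdict(set)  # subject -> identities attesting to it

    def attest(self, attester: str, subject: str) -> None:
        if attester != subject:            # ignore self-attestations
            self.attesters[subject].add(attester)

    def credibility(self, subject: str, depth: int = 2, damping: float = 0.5) -> float:
        """Each direct attestation counts for 1, plus a damped contribution
        from the attester's own credibility, up to `depth` hops away."""
        if depth == 0:
            return 0.0
        return sum(
            1.0 + damping * self.credibility(a, depth - 1, damping)
            for a in self.attesters.get(subject, ())
        )

g = AttestationGraph()
g.attest("alice", "bob")
g.attest("carol", "bob")
g.attest("dave", "alice")
print(g.credibility("bob"))  # 2.5: bob also benefits from dave's attestation of alice
```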

The Bletchley Declaration


Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023: “In the context of our cooperation, and to inform action at the national and international levels, our agenda for addressing frontier AI risk will focus on:

  • identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

In furtherance of this agenda, we resolve to support an internationally inclusive network of scientific research on frontier AI safety that encompasses and complements existing and new multilateral, plurilateral and bilateral collaboration, including through existing international fora and other relevant initiatives, to facilitate the provision of the best science available for policy making and the public good.

In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all. We look forward to meeting again in 2024…(More)”.

Enterprise Value and the Value of Data


Paper by Dan Ciuriak: “Data is often said to be the most valuable commodity of our age. It is a curiosity, therefore, that it remains largely invisible on the balance sheets of companies and largely unmeasured in our national economic accounts. This paper comments on the problems of using cost-based or transactions-based methods to establish value for a nation’s data in the system of national accounts and suggests that this should be complemented with the value of the economic rents attributable to data. This rent is part of enterprise value; accordingly, an indicator is required as an instrumental variable for the use of data for value creation within firms. The paper argues that traditional accounting looks through the firm to its tangible (and certain intangible) assets; that may no longer be feasible in measuring and understanding the data-driven economy…(More)”
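
To illustrate the rent-based complement the paper argues for, a residual decomposition of enterprise value can be written as below. This is a sketch of the argument, not the paper’s own formula; the symbols are placeholders.

```latex
% Illustrative residual decomposition (not the paper's formula):
% enterprise value V_E = measured tangible assets A_T + identified
% intangibles A_I + a residual rent R; the rent attributable to data,
% R_D, is some share \theta of that residual, which an instrumental
% indicator of data use within the firm would aim to pin down.
V_E = A_T + A_I + R, \qquad R_D = \theta R, \quad 0 \le \theta \le 1 .
```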

AI in public services will require empathy, accountability


Article by Yogesh Hirdaramani: “The Australian Government Department of the Prime Minister and Cabinet has released the first of its Long Term Insights Briefings, which focuses on how the Government can integrate artificial intelligence (AI) into public services while maintaining the trustworthiness of public service delivery.

Public servants need to remain accountable for and transparent about their use of AI, continue to demonstrate empathy for the people they serve, use AI to better meet people’s needs, and build AI literacy amongst the Australian public, the report stated.

The report also cited a forthcoming study that found that Australian residents with a deeper understanding of AI are more likely to trust the Government’s use of AI in service delivery. However, more than half of survey respondents reported having little knowledge of AI.

Key takeaways

The report aims to supplement current policy work on how AI can be best governed in the public service to realise its benefits while maintaining public trust.

In the longer term, the Australian Government aims to use AI to deliver personalised services to its citizens, deliver services more efficiently and conveniently, and achieve a higher standard of care for its ageing population.

AI can help public servants achieve these goals by automating processes, improving processing and response times, and providing AI-enabled interfaces that users can engage with, such as chatbots and virtual assistants.

However, AI can also lead to unfair or unintended outcomes due to bias in training data or hallucinations, the report noted.

According to the report, the trustworthy use of AI will require public servants to:

  1. Demonstrate integrity by remaining accountable for AI outcomes and transparent about AI use
  2. Demonstrate empathy by offering face-to-face services for those with greater vulnerabilities 
  3. Use AI in ways that improve service delivery for end-users
  4. Build internal skills and systems to implement AI, while educating the public on the impact of AI

The Australian Taxation Office currently uses AI to identify high-risk business activity statements and determine whether refunds can be issued or further review is required, the report noted. Taxpayers can appeal the decision if staff decide to deny refunds…(More)”

Enhancing the European Administrative Space (ComPAct)


European Commission: “Efficient national public administrations are critical to transform EU and national policies into reality, to implement reforms to the benefit of people and business alike, and to channel investments towards the achievement of the green and digital transition, and greater competitiveness. At the same time, national public administrations are also under increasing pressure to deal with polycrises and with many competing priorities.

For the first time, with the ComPAct, the Commission is proposing a strategic set of actions not only to support the public administrations in the Member States to become more resilient, innovative and skilled, but also to strengthen the administrative cooperation between them, thereby helping to close existing gaps in policies and services at European level.

With the ComPAct, the Commission aims to enhance the European Administrative Space by promoting a common set of overarching principles underpinning the quality of public administration and reinforcing its support for the administrative modernisation of the Member States. The ComPAct will help Member States address the EU Skills Agenda and the actions under the European Year of Skills, deliver on the targets of the Digital Decade to have 100% of key public services accessible online by 2030, and shape the conditions for the economies and societies to deliver on the ambitious 2030 climate and energy targets. The ComPAct will also help EU enlargement countries on their path to building better public administrations…(More)”.

Data Equity: Foundational Concepts for Generative AI


WEF Report: “This briefing paper focuses on data equity within foundation models, both in terms of the impact of Generative AI (genAI) on society and on the further development of genAI tools.

GenAI promises immense potential to drive digital and social innovation, such as improving efficiency, enhancing creativity and augmenting existing data. GenAI has the potential to democratize access and usage of technologies. However, left unchecked, it could deepen inequities. With the advent of genAI significantly increasing the rate at which AI is deployed and developed, exploring frameworks for data equity is more urgent than ever.

The goals of the briefing paper are threefold: to establish a shared vocabulary to facilitate collaboration and dialogue; to scope initial concerns to establish a framework for inquiry on which stakeholders can focus; and to shape future development of promising technologies.

The paper represents a first step in exploring and promoting data equity in the context of genAI. The proposed definitions, framework and recommendations are intended to proactively shape the development of promising genAI technologies…(More)”.

Artificial intelligence in government: Concepts, standards, and a unified framework


Paper by Vincent J. Straub, Deborah Morgan, Jonathan Bright, Helen Margetts: “Recent advances in artificial intelligence (AI), especially in generative language modelling, hold the promise of transforming government. Given the advanced capabilities of new AI systems, it is critical that these are embedded using standard operational procedures, clear epistemic criteria, and behave in alignment with the normative expectations of society. Scholars in multiple domains have subsequently begun to conceptualize the different forms that AI applications may take, highlighting both their potential benefits and pitfalls. However, the literature remains fragmented, with researchers in social science disciplines like public administration and political science, and the fast-moving fields of AI, ML, and robotics, all developing concepts in relative isolation. Although there are calls to formalize the emerging study of AI in government, a balanced account that captures the full depth of theoretical perspectives needed to understand the consequences of embedding AI into a public sector context is lacking. Here, we unify efforts across social and technical disciplines by first conducting an integrative literature review to identify and cluster 69 key terms that frequently co-occur in the multidisciplinary study of AI. We then build on the results of this bibliometric analysis to propose three new multifaceted concepts for understanding and analysing AI-based systems for government (AI-GOV) in a more unified way: (1) operational fitness, (2) epistemic alignment, and (3) normative divergence. Finally, we put these concepts to work by using them as dimensions in a conceptual typology of AI-GOV and connecting each with emerging AI technical measurement standards to encourage operationalization, foster cross-disciplinary dialogue, and stimulate debate among those aiming to rethink government with AI…(More)”.
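
As a purely illustrative sketch of the kind of term co-occurrence clustering the review describes (a handful of hypothetical articles and term sets, not the authors’ actual bibliometric pipeline or data):

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical input: the key terms each reviewed article uses.
articles = [
    {"machine learning", "robotics", "accountability"},
    {"machine learning", "public administration", "accountability"},
    {"public administration", "digital government", "accountability"},
    {"robotics", "digital government", "machine learning"},
]

terms = sorted(set().union(*articles))
idx = {t: i for i, t in enumerate(terms)}

# Count how often two terms appear in the same article.
cooc = np.zeros((len(terms), len(terms)))
for art in articles:
    for a, b in combinations(sorted(art), 2):
        cooc[idx[a], idx[b]] += 1
        cooc[idx[b], idx[a]] += 1

# Convert co-occurrence (similarity) into a distance and cluster hierarchically.
dist = 1.0 / (1.0 + cooc)
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(dict(zip(terms, labels)))
```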

A Feasibility Study of Differentially Private Summary Statistics and Regression Analyses with Evaluations on Administrative and Survey Data


Report by Andrés F. Barrientos, Aaron R. Williams, Joshua Snoke, Claire McKay Bowen: “Federal administrative data, such as tax data, are invaluable for research, but because of privacy concerns, access to these data is typically limited to select agencies and a few individuals. An alternative to sharing microlevel data is to allow individuals to query statistics without directly accessing the confidential data. This paper studies the feasibility of using differentially private (DP) methods to make certain queries while preserving privacy. We also include new methodological adaptations to existing DP regression methods for using new data types and returning standard error estimates. We define feasibility as the impact of DP methods on analyses for making public policy decisions and the queries’ accuracy according to several utility metrics. We evaluate the methods using Internal Revenue Service data and public-use Current Population Survey data and identify how specific data features might challenge some of these methods. Our findings show that DP methods are feasible for simple, univariate statistics but struggle to produce accurate regression estimates and confidence intervals. To the best of our knowledge, this is the first comprehensive statistical study of DP regression methodology on real, complex datasets, and the findings have significant implications for the direction of a growing research field and public policy…(More)”.
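
The basic pattern the report evaluates, releasing a noisy statistic instead of the microdata, can be illustrated with a minimal differentially private mean using the Laplace mechanism. This is a generic textbook construction, not the report’s methodology; the `dp_mean` helper, clipping bounds, and the assumption that the record count is public are all illustrative.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Release a differentially private mean via the Laplace mechanism.

    Values are clipped to [lower, upper] and the record count n is treated
    as public, so replacing one record changes the mean by at most
    (upper - lower) / n; Laplace noise scaled to that sensitivity gives
    epsilon-DP for this single query.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / clipped.size
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

# Synthetic "income" data: smaller epsilon => more noise in the released mean.
rng = np.random.default_rng(0)
incomes = rng.gamma(shape=2.0, scale=30_000, size=10_000)
for eps in (0.1, 1.0):
    print(eps, round(dp_mean(incomes, lower=0, upper=200_000, epsilon=eps, rng=rng), 2))
```

As the report’s findings suggest, single univariate queries like this are comparatively easy to make accurate under DP, whereas regression estimates and their standard errors require considerably more care.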

Governing Urban Data for the Public Interest


Report by The New Hanse: “…This report represents the culmination of our efforts and offers actionable guidelines for European cities seeking to harness the power of data for the public good.

The key recommendations outlined in the report are:

1. Shift the Paradigm towards Democratic Control of Data: Advocate for a policy that defaults to making urban data accessible, requiring private data holders to share in the public interest.

2. Provide Legal Clarity in a Dynamic Environment: Address legal uncertainties by balancing privacy and confidentiality needs with the public interest in data accessibility, working collaboratively with relevant authorities at national and EU level.

3. Build a Data Commons Repository of Use Cases: Streamline data sharing efforts by establishing a standardised use case repository with common technical frameworks, procedures, and contracts.

4. Set up an Urban Data Intermediary for the Public Interest: Institutionalise data sharing by building urban data intermediaries to address complexities, following principles of public purpose, transparency, and accountability.

5. Learn from the Hamburg Experiment and Scale It across Europe: Embrace experimentation as a vital step, even if outcomes are uncertain, to adapt processes for future innovations. Experiments at the local level can inform policy and scale nationally and across Europe…(More)”.