AI ethics: the case for including animals


Paper by Peter Singer & Yip Fai Tse: “The ethics of artificial intelligence, or AI ethics, is a rapidly growing field, and rightly so. While the range of issues and groups of stakeholders concerned by the field of AI ethics is expanding, with speculation about whether it extends even to the machines themselves, there is a group of sentient beings who are also affected by AI, but are rarely mentioned within the field of AI ethics—the nonhuman animals. This paper seeks to explore the kinds of impact AI has on nonhuman animals, the severity of these impacts, and their moral implications. We hope that this paper will facilitate the development of a new field of philosophical and technical research regarding the impacts of AI on animals, namely, the ethics of AI as it affects nonhuman animals…(More)”.

Towards Human-Centric Algorithmic Governance


Blog by Zeynep Engin: “It is no longer news to say that the capabilities afforded by Data Science, AI and their associated technologies (such as Digital Twins, Smart Cities, Ledger Systems and other platforms) are poised to revolutionise governance, radically transforming the way democratic processes work, citizen services are provided, and justice is delivered. Emerging applications range from the way election campaigns are run and how crises at population level are managed (e.g. pandemics) to everyday operations like simple parking enforcement and traffic management, and to decisions at critical individual junctures, such as hiring or sentencing decisions. What it means to be a ‘human’ is also a hot topic for both scholarly and everyday discussions, since our societal interactions and values are also shifting fast in an increasingly digital and data-driven world.

As a millennial who grew up in a ‘developing’ economy in the ’90s and later established a cross-sector career in a ‘developed’ economy in the fields of data for policy and algorithmic governance, I believe I can credibly claim pertinent, hands-on experience of the transformation from a fully analogue world into a largely digital one. I started off trying hard to find sufficient printed information to refer to in my term papers at secondary school, and have gradually adapted to the opposite challenge: extracting useful information from the practically unlimited resources available online today. The world has become a lot more connected: communities are formed online, goods and services are customised to individual tastes and preferences, and work and education are increasingly hybrid, reducing dependency on physical environment, geography and time zones. Despite all these developments in nearly every aspect of our lives, one thing that has persisted in the face of this change is the nature of collective decision-making, particularly at the civic/governmental level. It still comprises the same election cycles with more or less similar political incentives and working practices, and the same type of politicians, bureaucracies, hierarchies and networks making and executing important (and often suboptimal) decisions on behalf of the public. Unelected private sector stakeholders, in the meantime, are quick to fill the growing gap — they increasingly make policies that affect large populations and define the public discourse, primarily to maximise their profit behind their IP protection walls…(More)”.

The UK Algorithmic Transparency Standard: A Qualitative Analysis of Police Perspectives


Paper by Marion Oswald, Luke Chambers, Ellen P. Goodman, Pam Ugwudike, and Miri Zilka: “1. The UK Government’s draft ‘Algorithmic Transparency Standard’ is intended to provide a standardised way for public bodies and government departments to provide information about how algorithmic tools are being used to support decisions. The research discussed in this report was conducted in parallel to the piloting of the Standard by the Cabinet Office and the Centre for Data Ethics and Innovation.
2. We conducted semi-structured interviews with respondents from across UK policing and commercial bodies involved in policing technologies. Our aim was to explore the implications for police forces of participation in the Standard; to identify rewards, risks, and challenges for the police, as well as areas where the Standard could be improved; and thereby to contribute to the exploration of policy options for expanding participation in the Standard.
3. Algorithmic transparency is both achievable for policing and could bring significant rewards. A key reward of police participation in the Standard is that it provides the opportunity to demonstrate proficient implementation of technology-driven policing, thus enhancing earned trust. Research participants highlighted the public good that could result from the considered use of algorithms.
4. Participants noted, however, a risk of misperception of the dangers of policing technology, especially if use of algorithmic tools was not appropriately compared to the status quo and current methods…(More)”.

Artificial Intelligence and Democracy


Open Access Book by Jérôme Duberry on “Risks and Promises of AI-Mediated Citizen–Government Relations… What role does artificial intelligence (AI) play in citizen–government relations? Who is using this technology and for what purpose? How does the use of AI influence power relations in policy-making, and the trust of citizens in democratic institutions? These questions led to the writing of this book. While the early developments of e-democracy and e-participation can be traced back to the end of the 20th century, the growing adoption of smartphones and mobile applications by citizens, and the increased capacity of public administrations to analyze big data, have enabled the emergence of new approaches. Online voting, online opinion polls, online town hall meetings, and online discussion lists of the 1990s and early 2000s have evolved into new generations of policy-making tactics and tools, enabled by the most recent developments in information and communication technologies (ICTs) (Janssen & Helbig, 2018). Online platforms, advanced simulation websites, and serious gaming tools are progressively used on a larger scale to engage citizens, collect their opinions, and involve them in policy processes…(More)”.

First regulatory sandbox on Artificial Intelligence presented


European Commission: “The sandbox aims to bring competent authorities close to companies that develop AI in order to define best practices that will guide the implementation of the European Commission’s future AI Regulation (Artificial Intelligence Act). This would also ensure that the legislation can be implemented within two years.

The regulatory sandbox is a way to connect innovators and regulators and to provide a controlled environment for them to cooperate. Such a collaboration between regulators and innovators should facilitate the development, testing and validation of innovative AI systems with a view to ensuring compliance with the requirements of the AI Regulation.

While the entire ecosystem is preparing for the AI Act, this sandbox initiative is expected to generate easy-to-follow, future-proof best practice guidelines and other supporting materials. Such outputs are expected to facilitate the implementation of rules by companies, in particular SMEs and start-ups. 

This sandbox pilot initiated by the Spanish government will look at operationalising the requirements of the future AI regulation as well as other features such as conformity assessments or post-market activities.

Thanks to this pilot experience, the obligations on AI system providers (the participants in the sandbox), and how to implement them, will be documented and systematised in implementation guidelines covering good practice and lessons learnt. The deliverables will also include monitoring and follow-up methods useful to the national authorities in charge of implementing the supervisory mechanisms that the regulation establishes.

In order to strengthen the cooperation of all possible actors at the European level, this exercise will remain open to other Member States that will be able to follow or join the pilot in what could potentially become a pan-European AI regulatory sandbox. Cooperation at EU level with other Member States will be pursued within the framework of the Expert Group on AI and Digitalisation of Businesses set up by the Commission.

The financing of this sandbox is drawn from the Recovery and Resilience Funds assigned to the Spanish Government through the Spanish Recovery, Transformation and Resilience Plan, and in particular through the Spanish National AI Strategy (Component 16 of the Plan). The overall budget for the pilot will be approximately EUR 4.3 million over roughly three years…(More)”.

The Model Is The Message


Essay by Benjamin Bratton and Blaise Agüera y Arcas: “An odd controversy appeared in the news cycle last month when a Google engineer, Blake Lemoine, was placed on leave after publicly releasing transcripts of conversations with LaMDA, a chatbot based on a Large Language Model (LLM) that he claims is conscious, sentient and a person.

Like most other observers, we do not conclude that LaMDA is conscious in the ways that Lemoine believes it to be. His inference is clearly based on motivated anthropomorphic projection. At the same time, it is also possible that these kinds of artificial intelligence (AI) are “intelligent” — and even “conscious” in some way — depending on how those terms are defined.

Still, neither of these terms can be very useful if they are defined in strongly anthropocentric ways. An AI may also be one and not the other, and it may be useful to distinguish sentience from both intelligence and consciousness. For example, an AI may be genuinely intelligent in some way but only sentient in the restrictive sense of sensing and acting deliberately on external information. Perhaps the real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand. A more precise vocabulary is essential.

AI and the philosophy of AI have deeply intertwined histories, each bending the other in uneven ways. Just like core AI research, the philosophy of AI goes through phases. Sometimes it is content to apply philosophy (“what would Kant say about driverless cars?”) and sometimes it is energized to invent new concepts and terms to make sense of technologies before, during and after their emergence. Today, we need more of the latter.

We need more specific and creative language that can cut the knots around terms like “sentience,” “ethics,” “intelligence,” and even “artificial,” in order to name and measure what is already here and orient what is to come. Without this, confusion ensues — for example, the cultural split between those who are eager to speculate on the sentience of rocks and rivers yet dismiss AI as corporate PR, and those who think their chatbots are persons because all possible intelligence is humanlike in form and appearance. This is a poor substitute for viable, creative foresight. The curious case of synthetic language — language intelligently produced or interpreted by machines — is exemplary of what is wrong with present approaches, but also demonstrative of what alternatives are possible…(More)”.

Artificial Intelligence in the City: Building Civic Engagement and Public Trust


Collection of essays edited by Ana Brandusescu and Jess Reia: “After navigating various challenging policy and regulatory contexts over the years, in different regions, we joined efforts to create a space that offers possibilities for engagement focused on the expertise, experiences, and hopes that will shape the future of technology in urban areas. The AI in the City project emerged as an opportunity to connect people, organizations, and resources in the networks we built over the last decade of work on research and advocacy in tech policy. Sharing non-Western and Western perspectives from five continents, the contributors questioned, challenged, and envisioned ways public trust and meaningful civic engagement can flourish and persist as data and AI become increasingly pervasive in our lives. This collection of essays brings together a group of multidisciplinary scholars, activists, and practitioners working on a diverse range of initiatives to map strategies going forward. Divided into five parts, the collection brings into focus: 1) Meaningful engagement and public participation; 2) Addressing inequalities and building trust; 3) Public and private boundaries in tech policy; 4) Legal perspectives and mechanisms for accountability; and 5) New directions for local and urban governance. The focus on civil society and academia was deliberate: a way to listen to and learn with people who have dedicated many years to public interest advocacy, governance and policy that represents the interests of their communities…(More)”.

Your Boss Is an Algorithm: Artificial Intelligence, Platform Work and Labour


Book by Antonio Aloisi and Valerio De Stefano: “What effect do robots, algorithms, and online platforms have on the world of work? Using case studies and examples from across the EU, the UK, and the US, this book provides a compass to navigate this technological transformation as well as the regulatory options available, and proposes a new map for the era of radical digital advancements.

From platform work to the gig-economy and the impact of artificial intelligence, algorithmic management, and digital surveillance on workplaces, technology has overwhelming consequences for everyone’s lives, reshaping the labour market and straining social institutions. Contrary to preliminary analyses forecasting the threat of human work obsolescence, the book demonstrates that digital tools are more likely to replace managerial roles and intensify organisational processes in workplaces, rather than opening the way for mass job displacement.

Can flexibility and protection be reconciled so that legal frameworks uphold innovation? How can we address the pervasive power of AI-enabled monitoring? How likely is it that the gig-economy model will emerge as a new organisational paradigm across sectors? And what can social partners and political players do to adopt effective regulation?

Technology is never neutral. It can and must be governed, to ensure that progress favours the many. Digital transformation can be an essential ally, from the warehouse to the office, but it must be tested in terms of social and political sustainability, not only through the lens of economic convenience. Your Boss Is an Algorithm offers a guide to explore these new scenarios, their promises, and perils…(More)”.

Human-centred mechanism design with Democratic AI


Paper by Raphael Koster et al: “Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation…(More)”.
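
To make the setup concrete, here is a minimal Python sketch of one round of such an investment game, comparing a flat egalitarian split with a rule that redistributes in proportion to the fraction of endowment each player contributed — the flavour of mechanism the paper reports the AI converging on. The endowment figures, multiplier, and function names are illustrative assumptions, not the paper’s exact specification.

```python
# Illustrative sketch of the investment-game setting described above.
# All numbers and mechanism details are simplifying assumptions for
# exposition, not the exact specification used in the paper.

def grow_pot(contributions, multiplier=1.6):
    """Contributions are pooled and grow by `multiplier` before redistribution."""
    return sum(contributions) * multiplier

def egalitarian(pot, endowments, contributions):
    """Split the grown pot equally, regardless of contribution."""
    n = len(contributions)
    return [pot / n] * n

def relative_contribution(pot, endowments, contributions):
    """Redistribute in proportion to the *fraction* of endowment contributed,
    which redresses initial wealth imbalance and leaves free riders with
    little or nothing."""
    shares = [c / e if e > 0 else 0.0 for c, e in zip(contributions, endowments)]
    total = sum(shares)
    return [pot * s / total if total else 0.0 for s in shares]

endowments = [10.0, 2.0, 10.0]   # unequal starting wealth
contributions = [5.0, 2.0, 0.0]  # player 3 free rides
pot = grow_pot(contributions)
print(egalitarian(pot, endowments, contributions))
# -> equal shares, so the free rider profits
print(relative_contribution(pot, endowments, contributions))
# -> the poorer player who contributed everything earns the largest share;
#    the free rider receives nothing
```

Keying payouts to relative rather than absolute contribution is what lets a mechanism of this kind both redress initial wealth imbalance and sanction free riding — plausibly why a majority of players preferred it.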

Crime Prediction Keeps Society Stuck in the Past


Article by Chris Gilliard: “…All of these policing systems operate on the assumption that the past determines the future. In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation “restrict the future to the past.” In other words, these systems prevent the future in order to “predict” it—they ensure that the future will be just the same as the past was.

“If the captured and curated past is racist and sexist,” Chun writes, “these algorithms and models will only be verified as correct if they make sexist and racist predictions.” This is partly a description of the familiar garbage-in/garbage-out problem with all data analytics, but it’s something more: Ironically, the putatively “unbiased” technology sold to us by promoters is said to “work” precisely when it tells us that what is contingent in history is in fact inevitable and immutable. Rather than helping us to manage social problems like racism as we move forward, as the McDaniel case shows in microcosm, these systems demand that society not change, that things that we should try to fix instead must stay exactly as they are.
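
Chun’s claim that these systems “prevent the future in order to predict it” can be made concrete with a toy feedback loop. In the sketch below, two districts have identical true incident rates, but one starts with more recorded incidents; a naive predictor then allocates attention wherever the record is largest. Every number here is invented for illustration — this models the logic of the argument, not any real deployed system.

```python
# Toy feedback-loop simulation: biased records steer patrols, and patrols
# generate the records that "confirm" the bias. Illustrative only.
import random

random.seed(0)

true_rate = {"A": 0.3, "B": 0.3}  # both districts are actually identical
recorded = {"A": 20, "B": 5}      # but the historical record is skewed

for year in range(10):
    # "Predict" by patrolling wherever past records are highest.
    patrolled = max(recorded, key=recorded.get)
    for district, rate in true_rate.items():
        # Incidents are only recorded where the system is looking.
        detection = 0.9 if district == patrolled else 0.2
        recorded[district] += sum(
            random.random() < rate * detection for _ in range(100)
        )

print(recorded)  # district A's lead compounds year after year
```

Because incidents enter the data only where the system is already looking, the skewed district’s lead grows each round, and the model’s “predictions” are verified by the very data-collection pattern they cause — the restriction of the future to the past in miniature.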

It’s a rather glaring observation that predictive policing tools are rarely if ever (with the possible exception of the parody “White Collar Crime Risk Zones” project) focused on wage theft or various white collar crimes, even though the dollar value of those offenses outstrips that of property crimes by several orders of magnitude. This gap exists because of how crime exists in the popular imagination. For instance, news reports in recent weeks bludgeoned readers with reports of a so-called “crime wave” of shoplifting at high-end stores. Yet just this past February, Amazon agreed to pay regulators a whopping $61.7 million, the amount the FTC says the company shorted drivers over a two-and-a-half-year period. That story received a fraction of the coverage, and aside from the fine, there will be no additional charges.

The algorithmic crystal ball that promises to predict and forestall future crimes works from a fixed notion of what a criminal is, where crimes occur, and how they are prosecuted (if at all). Those parameters depend entirely on the power structure empowered to formulate them—and very often the explicit goal of those structures is to maintain existing racial and wealth hierarchies. This is the same set of carceral logics that allow the placement of children into gang databases, or the development of a computational tool to forecast which children will become criminals. The process of predicting the lives of children is about cementing existing realities rather than changing them. Entering children into a carceral ranking system is in itself an act of violence, but as in the case of McDaniel, it also nearly guarantees that the system that sees them as potential criminals will continue to enact violence on them throughout their lifetimes…(More)”.