What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, pet care and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals, emitting sounds to interact with them, giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies or catching and separating them.

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.
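
As a purely illustrative sketch of the predict-then-intervene loop Singer and Tse describe, the toy Python example below fits a model to synthetic sensor readings (body temperature, weight, daily feed intake) and maps the predicted disease risk to an automated action. All feature names, numbers and thresholds here are hypothetical assumptions, not details of any real farm system.

```python
# Illustrative sketch only: a toy version of the predict-then-intervene loop
# described above, using synthetic data. Feature names, thresholds and the
# "interventions" are hypothetical, not taken from any real farm system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic per-animal sensor readings: temperature (C), weight (kg), daily feed (kg)
temp = rng.normal(39.0, 0.6, n)
weight = rng.normal(2.2, 0.4, n)
feed = rng.normal(0.12, 0.03, n)

# Synthetic "disease" label, loosely tied to fever and low feed intake
risk = 1 / (1 + np.exp(-(4 * (temp - 39.5) - 30 * (feed - 0.10))))
sick = rng.random(n) < risk

X = np.column_stack([temp, weight, feed])
X_train, X_test, y_train, y_test = train_test_split(X, sick, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

def recommend_action(features, threshold=0.5):
    """Map a predicted disease probability to an automated intervention."""
    p = model.predict_proba(features.reshape(1, -1))[0, 1]
    if p > threshold:
        return f"flag for treatment / adjust feed (p={p:.2f})"
    return f"no action (p={p:.2f})"

print("held-out accuracy:", model.score(X_test, y_test))
print(recommend_action(np.array([40.3, 2.0, 0.07])))  # feverish animal eating little
```

The same structure scales to the profitability-driven objectives mentioned above: whatever outcome the model is trained to optimise determines which interventions the system recommends.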

The Future of Consent: The Coming Revolution in Privacy and Consumer Trust


Report by Ogilvy: “The future of consent will be determined by how we – as individuals, nations, and a global species – evolve our understanding of what counts as meaningful consent. For consumers and users, the greatest challenge lies in connecting consent to a mechanism of relevant, personal control over their data. For businesses and other organizations, the task will be to recast consent as a driver of positive economic outcomes, rather than an obstacle.

In the coming years of digital privacy innovation, regulation, and increasing market maturity, everyone will need to think more deeply about their relationship with consent. As an initial step, we’ve assembled this snapshot on the current and future state of (meaningful) consent: what it means, what the obstacles are, and which critical changes we need to embrace to evolve…(More)”.

Workforce ecosystems and AI


Report by David Kiron, Elizabeth J. Altman, and Christoph Riedl: “Companies increasingly rely on an extended workforce (e.g., contractors, gig workers, professional service firms, complementor organizations, and technologies such as algorithmic management and artificial intelligence) to achieve strategic goals and objectives. When we ask leaders to describe how they define their workforce today, they mention a diverse array of participants, beyond just full- and part-time employees, all contributing in various ways. Many of these leaders observe that their extended workforce now comprises 30-50% of their entire workforce. For example, Novartis has approximately 100,000 employees and counts more than 50,000 other workers as external contributors. Businesses are also increasingly using crowdsourcing platforms to engage external participants in the development of products and services. Managers are thinking about their workforce in terms of who contributes to outcomes, not just workers’ employment arrangements.

Our ongoing research on workforce ecosystems demonstrates that managing work across organizational boundaries with groups of interdependent actors in a variety of employment relationships creates new opportunities and risks for both workers and businesses. These are not subtle shifts. We define a workforce ecosystem as:

A structure that encompasses actors, from within the organization and beyond, working to create value for an organization. Within the ecosystem, actors work toward individual and collective goals with interdependencies and complementarities among the participants.

The emergence of workforce ecosystems has implications for management theory, organizational behavior, social welfare, and policymakers. In particular, issues surrounding work and worker flexibility, equity, and data governance and transparency present substantial opportunities for policymaking.

At the same time, artificial intelligence (AI)—which we define broadly to include machine learning and algorithmic management—is playing an increasingly large role within the corporate context. The widespread use of AI is already displacing workers through automation, augmenting human performance at work, and creating new job categories…(More)”.

The Surveillance Ad Model Is Toxic — Let’s Not Install Something Worse


Article by Elizabeth M. Renieris: “At this stage, law and policy makers, civil society and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as the application of existing laws to challenge ad-targeting practices.

In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with the introduction of its App Tracking Transparency (ATT) tool, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. The ATT effectively prevents apps from collecting a user’s Identifier for Advertisers, or IDFA, which is a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.

But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal…(More)”.

Data Maturity Assessment for Government


UK Government: “The Data Maturity Assessment (DMA) for Government is a robust and comprehensive framework, designed by the public sector for the public sector. The DMA represents a big step forward in our shared ambition to establish and strengthen the data foundations in government by enabling a granular view of the current status of our data environments.

The systematic and detailed picture that the DMA results provide can be used to deliver value in the data function and across the enterprise. Maturity results, and the progression behaviours/features outlined in the DMA, will be essential to reviewing and setting data strategy. DMA outputs provide a way to communicate and evidence how the data ecosystem is critical to the business. When considered in the context of organisational priorities and responsibilities, DMA outputs can assist in:

  • identifying and mitigating strategic risk arising from low data maturity, and where higher maturity needs to be maintained
  • targeting and prioritising investment in the most important data initiatives
  • assuring the data environment for new services and programmes…(More)”.

Whose data commons? Whose city?


Blog by Gijs van Maanen and Anna Artyushina: “In 2020, the notion of data commons became a staple of the new European Data Governance Strategy, which envisions data cooperatives as key players of the European Union’s (EU) emerging digital market. In this new legal landscape, public institutions, businesses, and citizens are expected to share their data with the licensed data-governance entities that will oversee its responsible reuse. In 2022, the Open Future Foundation released several white papers where the NGO (non-governmental organisation) detailed a vision for the publicly governed and funded EU-level data commons. Some academic researchers see data commons as a way to break the data silos maintained and exploited by Big Tech and, potentially, dismantle surveillance capitalism.

In this blog post, we discuss data commons as a concept and practice. Our argument here is that, for data commons to become a (partial) solution to the issues caused by data monopolies, they need to be politicised. As smart city scholar Shannon Mattern pointedly argues, the city is not a computer. This means that the digitization and datafication of our cities involve making choices about what is worth digitising and whose interests are prioritised. These choices and their implications must be foregrounded when we discuss data commons or any emerging forms of data governance. It is important to ask whose data is made common and, subsequently, whose city we will end up living in…(More)”.

You Can’t Regulate What You Don’t Understand


Article by Tim O’Reilly: “The world changed on November 30, 2022 as surely as it did on August 12, 1908 when the first Model T left the Ford assembly line. That was the date when OpenAI released ChatGPT, the day that AI emerged from research labs into an unsuspecting world. Within two months, ChatGPT had over a hundred million users—faster adoption than any technology in history.

The hand wringing soon began…

All of these efforts reflect the general consensus that regulations should address issues like data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI’s own AI safety and responsibility guidelines cite those same goals, but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values? They write:

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

But whose human values? Those of the benevolent idealists that most AI critics aspire to be? Those of a public company bound to put shareholder value ahead of customers, suppliers, and society as a whole? Those of criminals or rogue states bent on causing harm to others? Those of someone well meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved. That is a crucial first step, and we should take it immediately. These systems are still very much under human control. For now, at least, they do what they are told, and when the results don’t match expectations, their training is quickly improved. What we need to know is what they are being told.

What should be disclosed? There is an important lesson for both companies and regulators in the rules by which corporations—which science-fiction writer Charlie Stross has memorably called “slow AIs”—are regulated. One way we hold companies accountable is by requiring them to share their financial results compliant with Generally Accepted Accounting Principles or the International Financial Reporting Standards. If every company had a different way of reporting its finances, it would be impossible to regulate them…(More)”.

Future-proofing the city: A human rights-based approach to governing algorithmic, biometric and smart city technologies


Introduction to Special Issue by Alina Wernick and Anna Artyushina: “While the GDPR and other EU laws seek to mitigate a range of potential harms associated with smart cities, compliance with and enforceability of these regulations remain an issue. In addition, these proposed regulations do not sufficiently address the collective harms associated with the deployment of biometric technologies and artificial intelligence. Another relevant question is whether the initiatives put forward to secure fundamental human rights in the digital realm account for the issues brought on by the deployment of technologies in city spaces. In this special issue, we employ the smart city notion as a point of connection for interdisciplinary research on the human rights implications of algorithmic, biometric and smart city technologies and the policy responses to them. The articles included in the special issue analyse the latest European regulations as well as soft law, and the policy frameworks that are currently at work in the regions where the GDPR does not apply…(More)”.

Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models


Paper by Shaolei Ren, Pengfei Li, Jianyi Yang, and Mohammad A. Islam: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, however, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles) and the water consumption would have been tripled if training were done in Microsoft’s Asian data centers, but such information has been kept as a secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges shared by all of us in the wake of the rapidly growing population, depleting water resources, and aging water infrastructures. To respond to the global water challenges, AI models can, and also should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
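
For a sense of how such an estimate can be assembled, the sketch below combines on-site cooling water (server energy multiplied by Water Usage Effectiveness, WUE) with the off-site water embedded in electricity generation (facility energy multiplied by the water intensity of the grid). This is a simplified, back-of-the-envelope reading of the paper’s general approach; the numeric inputs in the example are illustrative assumptions, not figures reported by the authors.

```python
# Back-of-the-envelope sketch of an operational water-footprint estimate for a
# training run. Every numeric value below is an illustrative assumption, not a
# measurement reported in the paper.

def training_water_footprint_liters(server_energy_kwh, wue_l_per_kwh, pue, grid_water_l_per_kwh):
    """Combine on-site cooling water with off-site water embedded in electricity."""
    onsite = server_energy_kwh * wue_l_per_kwh                  # water evaporated by on-site cooling
    offsite = server_energy_kwh * pue * grid_water_l_per_kwh    # water consumed generating the electricity
    return {"onsite_l": onsite, "offsite_l": offsite, "total_l": onsite + offsite}

# Hypothetical inputs: 1,300,000 kWh of server energy, WUE of 0.5 L/kWh,
# PUE of 1.1, and a grid water intensity of 3 L/kWh.
estimate = training_water_footprint_liters(1_300_000, 0.5, 1.1, 3.0)
print({name: f"{liters:,.0f} L" for name, liters in estimate.items()})
```

On these assumed inputs, the direct on-site share alone is on the order of hundreds of thousands of liters, which is the scale of the GPT-3 figure quoted in the abstract; shifting the same run to a data center with a higher WUE or a more water-intensive grid changes the result accordingly.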

Slow-governance in smart cities: An empirical study of smart intersection implementation in four US college towns


Paper by Madelyn Rose Sanfilippo and Brett Frischmann: “Cities cannot adopt supposedly smart technological systems and protect human rights without developing appropriate data governance, because technologies are not value-neutral. This paper proposes a deliberative, slow-governance approach to smart tech in cities. Inspired by the Governing Knowledge Commons (GKC) framework and past case studies, we empirically analyse the adoption of smart intersection technologies in four US college towns to evaluate and extend knowledge commons governance approaches to address human rights concerns. Our proposal consists of a set of questions that should guide community decision-making, extending the GKC framework via an incorporation of human-rights impact assessments and a consideration of capabilities approaches to human rights. We argue that such a deliberative, slow-governance approach enables adaptation to local norms and more appropriate community governance of smart tech in cities. By asking and answering key questions throughout smart city planning, procurement, implementation and management processes, cities can respect human rights, interests and expectations…(More)”.