How to worry wisely about AI


The Economist: “Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart…and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.

In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup—have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji…(More)”.

Speaking in Tongues — Teaching Local Languages to Machines


Report by DIAL: “…Machines learn to talk to people by digesting digital content in the languages people speak, through a technique called Natural Language Processing (NLP). As things stand, only about 85 of the world’s approximately 7,500 languages are represented in the major NLP models — and just 7 languages, with English the most advanced, make up the majority of the world’s digital knowledge corpus. Fortunately, many initiatives are underway to fill this knowledge gap. My new mini-report with the Digital Impact Alliance (DIAL) highlights a few of them from Serbia, India, Estonia, and Africa.

The examples in the report are just a subset of initiatives on the ground to make digital services accessible to people in their local languages. They are a cause for excitement and hope (tempered by realistic expectations). A few themes cut across the initiatives:

  • Despite the excitement and enthusiasm, most of the programs above are still at a very nascent stage — many may fail, and others will require investment and time to succeed. While countries such as India have initiated formal national NLP programs (still too early to assess), others, such as Serbia, have so far taken a more ad hoc approach.
  • Smaller countries like Estonia recognize the need for state intervention as the local population isn’t large enough to attract private sector investment. Countries will need to balance their local, cultural, and political interests against commercial realities as languages become digital or are digitally excluded.
  • Community engagement is an important component of almost all initiatives. India has set up a formal crowdsourcing program; other programs in Africa are experimenting with elements of participatory design and crowd curation.
  • Critics have accused ChatGPT’s makers and others of paying contributors from the Global South very poorly for labeling and other content services. Even so, many initiatives in the South are beginning to experiment with payment models to incentivize crowdsourcing and sustain contributions from the ground.
  • The engagement of local populations can help ensure that NLP models learn appropriate cultural nuances and better embody local social and ethical norms…(More)”.
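
To make the coverage gap concrete, consider how thin the entry point is even for a language that does have support. The sketch below is a rough illustration, assuming the Hugging Face transformers library and one of the publicly available OPUS-MT checkpoints (here English to Hindi): a supported language pair can be invoked in a few lines, while for most of the world’s languages no comparable checkpoint exists to call at all.

```python
# A minimal sketch of invoking a pretrained translation model for one of the
# roughly 85 languages that do have NLP coverage. Assumes the Hugging Face
# "transformers" library and the OPUS-MT English-to-Hindi checkpoint; for
# most of the world's ~7,500 languages, no such checkpoint exists.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")
result = translator("Where is the nearest health clinic?")
print(result[0]["translation_text"])
```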

AI translation is jeopardizing Afghan asylum claims


Article by Andrew Deck: “In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.

A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.

In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”

Mirkhail works with Respond Crisis Translation, a coalition of over 2,500 translators that provides interpretation and translation services for migrants and asylum seekers around the world. She told Rest of World this kind of small mistake can be life-changing for a refugee. In the wake of the Taliban’s return to power in Afghanistan, there is an urgent demand for crisis translators working in languages such as Pashto and Dari. Working alongside refugees, these translators can help clients navigate complex immigration systems, including drafting immigration forms such as asylum applications. But a new generation of machine translation tools is changing the landscape of this field — and adding a new set of risks for refugees…(More)”.
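
Errors like the pronoun swap Mirkhail caught are exactly what lightweight consistency checks aim to surface before documents are filed. The following is a rough, hypothetical sketch (not the workflow of Respond Crisis Translation or any tool named above) of a check that flags a shift in grammatical person between an original English statement and a round-trip translation of it:

```python
# An illustrative heuristic for catching the kind of pronoun swap described
# above (an "I" statement becoming "we"). The translations here are
# simulated; a real pipeline would produce round_trip via machine
# translation into the target language and back.
import re

def pronoun_profile(text: str) -> dict:
    """Count first-person singular vs. plural pronouns in an English text."""
    words = re.findall(r"[a-z']+", text.lower())
    singular = sum(w in {"i", "me", "my", "mine"} for w in words)
    plural = sum(w in {"we", "us", "our", "ours"} for w in words)
    return {"singular": singular, "plural": plural}

def flag_pronoun_drift(original_en: str, round_trip_en: str) -> bool:
    """Return True if the round trip shifted grammatical person."""
    before = pronoun_profile(original_en)
    after = pronoun_profile(round_trip_en)
    return (before["singular"] > 0 and after["plural"] > before["plural"]) or \
           (before["plural"] > 0 and after["singular"] > before["singular"])

# Simulated example: the machine translation turned "I" into "we".
original = "I escaped the checkpoint alone and walked through the night."
round_trip = "We escaped the checkpoint and walked through the night."
print(flag_pronoun_drift(original, round_trip))  # True: route to human review
```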

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI chatbot behind Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.
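
To see why a “near limitless supply of original text with limited human effort” is not hyperbole, note that even a small, dated open model can produce many distinct continuations from a single prompt. A minimal sketch, assuming the Hugging Face transformers library and the small open GPT-2 model (far weaker than the systems the essay discusses, but the economics of scale are the point):

```python
# A minimal sketch of how little human effort text generation requires:
# one prompt yields several distinct variants from the small open GPT-2
# model. Output quality is well below current LLMs; the marginal cost per
# additional variant is what matters.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
outputs = generator(
    "Local residents say the new policy",
    max_new_tokens=40,
    num_return_sequences=5,  # five distinct continuations from one prompt
    do_sample=True,
)
for out in outputs:
    print(out["generated_text"], "\n---")
```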

Harnessing Data Innovation for Migration Policy: A Handbook for Practitioners


Report by IOM: “The Practitioners’ Handbook provides first-hand insights into why and how non-traditional data sources can contribute to better understanding migration-related phenomena. The Handbook aims to (a) bridge the practical and technical aspects of using data innovations in migration statistics, (b) demonstrate the added value of using new data sources and innovative methodologies to analyse key migration topics that may be hard to fully grasp using traditional data sources, and (c) identify good practices in addressing issues of data access and collaboration with multiple stakeholders (including the private sector), ethical standards, and security and data protection issues…(More)” See also Big Data for Migration Alliance.

The Myth of Objective Data


Article by Melanie Feinberg: “The notion that human judgment pollutes scientific attempts to understand natural phenomena as they really are may seem like a stable and uncontroversial value. However, as Lorraine Daston and Peter Galison have established, objectivity is a fairly recent historical development.

In Daston and Galison’s account, which focuses on scientific visualization, objectivity arose in the 19th century, congruent with the development of photography. Before photography, scientific illustration attempted to portray an ideal exemplar rather than an actually existing specimen. In other words, instead of drawing a realistic portrait of an individual fruit fly — which has unique, idiosyncratic characteristics — an 18th-century scientific illustrator drew an ideal fruit fly. This ideal representation would better portray average fruit fly characteristics, even as no actual fruit fly is ever perfectly average.

With the advent of photography, drawings of ideal types began to lose favor. The machinic eye of the lens was seen as enabling nature to speak for itself, providing access to a truer, more objective reality than the human eye of the illustrator. Daston and Galison emphasize, however, that this initial confidence in the pure eye of the machine was swiftly undermined. Scientists soon realized that photographic devices introduce their own distortions into the images that they produce, and that no eye provides an unmediated view onto nature. From the perspective of scientific visualization, the idea that machines allow us to see true has long been outmoded. In everyday discourse, however, there is a continuing tendency to characterize the objective as that which speaks for itself without the interference of human perception, interpretation, judgment, and so on.

This everyday definition of objectivity particularly affects our understanding of data collection. If, in our daily lives, we tend to overlook the diverse, situationally textured sense-making work that information seekers, conversation listeners, and other recipients of communicative acts perform to make automated information systems function, we are even less likely to acknowledge and value the interpretive work of data collectors, even as these actions create the conditions of possibility upon which data analysis can operate…(More)”.

What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, and pet care, and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates, and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals: emitting sounds to interact with them, giving them electric shocks (when a grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies, or catching and separating them.

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.
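
The monitoring-to-intervention loop described above is, at its core, ordinary supervised learning over sensor readings. Below is a stylized sketch, using synthetic data and assumed features (body temperature, weight, daily growth), of how such a model might relate physical parameters to disease risk and flag animals for treatment; real systems use far richer sensor streams.

```python
# A stylized sketch of the kind of model the article describes: relate
# physical parameters to disease risk, then flag animals for intervention.
# All data here is synthetic and the feature set is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
temp = rng.normal(39.0, 0.6, n)     # body temperature (deg C)
weight = rng.normal(80.0, 10.0, n)  # weight (kg)
growth = rng.normal(0.9, 0.2, n)    # daily growth (kg/day)
X = np.column_stack([temp, weight, growth])

# Synthetic labels: elevated temperature and slow growth raise disease odds.
logit = 1.5 * (temp - 39.0) - 2.0 * (growth - 0.9) - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
# Probability of disease for a feverish, slow-growing animal:
print(model.predict_proba([[40.2, 78.0, 0.5]])[0, 1])
```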

The Future of Consent: The Coming Revolution in Privacy and Consumer Trust


Report by Ogilvy: “The future of consent will be determined by how we – as individuals, nations, and a global species – evolve our understanding of what counts as meaningful consent. For consumers and users, the greatest challenge lies in connecting consent to a mechanism of relevant, personal control over their data. For businesses and other organizations, the task will be to recast consent as a driver of positive economic outcomes, rather than an obstacle.

In the coming years of digital privacy innovation, regulation, and increasing market maturity, everyone will need to think more deeply about their relationship with consent. As an initial step, we’ve assembled this snapshot on the current and future state of (meaningful) consent: what it means, what the obstacles are, and which critical changes we need to embrace to evolve…(More)”.

Workforce ecosystems and AI


Report by David Kiron, Elizabeth J. Altman, and Christoph Riedl: “Companies increasingly rely on an extended workforce (e.g., contractors, gig workers, professional service firms, complementor organizations, and technologies such as algorithmic management and artificial intelligence) to achieve strategic goals and objectives. When we ask leaders to describe how they define their workforce today, they mention a diverse array of participants, beyond just full- and part-time employees, all contributing in various ways. Many of these leaders observe that their extended workforce now comprises 30-50% of their entire workforce. For example, Novartis has approximately 100,000 employees and counts more than 50,000 other workers as external contributors. Businesses are also increasingly using crowdsourcing platforms to engage external participants in the development of products and services. Managers are thinking about their workforce in terms of who contributes to outcomes, not just by workers’ employment arrangements.

Our ongoing research on workforce ecosystems demonstrates that managing work across organizational boundaries with groups of interdependent actors in a variety of employment relationships creates new opportunities and risks for both workers and businesses. These are not subtle shifts. We define a workforce ecosystem as:

A structure that encompasses actors, from within the organization and beyond, working to create value for an organization. Within the ecosystem, actors work toward individual and collective goals with interdependencies and complementarities among the participants.

The emergence of workforce ecosystems has implications for management theory, organizational behavior, social welfare, and policymakers. In particular, issues surrounding work and worker flexibility, equity, and data governance and transparency pose substantial opportunities for policymaking.

At the same time, artificial intelligence (AI)—which we define broadly to include machine learning and algorithmic management—is playing an increasingly large role within the corporate context. The widespread use of AI is already displacing workers through automation, augmenting human performance at work, and creating new job categories…(More)”.

The Surveillance Ad Model Is Toxic — Let’s Not Install Something Worse


Article by Elizabeth M. Renieris: “At this stage, law and policy makers, civil society, and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as the application of existing laws to challenge ad-targeting practices.

In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with the introduction of its App Tracking Transparency (ATT) tool, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. ATT effectively prevents apps from collecting a user’s Identifier for Advertisers, or IDFA, a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.

But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal…(More)”.
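
For readers unfamiliar with the mechanics, the tracking being phased out is simple to sketch. The hypothetical snippet below (illustrative only; the endpoint and cookie names are invented) shows how a third-party “pixel” embedded across many sites recognizes the same browser via its cookie, which is precisely the recognition that first parties with large walled gardens do not need a third-party cookie to achieve.

```python
# A minimal sketch of third-party pixel tracking. Hypothetical names;
# not any company's actual code. Run with `pip install flask`.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/pixel.gif")
def pixel():
    # The embedding page passes its own URL, so the tracker learns where
    # the user is browsing right now.
    visited_site = request.args.get("site", "unknown")
    # The same cookie comes back from every site embedding this pixel,
    # letting the tracker stitch visits into a cross-site profile.
    user_id = request.cookies.get("tracker_id") or str(uuid.uuid4())
    print(f"user {user_id} seen on {visited_site}")
    resp = make_response(b"GIF89a")  # placeholder 1x1 image payload
    # SameSite=None marks this as a cross-site (third-party) cookie;
    # blocking exactly this is what the cookie phaseouts target.
    resp.set_cookie("tracker_id", user_id, samesite="None", secure=True)
    return resp

if __name__ == "__main__":
    app.run(port=8000)
```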