Speaking in Tongues — Teaching Local Languages to Machines


Report by DIAL: “…Machines learn to talk to people by digesting digital content in languages people speak through a technique called Natural Language Processing (NLP). As things stand, only about 85 of the world’s approximately 7,500 languages are represented in the major NLP systems — and just 7 languages, with English being the most advanced, account for the majority of the world’s digital knowledge corpus. Fortunately, many initiatives are underway to fill this knowledge gap. My new mini-report with the Digital Impact Alliance (DIAL) highlights a few of them from Serbia, India, Estonia, and Africa.

The examples in the report are just a subset of the initiatives on the ground to make digital services accessible to people in their local languages. They are a cause for excitement and hope (tempered by realistic expectations). A few themes across the initiatives include:

  • Despite the excitement and enthusiasm, most of the programs above are still at a nascent stage — many may fail, and others will require investment and time to succeed. While countries such as India have initiated formal national NLP programs (still too early to assess), others, such as Serbia, have so far taken a more ad hoc approach.
  • Smaller countries like Estonia recognize the need for state intervention as the local population isn’t large enough to attract private sector investment. Countries will need to balance their local, cultural, and political interests against commercial realities as languages become digital or are digitally excluded.
  • Community engagement is an important component of almost all initiatives. India has set up a formal crowdsourcing program; other programs in Africa are experimenting with elements of participatory design and crowd curation.
  • While critics have accused the makers of ChatGPT and other models of paying contributors from the global south very poorly for their labeling and other content services, many initiatives in the south appear to be experimenting with payment models to incentivize crowdsourcing and sustain contributions from the ground (see the sketch after this list).
  • The engagement of local populations can ensure that NLP models learn appropriate cultural nuances, and better embody local social and ethical norms…(More)”.
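
As a purely illustrative aside, here is a minimal sketch of what a payment-incentivized crowdsourcing pipeline of the kind the report describes might look like. Every name, rate, and acceptance rule below is a hypothetical assumption for demonstration, not a detail of any program mentioned above.

```python
from dataclasses import dataclass, field

# Hypothetical micro-payment per accepted contribution (illustrative only).
PAYOUT_PER_ACCEPTED_SENTENCE = 0.02  # USD

@dataclass
class Contribution:
    contributor_id: str
    language: str          # e.g., an ISO 639-3 code such as "yor" (Yoruba)
    text: str
    accepted: bool = False

@dataclass
class CrowdsourcingCampaign:
    """Collects sentences in a low-resource language and tracks payouts."""
    language: str
    contributions: list = field(default_factory=list)

    def submit(self, contributor_id: str, text: str) -> None:
        self.contributions.append(Contribution(contributor_id, self.language, text))

    def review(self, minimum_words: int = 3) -> None:
        # Stand-in for real curation: accept anything above a length threshold.
        for c in self.contributions:
            c.accepted = len(c.text.split()) >= minimum_words

    def payouts(self) -> dict:
        """Total owed to each contributor for accepted sentences."""
        totals: dict = {}
        for c in self.contributions:
            if c.accepted:
                totals[c.contributor_id] = (
                    totals.get(c.contributor_id, 0.0) + PAYOUT_PER_ACCEPTED_SENTENCE
                )
        return totals

campaign = CrowdsourcingCampaign(language="yor")
campaign.submit("user-1", "Báwo ni o ṣe wà lónìí?")
campaign.review()
print(campaign.payouts())  # {'user-1': 0.02}
```

The design point of the sketch is that tying a transparent, per-contribution payout to an explicit acceptance step is one way to sustain contributions while keeping curation quality visible.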

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered “chatbot” that powers Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.

Harnessing Data Innovation for Migration Policy: A Handbook for Practitioners


Report by IOM: “The Practitioners’ Handbook provides first-hand insights into why and how non-traditional data sources can contribute to better understanding migration-related phenomena. The Handbook aims to (a) bridge the practical and technical aspects of using data innovations in migration statistics, (b) demonstrate the added value of using new data sources and innovative methodologies to analyse key migration topics that may be hard to fully grasp using traditional data sources, and (c) identify good practices in addressing issues of data access and collaboration with multiple stakeholders (including the private sector), ethical standards, and security and data protection issues…(More)” See also Big Data for Migration Alliance.

What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos and pet care, and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates, and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and, the ultimate criterion, profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals: emitting sounds to interact with them, giving them electric shocks (when a grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies, or catching and separating them.
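
To make this kind of monitoring loop concrete, the minimal sketch below (our own illustration, not any vendor's actual system) fits a model relating the monitored parameters to a disease flag and triggers a stand-in intervention. All data, features, and thresholds are fabricated for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-animal readings: [body temperature (°C), weight (kg), daily growth (kg)].
# Values are invented for illustration; a real system would stream these from sensors.
rng = np.random.default_rng(0)
healthy = np.column_stack([rng.normal(39.0, 0.3, 200),
                           rng.normal(80, 10, 200),
                           rng.normal(0.9, 0.1, 200)])
sick = np.column_stack([rng.normal(40.5, 0.4, 200),
                        rng.normal(70, 10, 200),
                        rng.normal(0.4, 0.15, 200)])
X = np.vstack([healthy, sick])
y = np.array([0] * 200 + [1] * 200)  # 1 = flagged as diseased

model = LogisticRegression().fit(X, y)

def check_animal(temp_c: float, weight_kg: float, growth_kg: float) -> None:
    """Score one animal and trigger a stand-in intervention if risk is high."""
    risk = model.predict_proba([[temp_c, weight_kg, growth_kg]])[0, 1]
    if risk > 0.8:
        print(f"risk={risk:.2f}: isolate animal and prescribe treatment")
    else:
        print(f"risk={risk:.2f}: no action")

check_animal(40.8, 68.0, 0.35)  # feverish, underweight, slow growth
check_animal(39.1, 82.0, 0.95)  # within normal ranges
```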

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.

You Can’t Regulate What You Don’t Understand


Article by Tim O’Reilly: “The world changed on November 30, 2022 as surely as it did on August 12, 1908 when the first Model T left the Ford assembly line. That was the date when OpenAI released ChatGPT, the day that AI emerged from research labs into an unsuspecting world. Within two months, ChatGPT had over a hundred million users—faster adoption than any technology in history.

The hand-wringing soon began…

All of these efforts reflect the general consensus that regulations should address issues like data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI’s own AI safety and responsibility guidelines cite those same goals, but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values? They write:

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

But whose human values? Those of the benevolent idealists that most AI critics aspire to be? Those of a public company bound to put shareholder value ahead of customers, suppliers, and society as a whole? Those of criminals or rogue states bent on causing harm to others? Those of someone well meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved. That is a crucial first step, and we should take it immediately. These systems are still very much under human control. For now, at least, they do what they are told, and when the results don’t match expectations, their training is quickly improved. What we need to know is what they are being told.

What should be disclosed? There is an important lesson for both companies and regulators in the rules by which corporations—which science-fiction writer Charlie Stross has memorably called “slow AIs”—are regulated. One way we hold companies accountable is by requiring them to share their financial results compliant with Generally Accepted Accounting Principles or the International Financial Reporting Standards. If every company had a different way of reporting its finances, it would be impossible to regulate them…(More)”
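
To illustrate what standardized disclosure might mean in practice, here is a speculative sketch of a machine-readable filing, loosely analogous to a financial statement. None of these field names come from an existing standard; they are assumptions for the sake of the example.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelDisclosure:
    """A hypothetical standardized disclosure record, loosely analogous to a
    financial filing. Field names are illustrative assumptions, not a real schema."""
    model_name: str
    operator: str
    training_data_sources: list      # provenance of the training corpus
    intended_uses: list              # what the operator says the model is for
    evaluation_metrics: dict         # reported scores on named benchmarks
    known_limitations: list          # documented failure modes and biases
    human_oversight: str             # how outputs are reviewed or corrected

disclosure = ModelDisclosure(
    model_name="example-model-1",
    operator="Example Corp",
    training_data_sources=["licensed news archive", "public web crawl"],
    intended_uses=["customer support drafting"],
    evaluation_metrics={"toxicity_rate": 0.012, "factuality_benchmark": 0.74},
    known_limitations=["hallucinates citations", "weaker in low-resource languages"],
    human_oversight="sampled human review of 5% of outputs",
)

# As with comparable financial statements, a shared schema makes filings
# machine-checkable and comparable across companies.
print(json.dumps(asdict(disclosure), indent=2))
```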

Future-proofing the city: A human rights-based approach to governing algorithmic, biometric and smart city technologies


Introduction to Special Issue by Alina Wernick, and Anna Artyushina: “While the GDPR and other EU laws seek to mitigate a range of potential harms associated with smart cities, the compliance with and enforceability of these regulations remain an issue. In addition, these proposed regulations do not sufficiently address the collective harms associated with the deployment of biometric technologies and artificial intelligence. Another relevant question is whether the initiatives put forward to secure fundamental human rights in the digital realm account for the issues brought on by the deployment of technologies in city spaces. In this special issue, we employ the smart city notion as a point of connection for interdisciplinary research on the human rights implications of the algorithmic, biometric and smart city technologies and the policy responses to them. The articles included in the special issue analyse the latest European regulations as well as soft law, and the policy frameworks that are currently at work in the regions where the GDPR does not apply…(More)”.

Think Bigger: How to Innovate


Book by Sheena Iyengar: “…answers a timeless question with enormous implications for problems of all kinds across the world: “How can I get my best ideas?”

Iyengar provides essential tools to spark creative thinking and help us make our most meaningful choices. She draws from recent advances in neuro- and cognitive sciences to give readers a set of practical steps for coming up with powerful new ideas. Think Bigger offers an innovative evidence-backed method for generating big ideas that Iyengar and her team of researchers developed and refined over the last decade.

For anyone looking to innovate, the black box of creativity is a mystery no longer. Think Bigger upends the myth that big ideas are reserved for a select few. By using this method as a guide to creative thinking, anybody can produce revolutionary ideas…(More)”.

Modernizing philanthropy for the 21st century


Essay by Stefaan G. Verhulst, Lisa T. Moretti, Hannah Chafetz and Alex Fischer: “…How can philanthropies move in a more deliberate yet responsible manner toward using data to advance their goals? The purpose of this article is to propose an overview of existing and potential qualitative and quantitative data innovations within the philanthropic sector. In what follows, we examine four areas where there is a need for innovation in how philanthropy works, and eight pathways for the responsible use of data innovations to address existing shortcomings.

Four areas for innovation

In order to identify potential data-led solutions, we need to begin by understanding current shortcomings. Through our research, we identified four areas within philanthropy that are ripe for data-led innovation:

  • First, there is a need for innovation in the identification of shared questions and overlapping priorities among communities, public service, and philanthropy. The philanthropic sector is well placed to enable a new combination of approaches, products, and processes while still enabling communities to prioritize the issues that matter most.
  • Second, there is a need to improve coordination and transparency across the sector. Even when shared priorities are identified, there often remains a large gap between the imperatives of building common agendas and the ability to act on those agendas in a coordinated and strategic way. New ways to collect and generate cross-sector shared intelligence are needed to better design funding strategies and make difficult trade-off choices.
  • Third, reliance on fixed-project-based funding often means that philanthropists must wait for impact reports to assess results. There is a need to enable iteration and adaptive experimentation to help foster a culture of greater flexibility, agility, learning, and continuous improvement.
  • Lastly, innovations for impact assessments and accountability could help philanthropies better understand how their funding and support have impacted the populations they intend to serve.

Needless to say, data alone cannot address all of these shortcomings. For true innovation, qualitative and quantitative data must be combined with a much wider range of human, institutional, and cultural change. Nonetheless, our research indicates that when used responsibly, data-driven methods and tools do offer pathways for success. We examine some of those pathways in the next section.

Eight pathways for data-driven innovations in philanthropy

The sources of data available to philanthropic organizations today are multifarious, enabled by advancements in digital technologies such as low-cost sensors, mobile devices, apps, wearables, and the growing number of objects connected to the Internet of Things. The ways in which this data can be deployed are similarly varied. Below, we examine eight pathways in particular for data-led innovation…(More)”.

Recalibrating assumptions on AI


Essay by Arthur Holland Michel: “Many assumptions about artificial intelligence (AI) have become entrenched despite the lack of evidence to support them. Basing policies on these assumptions is likely to increase the risk of negative impacts for certain demographic groups. These dominant assumptions include claims that AI is ‘intelligent’ and ‘ethical’, that more data means better AI, and that AI development is a ‘race’.

The risks of this approach to AI policymaking are often ignored, while the potential positive impacts of AI tend to be overblown. By illustrating how a more evidence-based, inclusive discourse can improve policy outcomes, this paper makes the case for recalibrating the conversation around AI policymaking…(More)”

The Real Opportunities for Empowering People through Behavioral Science


Essay by Michael Hallsworth: “…There’s much to be gained by broadening out from designing choice architecture with little input from those who use it. But I think we need to change the way we talk about the options available.

Let’s start by noting that attention has focused on three opportunities in particular: nudge plus, self-nudges, and boosts.

Nudge plus is where a prompt to encourage reflection is built into the design and delivery of a nudge (or occurs close to it). People cannot avoid being made aware of the nudge and its purpose, enabling them to decide whether they approve of it or not. While some standard nudges, like commitment devices, already contain an element of self-reflection, a nudge plus must include an “active trigger.”

A self-nudge is where someone designs a nudge to influence their own behavior. In other words, they “structure their own decision environments” to make an outcome they desire more likely. An example might be creating a reminder to store snacks in less obvious and accessible places after they are bought.

Boosts emerge from the perspective that many of the heuristics we use to navigate our lives are useful and can be taught. A boost is when someone is helped to develop a skill, based on behavioral science, that will allow them to exercise their own agency and achieve their goals. Boosts aim at building people’s competences to influence their own behavior, whereas nudges try to alter the surrounding context and leave such competences unchanged.

When these ideas are discussed, there is often an underlying sense of “we need to move away from nudging and towards these approaches.” But framing things this way neglects the crucial question of how empowerment actually happens.

Right now, there is often a simplistic division between disempowering nudges on one side and enabling nudge plus/self-nudges/boosts on the other. In fact, these labels disguise two real drivers of empowerment that cut across the categories. They are:

  1. How far the person performing the behavior is involved in shaping the initiative itself. They could be not involved at all, involved in co-designing the intervention, or initiating and driving the intervention themselves.
  2. The level and nature of any capacity created by the intervention. It may create none (i.e., have no cognitive or motivational effects), it may create awareness (i.e., the ability to reflect on what is happening), or it may build the ability to carry out an action (e.g., a skill).

The figure below shows how the different proposals map against these two drivers.


[Figure: nudge plus, self-nudges, and boosts mapped against the two drivers of empowerment]
Source: Hallsworth, M. (2023). A Manifesto for Applying Behavioral Science.

A major point this figure calls attention to is co-design, which uses creative methods “to engage citizens, stakeholders and officials in an iterative process to respond to shared problems.” In other words, the people affected by an issue or change are involved as participants, rather than subjects. This involvement is intended to create more effective, tailored, and appropriate interventions that respond to a broader range of evidence…(More)”.