Participatory Digital Futures: How digital transformation can be made good for all


Paper by Mark Findlay and Sharanya Shanmugam: “Digital transformation through the widespread use of Artificial Intelligence (AI)-assisted technology and big data is assumed to usher in socio-economic benefits. Notions of ‘digital readiness’ speak to the inevitability of a universalised digital transformation. But the common approach of exporting digital capacities across societies and markets—digital transformation is good for you all—is top-down and paternalist. It conjures the image of some common/average citizen or worker being able and willing to transform into a digitally competent economic unit. Such a top-down approach to digital transformation can ignore, and even underplay, important demographic differences across communities when it comes to related issues such as digital literacy, digital familiarity, digital readiness, access to technology, and consent for creating digital dependencies. These differences usually grow from structural vulnerabilities such as old age, low levels of education, and socio-economic vulnerabilities like poverty and restricted access to knowledge or technical opportunities. Above all, certain segments of a community, already disadvantaged or less able to manage change, could be further measurably disadvantaged by such a universal digital push.

In this article, through vignettes from the United Kingdom and Singapore’s experience, we highlight how digital transformation can be made more participatory for users affected by digital initiatives. In the process, we introduce the idea of Living Digital Transformation (LDT) as a more bottom-up and user-centric alternative that includes those from vulnerable communities, and therefore can improve the benefits from digital transformation for all…(More)”.

Speaking in Tongues — Teaching Local Languages to Machines


Report by DIAL: “…Machines learn to talk to people by digesting digital content in languages people speak through a technique called Natural Language Processing (NLP). As things stand, only about 85 of the world’s approximately 7500 languages are represented in the major NLPs — and just 7 languages, with English being the most advanced, comprise the majority of the world’s digital knowledge corpus. Fortunately, many initiatives are underway to fill this knowledge gap. My new mini-report with Digital Impact Alliance (DIAL) highlights a few of them from Serbia, India, Estonia, and Africa.
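
To make the coverage gap concrete, here is a minimal sketch (ours, not from the DIAL report) of how unevenly a widely used multilingual model treats different languages. The model choice and sample sentences are our own assumptions; counting sub-word pieces is only a crude proxy, but it makes visible which languages a model has actually "digested":

```python
# Illustrative sketch (not from the report): multilingual tokenizers tend to
# fragment under-represented languages into far more sub-word pieces than
# English, a rough proxy for how thinly a language appears in training data.
# Requires the `transformers` library; model and sentences are our choices.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

samples = {
    "English": "Good morning, how are you today?",
    "Serbian": "Dobro jutro, kako ste danas?",
    "Yoruba": "E kaaro, bawo ni o se wa loni?",
}

for language, sentence in samples.items():
    pieces = tokenizer.tokenize(sentence)
    print(f"{language}: {len(pieces)} sub-word pieces")
```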

The examples in the report are just a subset of initiatives on the ground to make digital services accessible to people in their local languages. They are a cause for excitement and hope (tempered by realistic expectations). A few themes across the initiatives include –

  • Despite the excitement and enthusiasm, most of the programs above are still at a very nascent stage — many may fail, and others will require investment and time to succeed. While countries such as India have initiated formal national NLP programs (one that is too early to assess), others such as Serbia have so far taken a more ad hoc approach.
  • Smaller countries like Estonia recognize the need for state intervention as the local population isn’t large enough to attract private sector investment. Countries will need to balance their local, cultural, and political interests against commercial realities as languages become digital or are digitally excluded.
  • Community engagement is an important component of almost all initiatives. India has set up a formal crowdsourcing program; other programs in Africa are experimenting with elements of participatory design and crowd curation.
  • While critics have accused ChatGPT and others of paying contributors from the global south very poorly for their labeling and other content services, it appears that many initiatives in the south are beginning to dabble with payment models to incentivize crowdsourcing and sustain contributions from the ground.
  • The engagement of local populations can ensure that NLP models learn appropriate cultural nuances, and better embody local social and ethical norms…(More)”.

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered “chatbot” that powers Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.

Harnessing Data Innovation for Migration Policy: A Handbook for Practitioners


Report by IOM: “The Practitioners’ Handbook provides first-hand insights into why and how non-traditional data sources can contribute to better understanding migration-related phenomena. The Handbook aims to (a) bridge the practical and technical aspects of using data innovations in migration statistics, (b) demonstrate the added value of using new data sources and innovative methodologies to analyse key migration topics that may be hard to fully grasp using traditional data sources, and (c) identify good practices in addressing issues of data access and collaboration with multiple stakeholders (including the private sector), ethical standards, and security and data protection issues…(More)” See also Big Data for Migration Alliance.

What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, pet care and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals: emitting sounds to interact with them, giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies, or catching and separating them.
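
As a purely hypothetical sketch of the kind of model described above (ours, not the authors'; every feature name, threshold, and data point is invented), a few lines of Python show how monitored physical parameters might be mapped to a risk prediction:

```python
# Hypothetical sketch of a monitoring model of the sort the article describes.
# All features and data below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
# Columns: body temperature (deg C), weight (kg), daily growth rate (kg/day)
X = rng.normal(loc=[39.0, 2.5, 0.06], scale=[0.5, 0.4, 0.02], size=(500, 3))
# Invented labelling rule: fever combined with slow growth marks high risk
y = ((X[:, 0] > 39.5) & (X[:, 2] < 0.05)).astype(int)

model = LogisticRegression().fit(X, y)

new_reading = np.array([[40.1, 2.2, 0.03]])  # one animal's latest sensor data
print("Predicted disease risk:", model.predict_proba(new_reading)[0, 1])
```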

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.

You Can’t Regulate What You Don’t Understand


Article by Tim O’Reilly: “The world changed on November 30, 2022 as surely as it did on August 12, 1908 when the first Model T left the Ford assembly line. That was the date when OpenAI released ChatGPT, the day that AI emerged from research labs into an unsuspecting world. Within two months, ChatGPT had over a hundred million users—faster adoption than any technology in history.

The hand wringing soon began…

All of these efforts reflect the general consensus that regulations should address issues like data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI’s own AI safety and responsibility guidelines cite those same goals, but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values? They write:

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

But whose human values? Those of the benevolent idealists that most AI critics aspire to be? Those of a public company bound to put shareholder value ahead of customers, suppliers, and society as a whole? Those of criminals or rogue states bent on causing harm to others? Those of someone well meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved. That is a crucial first step, and we should take it immediately. These systems are still very much under human control. For now, at least, they do what they are told, and when the results don’t match expectations, their training is quickly improved. What we need to know is what they are being told.

What should be disclosed? There is an important lesson for both companies and regulators in the rules by which corporations—which science-fiction writer Charlie Stross has memorably called “slow AIs”—are regulated. One way we hold companies accountable is by requiring them to share their financial results compliant with Generally Accepted Accounting Principles or the International Financial Reporting Standards. If every company had a different way of reporting its finances, it would be impossible to regulate them…(More)”
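
To make the analogy concrete, a standardized AI disclosure could be imagined as a machine-readable record with agreed-upon fields, much as a financial filing has agreed-upon line items. The sketch below is ours, not O'Reilly's, and every field name and value in it is hypothetical:

```python
# A loose illustration of the GAAP analogy: a standardized, machine-readable
# disclosure record. Every field name and value here is hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDisclosure:
    model_name: str
    operator: str
    stated_objective: str          # what the system is "being told" to do
    training_data_sources: list = field(default_factory=list)
    reported_metrics: dict = field(default_factory=dict)
    last_audit_date: str = ""

report = ModelDisclosure(
    model_name="example-assistant-v1",
    operator="Example Corp",
    stated_objective="maximize helpfulness; minimize harmful content",
    training_data_sources=["licensed corpora", "public web crawl"],
    reported_metrics={"harmful_output_rate": 0.002, "refusal_rate": 0.031},
    last_audit_date="2023-04-01",
)
print(json.dumps(asdict(report), indent=2))
```

The point is not these particular fields but that, as with accounting standards, regulators and companies would converge on one shared reporting schema.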

Future-proofing the city: A human rights-based approach to governing algorithmic, biometric and smart city technologies


Introduction to Special Issue by Alina Wernick and Anna Artyushina: “While the GDPR and other EU laws seek to mitigate a range of potential harms associated with smart cities, the compliance with and enforceability of these regulations remain an issue. In addition, these proposed regulations do not sufficiently address the collective harms associated with the deployment of biometric technologies and artificial intelligence. Another relevant question is whether the initiatives put forward to secure fundamental human rights in the digital realm account for the issues brought on by the deployment of technologies in city spaces. In this special issue, we employ the smart city notion as a point of connection for interdisciplinary research on the human rights implications of the algorithmic, biometric and smart city technologies and the policy responses to them. The articles included in the special issue analyse the latest European regulations as well as soft law, and the policy frameworks that are currently at work in the regions where the GDPR does not apply…(More)”.

Think Bigger: How to Innovate


Book by Sheena Iyengar: “…answers a timeless question with enormous implications for problems of all kinds across the world: “How can I get my best ideas?”

Iyengar provides essential tools to spark creative thinking and help us make our most meaningful choices. She draws from recent advances in neuro- and cognitive sciences to give readers a set of practical steps for coming up with powerful new ideas. Think Bigger offers an innovative evidence-backed method for generating big ideas that Iyengar and her team of researchers developed and refined over the last decade.

For anyone looking to innovate, the black box of creativity is a mystery no longer. Think Bigger upends the myth that big ideas are reserved for a select few. By using this method as a guide to creative thinking, anybody can produce revolutionary ideas…(More)”.

Modernizing philanthropy for the 21st century


Essay by Stefaan G. Verhulst, Lisa T. Moretti, Hannah Chafetz and Alex Fischer: “…How can philanthropies move in a more deliberate yet responsible manner toward using data to advance their goals? The purpose of this article is to propose an overview of existing and potential qualitative and quantitative data innovations within the philanthropic sector. In what follows, we examine four areas where there is a need for innovation in how philanthropy works, and eight pathways for the responsible use of data innovations to address existing shortcomings.

Four areas for innovation

In order to identify potential data-led solutions, we need to begin by understanding current shortcomings. Through our research, we identified four areas within philanthropy that are ripe for data-led innovation:

  • First, there is a need for innovation in the identification of shared questions and overlapping priorities among communities, public service, and philanthropy. The philanthropic sector is well placed to enable a new combination of approaches, products, and processes while still enabling communities to prioritize the issues that matter most.
  • Second, there is a need to improve coordination and transparency across the sector. Even when shared priorities are identified, there often remains a large gap between the imperatives of building common agendas and the ability to act on those agendas in a coordinated and strategic way. New ways to collect and generate cross-sector shared intelligence are needed to better design funding strategies and make difficult trade-off choices.
  • Third, reliance on fixed-project-based funding often means that philanthropists must wait for impact reports to assess results. There is a need to enable iteration and adaptive experimentation to help foster a culture of greater flexibility, agility, learning, and continuous improvement.
  • Lastly, innovations for impact assessments and accountability could help philanthropies better understand how their funding and support have impacted the populations they intend to serve.

Needless to say, data alone cannot address all of these shortcomings. For true innovation, qualitative and quantitative data must be combined with a much wider range of human, institutional, and cultural change. Nonetheless, our research indicates that when used responsibly, data-driven methods and tools do offer pathways for success. We examine some of those pathways in the next section.

Eight pathways for data-driven innovations in philanthropy

The sources of data available today to philanthropic organizations are multifarious, enabled by advancements in digital technologies such as low-cost sensors, mobile devices, apps, wearables, and the increasing number of objects connected to the Internet of Things. The ways in which this data can be deployed are similarly varied. Below, we examine eight pathways in particular for data-led innovation…(More)”.

Recalibrating assumptions on AI


Essay by Arthur Holland Michel: “Many assumptions about artificial intelligence (AI) have become entrenched despite the lack of evidence to support them. Basing policies on these assumptions is likely to increase the risk of negative impacts for certain demographic groups. These dominant assumptions include claims that AI is ‘intelligent’ and ‘ethical’, that more data means better AI, and that AI development is a ‘race’.

The risks of this approach to AI policymaking are often ignored, while the potential positive impacts of AI tend to be overblown. By illustrating how a more evidence-based, inclusive discourse can improve policy outcomes, this paper makes the case for recalibrating the conversation around AI policymaking…(More)”