Africa fell in love with crypto. Now, it’s complicated


Article by Martin K.N Siele: “Chiamaka, a former product manager at a Nigerian cryptocurrency startup, has sworn off digital currencies. The 22-year-old has weathered a layoff and lost savings worth 4,603,500 naira ($9,900) after the collapse of FTX in November 2022. She now works for a corporate finance company in Lagos, earning a salary that is 45% lower than at her previous job.

“I used to be bullish on crypto because I believed it could liberate Africans financially,” Chiamaka, who asked to be identified by a pseudonym as she was concerned about breaching her contract with her current employer, told Rest of World. “Instead, it has managed to do the opposite so far … at least to me and a few of my friends.”

Chiamaka is among the tens of millions of Africans who bought into the cryptocurrency frenzy over the last few years. According to one estimate in mid-2022, around 53 million Africans owned crypto — 16.5% of crypto users globally. Nigeria led with over 22 million users, ranking fourth globally. Blockchain startups and businesses on the continent raised $474 million in 2022, a 429% increase from the previous year, according to the African Blockchain Report. Young African creatives also became major proponents of non-fungible tokens (NFTs), taking inspiration from pop culture and the continent’s history. Several decentralized autonomous organizations (DAOs), touted as the next big thing, emerged across Africa…(More)”.

The Citizens’ Panel proposes 23 recommendations for fair and human-centric virtual worlds in the EU


European Commission: “From 21 to 23 April, the Commission hosted the closing session of the European Citizens’ Panel on Virtual Worlds in Brussels, which allowed citizens to make recommendations on values and actions to create attractive and fair European virtual worlds.

These recommendations will support the Commission’s work on virtual worlds and the future of the Internet.

After three weekends of deliberations, the panel, composed of around 150 citizens randomly chosen to represent the diversity of the European population, made 23 recommendations on citizens’ expectations for the future, principles and actions to ensure that virtual worlds in the EU are fair and citizen-friendly. These recommendations are structured around eight values and principles: freedom of choice, sustainability, human-centred, health, education, safety and security, transparency and integration.

This new generation of Citizens’ Panels is a key element of the Conference on the Future of Europe, which aims to encourage citizens’ participation in the European Commission’s policy-making process in certain key areas.

The Commission is currently preparing a new initiative on virtual worlds, which will outline Europe’s vision, in line with European digital rights and principles. The upcoming initiative will focus on how to address societal challenges, foster innovation for businesses and pave the way for a transition to Web 4.0.

In addition to this Citizens’ Panel, the Commission has launched a call for input to allow citizens and stakeholders to share their thoughts on the topic. Contributions can be made until 3 May…(More)”.

Speaking in Tongues — Teaching Local Languages to Machines


Report by DIAL: “…Machines learn to talk to people by digesting digital content in the languages people speak, through a technique called Natural Language Processing (NLP). As things stand, only about 85 of the world’s approximately 7,500 languages are represented in major NLP systems — and just 7 languages, with English being the most advanced, comprise the majority of the world’s digital knowledge corpus. Fortunately, many initiatives are underway to fill this knowledge gap. My new mini-report with Digital Impact Alliance (DIAL) highlights a few of them from Serbia, India, Estonia, and Africa.
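As a concrete illustration of what teaching a local language to machines looks like in practice, here is a minimal sketch (not from the report) of translating a sentence with an open multilingual model. It assumes the Hugging Face transformers library and Meta's NLLB-200 checkpoint are available; the language codes shown are illustrative and should be verified against the model card.

```python
# Minimal sketch: translating into a lower-resourced language with an open
# multilingual model. Assumes `pip install transformers torch` and that the
# NLLB-200 checkpoint can be downloaded; language codes follow NLLB's
# convention and are illustrative (check the model card before relying on them).
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",  # many-to-many multilingual model
    src_lang="eng_Latn",  # source: English, Latin script
    tgt_lang="swh_Latn",  # target: Swahili, Latin script (illustrative)
)

result = translator("Where is the nearest health clinic?", max_length=64)
print(result[0]["translation_text"])
```

Coverage, not code, is the hard part: a model like this supports a couple of hundred languages at best, which is precisely the gap the initiatives in the report are trying to close.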

The examples in the report are just a subset of initiatives on the ground to make digital services accessible to people in their local languages. They are a cause for excitement and hope (tempered by realistic expectations). A few themes across the initiatives include –

  • Despite the excitement and enthusiasm, most of the programs above are still at a very nascent stage — many may fail, and others will require investment and time to succeed. While countries such as India have initiated formal national NLP programs (one that is too early to assess), others such as Serbia have so far taken a more ad hoc approach.
  • Smaller countries like Estonia recognize the need for state intervention as the local population isn’t large enough to attract private sector investment. Countries will need to balance their local, cultural, and political interests against commercial realities as languages become digital or are digitally excluded.
  • Community engagement is an important component of almost all initiatives. India has set up a formal crowdsourcing program; other programs in Africa are experimenting with elements of participatory design and crowd curation.
  • While critics have accused ChatGPT and others of paying contributors from the global south very poorly for their labeling and other content services, it appears that many initiatives in the south are beginning to dabble with payment models to incentivize crowdsourcing and sustain contributions from the ground.
  • The engagement of local populations can ensure that NLP models learn appropriate cultural nuances, and better embody local social and ethical norms…(More)”.

AI translation is jeopardizing Afghan asylum claims


Article by Andrew Deck: “In 2020, Uma Mirkhail got a firsthand demonstration of how damaging a bad translation can be.

A crisis translator specializing in Afghan languages, Mirkhail was working with a Pashto-speaking refugee who had fled Afghanistan. A U.S. court had denied the refugee’s asylum bid because her written application didn’t match the story told in the initial interviews.

In the interviews, the refugee had first maintained that she’d made it through one particular event alone, but the written statement seemed to reference other people with her at the time — a discrepancy large enough for a judge to reject her asylum claim.

After Mirkhail went over the documents, she saw what had gone wrong: An automated translation tool had swapped the “I” pronouns in the woman’s statement to “we.”

Mirkhail works with Respond Crisis Translation, a coalition of over 2,500 translators that provides interpretation and translation services for migrants and asylum seekers around the world. She told Rest of World this kind of small mistake can be life-changing for a refugee. In the wake of the Taliban’s return to power in Afghanistan, there is an urgent demand for crisis translators working in languages such as Pashto and Dari. Working alongside refugees, these translators can help clients navigate complex immigration systems, including drafting immigration forms such as asylum applications. But a new generation of machine translation tools is changing the landscape of this field — and adding a new set of risks for refugees…(More)”.

The Coming Age of AI-Powered Propaganda


Essay by Josh A. Goldstein and Girish Sastry: “In the seven years since Russian operatives interfered in the 2016 U.S. presidential election, in part by posing as Americans in thousands of fake social media accounts, another technology with the potential to accelerate the spread of propaganda has taken center stage: artificial intelligence, or AI. Much of the concern has focused on the risks of audio and visual “deepfakes,” which use AI to invent images or events that did not actually occur. But another AI capability is just as worrisome. Researchers have warned for years that generative AI systems trained to produce original language—“language models,” for short—could be used by U.S. adversaries to mount influence operations. And now, these models appear to be on the cusp of enabling users to generate a near limitless supply of original text with limited human effort. This could improve the ability of propagandists to persuade unwitting voters, overwhelm online information environments, and personalize phishing emails. The danger is twofold: not only could language models sway beliefs; they could also corrode public trust in the information people rely on to form judgments and make decisions.

The progress of generative AI research has outpaced expectations. Last year, language models were used to generate functional proteins, beat human players in strategy games requiring dialogue, and create online assistants. Conversational language models have come into wide use almost overnight: more than 100 million people used OpenAI’s ChatGPT program in the first two months after it was launched, in December 2022, and millions more have likely used the AI tools that Google and Microsoft introduced soon thereafter. As a result, risks that seemed theoretical only a few years ago now appear increasingly realistic. For example, the AI-powered “chatbot” that powers Microsoft’s Bing search engine has shown itself to be capable of attempting to manipulate users—and even threatening them.

As generative AI tools sweep the world, it is hard to imagine that propagandists will not make use of them to lie and mislead…(More)”.

Harnessing Data Innovation for Migration Policy: A Handbook for Practitioners


Report by IOM: “The Practitioners’ Handbook provides first-hand insights into why and how non-traditional data sources can contribute to better understanding migration-related phenomena. The Handbook aims to (a) bridge the practical and technical aspects of using data innovations in migration statistics, (b) demonstrate the added value of using new data sources and innovative methodologies to analyse key migration topics that may be hard to fully grasp using traditional data sources, and (c) identify good practices in addressing issues of data access and collaboration with multiple stakeholders (including the private sector), ethical standards, and security and data protection issues…(More)” See also Big Data for Migration Alliance.

What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, pet care and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals, emitting sounds to interact with them — giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies or catching and separating them.
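To make the kind of system described above more concrete, the following is a rough sketch (not taken from the article) of a model relating per-animal sensor readings to a disease-risk label. It assumes scikit-learn and NumPy are installed; the features, synthetic data and labelling rule are entirely hypothetical.

```python
# Rough, hypothetical sketch of a livestock health-risk classifier of the kind
# the article describes. The sensor features, synthetic data and labelling
# rule are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n = 1000

# Hypothetical per-animal readings: body temperature (deg C), weight (kg),
# daily weight gain (kg/day), feed intake (kg/day).
X = np.column_stack([
    rng.normal(39.0, 0.5, n),   # body temperature
    rng.normal(80.0, 10.0, n),  # weight
    rng.normal(0.8, 0.2, n),    # daily weight gain
    rng.normal(2.5, 0.4, n),    # feed intake
])
# Synthetic "sick" label: elevated temperature combined with depressed feed intake.
y = ((X[:, 0] > 39.5) & (X[:, 3] < 2.3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# In a deployed system, predicted risk would drive the automated interventions
# the article mentions: adjusting feed, flagging animals for treatment, and so on.
print("held-out accuracy:", model.score(X_test, y_test))
```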

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.

Including the underrepresented


Paper by FIDE: “Deliberative democracy is based on the premise that all voices matter and that we can equally participate in decision-making. However, structural inequalities might prevent certain groups from being recruited for deliberation, skewing the process towards the socially privileged. Those structural inequalities are also present in the deliberation room, which can lead to unconscious (or conscious) biases that hinder certain voices while amplifying others. This causes particular perspectives to influence decision-making unequally.

This paper presents different methods and strategies applied in previous processes to increase the inclusion of underrepresented groups. We distinguish strategies for the two critical phases of the deliberative process: recruitment and deliberation…(More)”.

The Surveillance Ad Model Is Toxic — Let’s Not Install Something Worse


Article by Elizabeth M. Renieris: “At this stage, law and policy makers, civil society and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as the application of existing laws to challenge ad-targeting practices.

In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with the introduction of its AppTrackingTransparency (ATT) tool, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. The ATT effectively prevents apps from collecting a user’s Identifier for Advertisers, or IDFA, which is a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.

But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal…(More)”.

Innovating Democracy? The Means and Ends of Citizen Participation in Latin America


Book by Thamy Pogrebinschi: “Since democratization, Latin America has experienced a surge in new forms of citizen participation. Yet there is still little comparative knowledge on these so-called democratic innovations. This Element seeks to fill this gap. Drawing on a new dataset with 3,744 cases from 18 countries between 1990 and 2020, it presents the first large-N cross-country study of democratic innovations to date. It also introduces a typology of twenty kinds of democratic innovations, which are based on four means of participation, namely deliberation, citizen representation, digital engagement, and direct voting. Adopting a pragmatist, problem-driven approach, this Element claims that democratic innovations seek to enhance democracy by addressing public problems through combinations of those four means of participation in pursuit of one or more of five ends of innovations, namely accountability, responsiveness, rule of law, social equality, and political inclusion…(More)”.