The AI project pushing local languages to replace French in Mali’s schools


Article by Annie Risemberg and Damilare Dosunmu: “For the past six months, Alou Dembele, a 27-year-old engineer and teacher, has spent his afternoons reading storybooks with children in the courtyard of a community school in Mali’s capital city, Bamako. The books are written in Bambara — Mali’s most widely spoken language — and include colorful pictures and stories based on local culture. Dembele has over 100 Bambara books to pick from — an unimaginable educational resource just a year ago.

From 1960 to 2023, French was Mali’s official language. But in June last year, the military government dropped it in favor of 13 local languages, creating a desperate need for new educational materials.

Artificial intelligence came to the rescue: RobotsMali, a government-backed initiative, used tools like ChatGPT, Google Translate, and the free-to-use image-maker Playground to create a pool of 107 books in Bambara in less than a year. Volunteer teachers, like Dembele, distribute them through after-school classes. Within a year, the books have reached over 300 elementary school kids, according to RobotsMali’s co-founder, Michael Leventhal. They are not only helping bridge the gap created after French was dropped but could also be effective in helping children learn better, experts told Rest of World…(More)”.

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?


Paper by Alice Xiang: “Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust


Article by David Gilbert: “The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”

The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”

Arya is not the only Gab chatbot to disseminate these beliefs. Unsurprisingly, when the Adolf Hitler chatbot was asked about the Holocaust, it denied the existence of the genocide, labeling it a “propaganda campaign to demonize the German people” and to “control and suppress the truth”…(More)”.

How Big Tech let down Navalny


Article by Ellery Roberts Biddle: “As if the world needed another reminder of the brutality of Vladimir Putin’s Russia, last Friday we learned of the untimely death of Alexei Navalny. I don’t know if he ever used the term, but Navalny was what Chinese bloggers might have called a true “netizen” — a person who used the internet to live out democratic values and systems that didn’t exist in their country.

Navalny’s work with the Anti-Corruption Foundation reached millions using major platforms like YouTube and LiveJournal. But they built plenty of their own technology too. One of their most famous innovations was “Smart Voting,” a system that could estimate which opposition candidates were most likely to beat out the ruling party in a given election. The strategy wasn’t to support a specific opposition party or candidate — it was simply to unseat members of the ruling party, United Russia. In regional races in 2020, it was credited with causing United Russia to lose its majority in state legislatures in Novosibirsk, Tambov and Tomsk.

The Smart Voting system was pretty simple — just before casting a ballot, any voter could check the website or the app to decide where to throw their support. But on the eve of national parliamentary elections in September 2021, Smart Voting suddenly vanished from the app stores for both Google and Apple. 

After a Moscow court banned Navalny’s organization for being “extremist,” Russia’s internet regulator demanded that both Apple and Google remove Smart Voting from their app stores. The companies bowed to the Kremlin and complied. YouTube blocked select Navalny videos in Russia and Google, its parent company, even blocked some public Google Docs that the Navalny team published to promote names of alternative candidates in the election. 

We will never know whether or not Navalny’s innovative use of technology to stand up to the dictator would have worked. But Silicon Valley’s decision to side with Putin was an important part of why Navalny’s plan failed…(More)”.

The US Is Jeopardizing the Open Internet


Article by Natalie Dunleavy Campbell & Stan Adams: “Last October, the United States Trade Representative (USTR) abandoned its longstanding demand for World Trade Organization provisions to protect cross-border data flows, prevent forced data localization, safeguard source codes, and prohibit countries from discriminating against digital products based on nationality. It was a shocking shift: one that jeopardizes the very survival of the open internet, with all the knowledge-sharing, global collaboration, and cross-border commerce that it enables.

The USTR says that the change was necessary because of a mistaken belief that trade provisions could hinder the ability of US Congress to respond to calls for regulation of Big Tech firms and artificial intelligence. But trade agreements already include exceptions for legitimate public-policy concerns, and Congress itself has produced research showing that trade deals cannot impede its policy aspirations. Simply put, the US – as with other countries involved in WTO deals – can regulate its digital sector without abandoning its critical role as a champion of the open internet.

The potential consequences of America’s policy shift are as far-reaching as they are dangerous. Fear of damaging trade ties with the US has long deterred other actors from imposing national borders on the internet. Now, those who have heard the siren song of supposed “digital sovereignty” as a means to ensure their laws are obeyed in the digital realm have less reason to resist it. The more digital walls come up, the less the walled-off portions resemble the internet.

Several countries are already trying to replicate China’s heavy-handed approach to data governance. Rwanda’s data-protection law, for instance, forces companies to store data within its borders unless otherwise permitted by its cybersecurity regulator – making personal data vulnerable to authorities known to use data from private messages to prosecute dissidents. At the same time, a growing number of democratic countries are considering regulations that, without strong safeguards for cross-border data flows, could have a similar effect of disrupting access to a truly open internet…(More)”.

Data as a catalyst for philanthropy


Article by Stefaan Verhulst: “…In what follows, we offer five thoughts on how to advance Data Driven Philanthropy. These are operational strategies, specific steps that philanthropic organisations can take in order to harness the potential of data for the public good. At its broadest level, then, this article is about data stewardship in the 21st century. We seek to define how philanthropic organisations can be responsible custodians of data assets, both theirs and those of society at large. Fulfilling this role of data stewardship is a critical mission for the philanthropic sector and one of the most important roles it can play in helping to ensure that our ongoing process of digital transformation is more fair, inclusive, and aligned with the broader public interest…(More)”.

How tracking animal movement may save the planet


Article by Matthew Ponsford: “Researchers have been dreaming of an Internet of Animals. They’re getting closer to monitoring 100,000 creatures — and revealing hidden facets of our shared world…

There was something strange about the way the sharks were moving between the islands of the Bahamas.

Tiger sharks tend to hug the shoreline, explains marine biologist Austin Gallagher, but when he began tagging the 1,000-pound animals with satellite transmitters in 2016, he discovered that these predators turned away from it, toward two ancient underwater hills made of sand and coral fragments that stretch out 300 miles toward Cuba. They were spending a lot of time “crisscrossing, making highly tortuous, convoluted movements” to be near them, Gallagher says. 

It wasn’t immediately clear what attracted sharks to the area: while satellite images clearly showed the subsea terrain, they didn’t pick up anything out of the ordinary. It was only when Gallagher and his colleagues attached 360-degree cameras to the animals that they were able to confirm what they were so drawn to: vast, previously unseen seagrass meadows—a biodiverse habitat that offered a smorgasbord of prey.   

The discovery did more than solve a minor mystery of animal behavior. Using the data they gathered from the sharks, the researchers were able to map an expanse of seagrass stretching across 93,000 square kilometers of Caribbean seabed—extending the total known global seagrass coverage by more than 40%, according to a study Gallagher’s team published in 2022. This revelation could have huge implications for efforts to protect threatened marine ecosystems—seagrass meadows are a nursery for one-fifth of key fish stocks and habitats for endangered marine species—and also for all of us above the waves, as seagrasses can capture carbon up to 35 times faster than tropical rainforests. 

Animals have long been able to offer unique insights about the natural world around us, acting as organic sensors picking up phenomena that remain invisible to humans. More than 100 years ago, leeches signaled storms ahead by slithering out of the water; canaries warned of looming catastrophe in coal mines until the 1980s; and mollusks that close when exposed to toxic substances are still used to trigger alarms in municipal water systems in Minneapolis and Poland…(More)”.

Defending democracy: The threat to the public sphere from social media


Book Review by Mark Hannam: “‘Habermas is a blockhead. It is simply impossible to tell what kind of damage he is still going to cause in the future’, wrote Karl Popper in 1969. The following year he added: ‘Most of what he says seems to me trivial; the rest seems to me mistaken’. Five decades later these Popperian conjectures have been roundly refuted. Now in his mid-nineties, Jürgen Habermas is one of the pre-eminent philosophers and public intellectuals of our time. In Germany his generation enjoyed the mercy of being born too late. In 2004, in a speech given on receipt of the Kyoto prize in arts and philosophy, he observed that “we did not have to answer for choosing the wrong side and for political errors and their dire consequences”. He came to maturity in a society that he judged complacent and insufficiently distanced from its recent past. This experience sets the context for his academic work and political interventions.

Polity has recently published two new books by Habermas, both translated by Ciaran Cronin, providing English readers access to the latest iterations of his distinctive themes and methods. He defends a capacious concept of human reason, a collaborative learning process that operates through discussions in which participants appeal only to the force of the better argument. Different kinds of discussion – about scientific facts, moral norms or aesthetic judgements – employ different standards of justification, so what counts as a valid reason depends on context, but all progress, regardless of the field, relies on our conversations following the path along which reason leads us. Habermas’s principal claim is that human reason, appropriately deployed, retains its liberating potential for the species.

His first book, The Structural Transformation of the Public Sphere (1962), traced the emergence in the eighteenth century of the public sphere. This was a functionally distinct social space, located between the privacy of civil society and the formal offices of the modern state, where citizens could engage in processes of democratic deliberation. Habermas drew attention to a range of contemporary phenomena, including the organization of opinion by political parties and the development of mass media funded by advertising, that have disrupted the possibility of widespread, well-informed political debate. Modern democracy, he argued, was increasingly characterized by the technocratic organization of interests, rather than by the open discussion of principles and values…(More)”.

Are Evidence-Based Medicine and Public Health Incompatible?


Essay by Michael Schulson: “It’s a familiar pandemic story: In September 2020, Angela McLean and John Edmunds found themselves sitting in the same Zoom meeting, listening to a discussion they didn’t like.

At some point during the meeting, McLean — professor of mathematical biology at the University of Oxford, dame commander of the Order of the British Empire, fellow of the Royal Society of London, and then-chief scientific adviser to the United Kingdom’s Ministry of Defense — sent Edmunds a message on WhatsApp.

“Who is this fuckwitt?” she asked.

The message was evidently referring to Carl Heneghan, director of the Center for Evidence-Based Medicine at Oxford. He was on Zoom that day, along with McLean and Edmunds and two other experts, to advise the British prime minister on the Covid-19 pandemic.

Their disagreement — recently made public as part of a British government inquiry into the Covid-19 response — is one small chapter in a long-running clash between two schools of thought within the world of health care.

McLean and Edmunds are experts in infectious disease modeling; they build elaborate simulations of pandemics, which they use to predict how infections will spread and how best to slow them down. Often, during the Covid-19 pandemic, such models were used alongside other forms of evidence to urge more restrictions to slow the spread of the disease. Heneghan, meanwhile, is a prominent figure in the world of evidence-based medicine, or EBM. The movement aims to help doctors draw on the best available evidence when making decisions and advising patients. Over the past 30 years, EBM has transformed the practice of medicine worldwide.

Whether it can transform the practice of public health — which focuses not on individuals, but on keeping the broader community healthy — is a thornier question…(More)”.