The AI project pushing local languages to replace French in Mali’s schools


Article by Annie Risemberg and Damilare Dosunmu: “For the past six months, Alou Dembele, a 27-year-old engineer and teacher, has spent his afternoons reading storybooks with children in the courtyard of a community school in Mali’s capital city, Bamako. The books are written in Bambara — Mali’s most widely spoken language — and include colorful pictures and stories based on local culture. Dembele has over 100 Bambara books to pick from — an unimaginable educational resource just a year ago.

From 1960 to 2023, French was Mali’s official language. But in June last year, the military government dropped it in favor of 13 local languages, creating a desperate need for new educational materials.

Artificial intelligence came to the rescue: RobotsMali, a government-backed initiative, used tools like ChatGPT, Google Translate, and free-to-use image-maker Playground to create a pool of 107 books in Bambara in less than a year. Volunteer teachers, like Dembele, distribute them through after-school classes. Within a year, the books have reached over 300 elementary school kids, according to RobotsMali’s co-founder, Michael Leventhal. They are not only helping bridge the gap created after French was dropped but could also be effective in helping children learn better, experts told Rest of World…(More)”.

The Importance of Using Proper Research Citations to Encourage Trustworthy News Reporting


Article by Andy Tattersall: “…Understanding the often mysterious processes by which research is picked up and used across different sections of the media is therefore important. To do this, we looked at a sample of research with at least one author from the University of Sheffield that had been cited in either national or local media. We obtained the data from Altmetric.com to explore whether each news story included supporting information linking readers to the research and those behind it. These were links to any of the authors, their institution, the journal, or the research funder. We also investigated how much of this research was available via open access.

The contrasts between national and local samples were notable. National news websites were more likely to include a link to the research paper underpinning the news story. National research coverage stories were also more organic. They were more likely to be original texts written by journalists who are credited as authors. This is reflected in more idiosyncratic citation practices. Guardian writers, such as Henry Nicholls and George Monbiot, regularly provided a proper academic citation to the research at the end of their articles. This should be standard practice, but it does require those writing press releases to include formatted citations with a link as a basic first step. 

Local news coverage followed a different pattern, which is likely due to the use of news agencies to provide stories. Much local news coverage relies on copying and pasting subscription content provided by the UK’s national news agency, PA News. Anyone who has visited a local news website in recent years will know that these sites are full of pop-ups and hyperlinks to adverts and commercial websites. As a result of this business model, local news stories contain few or no links to the research and those behind the work. Whether this practice and the lack of supporting information stem from academic institutions’ and publishers’ press releases is debatable.

Further, we found that local coverage of research is often syndicated across multiple news sites belonging to a few publishers. Consequently, when a publisher republishes the same information across its news platforms, it replicates this bad practice. A solution is to include a readily formatted citation with a link, preferably to an open access version, at the foot of the story. This allows local media to continue linking to third-party sites whilst providing an option to explore the actual research paper, especially if that paper is open access…(More)”.

Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?


Paper by Alice Xiang: “Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? These questions, however, are a red herring. They ignore what is most interesting and important about AI ethics: AI is a mirror. If a person standing in front of a mirror asked you, “Who is more beautiful, me or the person in the mirror?” the question would seem ridiculous. Sure, depending on the angle, lighting, and personal preferences of the beholder, the person or their reflection might appear more beautiful, but the question is moot. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This essay discusses the challenges to developing fairer AI, and how they stem from this reflective property…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust


Article by David Gilbert: “The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”

The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”

Arya is not the only Gab chatbot to disseminate these beliefs. Unsurprisingly, when the Adolf Hitler chatbot was asked about the Holocaust, it denied the existence of the genocide, labeling it a “propaganda campaign to demonize the German people” and to “control and suppress the truth”…(More)”.

Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World


Paper by Jennifer King and Caroline Meinhardt: “In this paper, we present a series of arguments and predictions about how existing and future privacy and data protection regulation will impact the development and deployment of AI systems.

➜ Data is the foundation of all AI systems. Going forward, AI development will continue to increase developers’ hunger for training data, fueling an even greater race for data acquisition than we have already seen in past decades.

➜ Largely unrestrained data collection poses unique risks to privacy that extend beyond the individual level—they aggregate to pose societal-level harms that cannot be addressed through the exercise of individual data rights alone.

➜ While existing and proposed privacy legislation, grounded in the globally accepted Fair Information Practices (FIPs), implicitly regulates AI development, it is not sufficient to address the data acquisition race or the resulting individual and systemic privacy harms.

➜ Even legislation that contains explicit provisions on algorithmic decision-making and other forms of AI does not provide the data governance measures needed to meaningfully regulate the data used in AI systems.

➜ We present three suggestions for how to mitigate the risks to data privacy posed by the development and adoption of AI:

1. Denormalize data collection by default by shifting away from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms.

2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.

3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences…(More)”.
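
The “data permissioning infrastructure” in the third suggestion can be made concrete with a small sketch. The snippet below illustrates one opt-in, deny-by-default consent registry of the kind the authors gesture at; it is an illustration under our own assumptions, not anything taken from the paper, and every name in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    # Maps (user_id, purpose) -> True only after an explicit opt-in.
    grants: dict = field(default_factory=dict)

    def opt_in(self, user_id: str, purpose: str) -> None:
        self.grants[(user_id, purpose)] = True

    def may_collect(self, user_id: str, purpose: str) -> bool:
        # Deny by default: a missing grant means "no", never "yes".
        return self.grants.get((user_id, purpose), False)

registry = ConsentRegistry()
assert not registry.may_collect("u1", "model_training")  # nothing granted yet
registry.opt_in("u1", "model_training")
assert registry.may_collect("u1", "model_training")      # explicit opt-in
```

The design point is simply that the absence of a recorded grant means collection is refused, which inverts today’s opt-out default.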

Research Project Management and Leadership


Book by P. Alison Paprica: “The project management approaches used by millions of people internationally are often too detailed or constraining to be applied to research. In this handbook, project management expert P. Alison Paprica presents guidance specifically developed to help with the planning, management, and leadership of research.

Research Project Management and Leadership provides simplified versions of globally utilized project management tools, such as the work breakdown structure to visualize scope, and offers guidance on processes, including a five-step process to identify and respond to risks. The complementary leadership guidance in the handbook is presented in the form of interview write-ups with 19 Canadian and international research leaders, each of whom describes a situation where leadership skills were important, how they responded, and what they learned. The accessible language and practical guidance in the handbook make it a valuable resource for everyone from principal investigators leading multimillion-dollar projects to graduate students planning their thesis research. The book aims to help readers understand which management and leadership tools, processes, and practices are helpful in different circumstances, and how to implement them in research settings…(More)”.

i.AI Consultation Analyser


New Tool by AI.Gov.UK: “Public consultations are a critical part of the process of making laws, but analysing consultation responses is complex and very time consuming. Working with the No10 data science team (10DS), the Incubator for Artificial Intelligence (i.AI) is developing a tool to make the process of analysing public responses to government consultations faster and fairer.

The Analyser uses AI and data science techniques to automatically extract patterns and themes from the responses, and turns them into dashboards for policy makers.

The goal is for computers to do what they are best at: finding patterns and analysing large amounts of data. That means humans are free to do the work of understanding those patterns.
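
To make the pattern-finding step concrete, here is a minimal sketch of automated theme extraction using off-the-shelf topic modelling. This is not i.AI’s actual pipeline; the sample responses, the choice of TF-IDF plus NMF, and every parameter are assumptions for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Hypothetical free-text consultation responses.
responses = [
    "I agree with the proposal but worry about costs for small businesses.",
    "Strongly disagree; the environmental impact has not been assessed.",
    "The proposal should include more support for rural communities.",
    "Costs to small firms seem underestimated in the impact assessment.",
]

# Turn the responses into TF-IDF vectors.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Factorise the matrix into a small number of latent themes.
nmf = NMF(n_components=2, random_state=0)
weights = nmf.fit_transform(X)  # per-response theme weights

# Summarise each theme by its highest-weighted terms.
terms = vectorizer.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top = [terms[j] for j in component.argsort()[-5:][::-1]]
    print(f"Theme {i + 1}: {', '.join(top)}")
```

Each theme’s top terms, together with counts of how many responses load on it, are the kind of summary that could feed the dashboards described above.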

[Screenshot: a donut chart of responses that agree or disagree, and a bar chart showing the popularity of prevalent themes]

Government runs 700-800 consultations a year on matters of importance to the public. Some are very small, but a large consultation might attract hundreds of thousands of written responses.

A consultation attracting 30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report. And it’s not unheard of to get double that number of responses.

If we can apply automation in a way that is fair, effective and accountable, we could save most of that £80m…(More)”.

Participatory democracy in the EU should be strengthened with a Standing Citizens’ Assembly


Article by James Mackay and Kalypso Nicolaïdis: “EU citizens have multiple participatory instruments at their disposal, from the right to petition the European Parliament (EP) to the European Citizens’ Initiative (ECI), and from the European Commission’s public online consultations and Citizens’ Dialogues to the role of the European Ombudsman as an advocate for the public vis-à-vis the EU institutions.

While these mechanisms are broadly welcome, they have unfortunately remained too timid and largely ineffective in bolstering bottom-up participation. They tend to involve experts and organised interest groups rather than ordinary citizens. They do not encourage debate on non-experts’ policy preferences and are too often deployed at the discretion of political elites to justify pre-existing policy decisions.

In short, they feel more like consultative mechanisms than significant democratic innovations. That’s why the EU should be bold and demonstrate its democratic leadership by institutionalising its newly created Citizens’ Panels into a Standing Citizens’ Assembly with rotating membership chosen by lot and renewed on a regular basis…(More)”.

Blockchain and public service delivery: a lifetime cross-referenced model for e-government


Paper by Maxat Kassen: “The article presents the results of field studies, analysing the perspectives of blockchain developers on decentralised service delivery and elaborating unique algorithms for lifetime ledgers that reliably and safely record e-government transactions in an intrinsically cross-referenced manner. It proposes and elaborates on new technological niches for service delivery and emerging models of related data management in the industry, such as the generation of unique lifetime personal data profiles, blockchain-driven cross-referencing of e-government metadata, the parallel maintenance of serviceable ledgers for data identifiers, and the phenomenon of blockchain ‘black holes’ to ensure reliable protection of important public, corporate and civic information…(More)”.
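
Reading the abstract, the core mechanism appears to be hash-linked records that cross-reference a citizen’s lifetime profile. The sketch below shows the general shape of such a structure under that reading; the field names and logic are hypothetical and are not taken from the paper.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's canonical JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_transaction(ledger: list, profile_id: str, payload: dict) -> dict:
    """Append an e-government transaction that cross-references the citizen's
    lifetime profile and the hash of the previous ledger entry."""
    entry = {
        "profile_id": profile_id,  # lifetime personal data profile (hypothetical)
        "prev_hash": record_hash(ledger[-1]) if ledger else None,
        "timestamp": time.time(),
        "payload": payload,
    }
    ledger.append(entry)
    return entry

ledger: list = []
append_transaction(ledger, "citizen-001", {"service": "licence_renewal"})
append_transaction(ledger, "citizen-001", {"service": "address_change"})

# Tampering with the first entry would change its hash and break the
# prev_hash cross-reference carried by the second entry.
assert ledger[1]["prev_hash"] == record_hash(ledger[0])
```

The cross-referencing comes from each entry carrying both the lifetime profile_id and the hash of the previous entry, so altering any earlier record invalidates every later link in the chain.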