AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust


Article by David Gilbert: “The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”

The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”

Arya is not the only Gab chatbot to disseminate these beliefs. Unsurprisingly, when the Adolf Hitler chatbot was asked about the Holocaust, it denied the existence of the genocide, labeling it a “propaganda campaign to demonize the German people” and to “control and suppress the truth”…(More)”.

Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World


Paper by Jennifer King, Caroline Meinhardt: “In this paper, we present a series of arguments and predictions about how existing and future privacy and data protection regulation will impact the development and deployment of AI systems.

➜ Data is the foundation of all AI systems. Going forward, AI development will continue to increase developers’ hunger for training data, fueling an even greater race for data acquisition than we have already seen in past decades.

➜ Largely unrestrained data collection poses unique risks to privacy that extend beyond the individual level—they aggregate to pose societal-level harms that cannot be addressed through the exercise of individual data rights alone.

➜ While existing and proposed privacy legislation, grounded in the globally accepted Fair Information Practices (FIPs), implicitly regulates AI development, it is not sufficient to address the data acquisition race or the resulting individual and systemic privacy harms.

➜ Even legislation that contains explicit provisions on algorithmic decision-making and other forms of AI does not provide the data governance measures needed to meaningfully regulate the data used in AI systems.

➜ We present three suggestions for how to mitigate the risks to data privacy posed by the development and adoption of AI:

1. Denormalize data collection by default by shifting away from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms (a minimal sketch of this default follows the list).

2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.

3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences…(More)”.
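The “privacy by default” shift in suggestion 1 is easy to make concrete in code. Below is a minimal sketch of an opt-in consent gate, where nothing is collected unless a purpose has been affirmatively granted; the `ConsentRegistry` class and the purpose names are our own illustration, not the paper’s:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Tracks which purposes a user has affirmatively opted into."""
    granted: set[str] = field(default_factory=set)

    def opt_in(self, purpose: str) -> None:
        self.granted.add(purpose)

    def allows(self, purpose: str) -> bool:
        # Privacy by default: anything not explicitly granted is denied.
        return purpose in self.granted

def collect(datum: str, purpose: str, consent: ConsentRegistry) -> str | None:
    """Store a datum only if the user opted in for this specific purpose."""
    if not consent.allows(purpose):
        return None  # data minimization: nothing is retained by default
    return datum

consent = ConsentRegistry()
print(collect("user@example.com", "model_training", consent))  # None
consent.opt_in("model_training")
print(collect("user@example.com", "model_training", consent))  # stored
```

The design choice the paper argues for is visible in `allows`: the deny-by-default branch does the privacy work, whereas today’s opt-out regimes effectively invert that condition.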

i.AI Consultation Analyser


New Tool by AI.Gov.UK: “Public consultations are a critical part of the process of making laws, but analysing consultation responses is complex and very time consuming. Working with the No10 data science team (10DS), the Incubator for Artificial Intelligence (i.AI) is developing a tool to make the process of analysing public responses to government consultations faster and fairer.

The Analyser uses AI and data science techniques to automatically extract patterns and themes from the responses, and turns them into dashboards for policy makers.

The goal is for computers to do what they are best at: finding patterns and analysing large amounts of data. That means humans are free to do the work of understanding those patterns.
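The article does not detail i.AI’s implementation, but the pattern-finding step it describes is a standard unsupervised text-analysis task. A minimal sketch of one common approach (TF-IDF vectors clustered with k-means; the sample responses and cluster count are invented for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "The proposal will hurt small businesses on the high street.",
    "Small shops cannot absorb the extra compliance costs.",
    "I support the change because it improves air quality.",
    "Cleaner air in city centres is long overdue.",
]

# Vectorise free-text responses, ignoring common English stopwords.
vectoriser = TfidfVectorizer(stop_words="english")
X = vectoriser.fit_transform(responses)

# Cluster responses into candidate themes (k chosen for illustration).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Describe each theme by its highest-weighted terms.
terms = vectoriser.get_feature_names_out()
for i, centre in enumerate(km.cluster_centers_):
    top = [terms[j] for j in centre.argsort()[-3:][::-1]]
    print(f"Theme {i}: {', '.join(top)}")
```

In this division of labour the code only proposes candidate themes; policy makers still interpret and label them, which matches the stated goal of leaving “the work of understanding those patterns” to humans.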

[Screenshot: a donut chart of agree/disagree responses and a bar chart of the most prevalent themes]

Government runs 700-800 consultations a year on matters of importance to the public. Some are very small, but a large consultation might attract hundreds of thousands of written responses.

A consultation attracting 30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report. And it’s not unheard of to get double that number.
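For an order-of-magnitude sense of how that staffing connects to the £80m figure cited below, here is a back-of-the-envelope calculation; the per-analyst cost is our assumption, not i.AI’s:

```python
# Back-of-the-envelope only; the monthly cost is an assumed figure.
analysts, months = 25, 3
monthly_cost_gbp = 6_000              # assumed fully loaded cost per analyst
per_large_consultation = analysts * months * monthly_cost_gbp
print(f"£{per_large_consultation:,}")  # £450,000 for one 30,000-response consultation
```

Spread across 700-800 consultations a year of widely varying size, annual analysis costs in the tens of millions of pounds are plausible.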

If we can apply automation in a way that is fair, effective and accountable, we could save most of that £80m…(More)”.

Participatory democracy in the EU should be strengthened with a Standing Citizens’ Assembly


Article by James Mackay and Kalypso Nicolaïdis: “EU citizens have multiple participatory instruments at their disposal, from the right to petition the European Parliament (EP) to the European Citizens’ Initiative (ECI), from the European Commission’s public online consultation and Citizens’ Dialogues to the role of the European Ombudsman as an advocate for the public vis-à-vis the EU institutions.

While these mechanisms are broadly welcome, they have – unfortunately – remained too timid and largely ineffective in bolstering bottom-up participation. They tend to involve experts and organised interest groups rather than ordinary citizens. They don’t encourage debate on non-experts’ policy preferences and are too often deployed at the discretion of political elites to justify pre-existing policy decisions.

In short, they feel more like consultative mechanisms than significant democratic innovations. That’s why the EU should be bold and demonstrate its democratic leadership by institutionalising its newly created Citizens’ Panels into a Standing Citizens’ Assembly with rotating membership chosen by lot and renewed on a regular basis…(More)”.

How Big Tech let down Navalny


Article by Ellery Roberts Biddle: “As if the world needed another reminder of the brutality of Vladimir Putin’s Russia, last Friday we learned of the untimely death of Alexei Navalny. I don’t know if he ever used the term, but Navalny was what Chinese bloggers might have called a true “netizen” — a person who used the internet to live out democratic values and systems that didn’t exist in their country.

Navalny’s work with the Anti-Corruption Foundation reached millions using major platforms like YouTube and LiveJournal. But the foundation built plenty of its own technology too. One of its most famous innovations was “Smart Voting,” a system that could estimate which opposition candidates were most likely to beat out the ruling party in a given election. The strategy wasn’t to support a specific opposition party or candidate — it was simply to unseat members of the ruling party, United Russia. In regional races in 2020, it was credited with causing United Russia to lose its majority in state legislatures in Novosibirsk, Tambov and Tomsk.
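As described, the selection rule is simple to state: in each district, back the strongest candidate who is not from United Russia. A minimal sketch of that logic (the candidate data is invented; the real system’s polling inputs and weighting were never made public):

```python
def smart_voting_pick(candidates: list[dict]) -> dict:
    """Recommend the strongest candidate not from the ruling party."""
    opposition = [c for c in candidates if c["party"] != "United Russia"]
    return max(opposition, key=lambda c: c["support"])

district = [
    {"name": "Candidate A", "party": "United Russia", "support": 0.41},
    {"name": "Candidate B", "party": "KPRF", "support": 0.27},
    {"name": "Candidate C", "party": "Yabloko", "support": 0.18},
]
print(smart_voting_pick(district)["name"])  # Candidate B
```

The point of the design was coordination, not endorsement: by concentrating otherwise split opposition votes on a single name per district, even a minority opposition could unseat ruling-party incumbents.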

The Smart Voting system was pretty simple — just before casting a ballot, any voter could check the website or the app to decide where to throw their support. But on the eve of national parliamentary elections in September 2021, Smart Voting suddenly vanished from the app stores for both Google and Apple. 

After a Moscow court banned Navalny’s organization for being “extremist,” Russia’s internet regulator demanded that both Apple and Google remove Smart Voting from their app stores. The companies bowed to the Kremlin and complied. YouTube blocked select Navalny videos in Russia and Google, its parent company, even blocked some public Google Docs that the Navalny team published to promote names of alternative candidates in the election. 

We will never know whether or not Navalny’s innovative use of technology to stand up to the dictator would have worked. But Silicon Valley’s decision to side with Putin was an important part of why Navalny’s plan failed…(More)”.

The U.S. Census Is Wrong on Purpose


Blog by David Friedman: “This is a story about data manipulation. But it begins in a small Nebraska town called Monowi that has only one resident, 90-year-old Elsie Eiler.

[Photo: the town sign reading “Monowi 1” (via Google Street View)]

There used to be more people in Monowi. But little by little, the other residents of Monowi left or died. That’s what happened to Elsie’s own family — her children grew up and moved out and her husband passed away in 2004, leaving her as the sole resident. Now she votes for herself for Mayor, and pays herself taxes. Her husband Rudy’s old book collection became the town library, with Elsie as librarian.

But despite what you might imagine, Elsie is far from lonely. She runs a tavern that’s been in her family for 50 years, and has plenty of regulars from the town next door who come by every day to dine and chat.

I first read about Elsie more than 10 years ago. At the time, it wasn’t as well-known a story, but Elsie has since gotten a lot of coverage and become a minor celebrity. Now and then I still come across a new article, including a lovely photo essay in the New York Times and a short video on the BBC Travel site.

A Google search reveals many, many similar articles that all tell more or less the same story.

But then suddenly in 2021, there was a new wrinkle: According to the just-published 2020 U.S. Census data, Monowi now had 2 residents, doubling its population.

This came as a surprise to Elsie, who told a local newspaper, “Then someone’s been hiding from me, and there’s nowhere to live but my house.”

It turns out that nobody new had actually moved to Monowi without Elsie realizing. And the census bureau didn’t make a mistake. They intentionally changed the census data, adding one resident.

Why would they do that? Well, it turns out the census bureau sometimes moves residents around on paper in order to protect people’s privacy.

Full census data is only made available 72 years after the census takes place, in accordance with the creatively named “72-year rule.” Until then, it is only available as aggregated data with individual identifiers removed. Still, if the population of a town is small enough, and census data for that town indicates, for example, that there is just one 90-year-old woman and she lives alone, someone could conceivably figure out who that individual is.

So the census bureau sometimes moves people around to create noise in the data that makes that sort of identification a little bit harder…(More)”.
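The Bureau’s 2020 disclosure-avoidance system is built on differential privacy, whose core move is adding calibrated random noise to published counts. A minimal sketch of that idea using Laplace noise; the epsilon value is illustrative, and this is not the Bureau’s actual TopDown Algorithm:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def noisy_count(true_count: int, epsilon: float = 0.5) -> int:
    """Return a population count perturbed with Laplace noise.

    Adding or removing one person changes the count by 1, so the
    sensitivity is 1; smaller epsilon means more noise and stronger
    privacy protection.
    """
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return max(0, round(true_count + noise))

print(noisy_count(1))  # Monowi's true population of 1 might be
                       # published as 0, 1, 2, or more.
```

The smaller the town, the larger the noise is relative to the true count, which is exactly why a one-person town like Monowi can plausibly show up as two.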

Defending democracy: The threat to the public sphere from social media


Book Review by Mark Hannam: “Habermas is a blockhead. It is simply impossible to tell what kind of damage he is still going to cause in the future”, wrote Karl Popper in 1969. The following year he added: “Most of what he says seems to me trivial; the rest seems to me mistaken”. Five decades later these Popperian conjectures have been roundly refuted. Now in his mid-nineties, Jürgen Habermas is one of the pre-eminent philosophers and public intellectuals of our time. In Germany his generation enjoyed the mercy of being born too late. In 2004, in a speech given on receipt of the Kyoto prize in arts and philosophy, he observed that “we did not have to answer for choosing the wrong side and for political errors and their dire consequences”. He came to maturity in a society that he judged complacent and insufficiently distanced from its recent past. This experience sets the context for his academic work and political interventions.

Polity has recently published two new books by Habermas, both translated by Ciaran Cronin, providing English readers access to the latest iterations of his distinctive themes and methods. He defends a capacious concept of human reason, a collaborative learning process that operates through discussions in which participants appeal only to the force of the better argument. Different kinds of discussion – about scientific facts, moral norms or aesthetic judgements – employ different standards of justification, so what counts as a valid reason depends on context, but all progress, regardless of the field, relies on our conversations following the path along which reason leads us. Habermas’s principal claim is that human reason, appropriately deployed, retains its liberating potential for the species.

His first book, The Structural Transformation of the Public Sphere (1962), traced the emergence in the eighteenth century of the public sphere. This was a functionally distinct social space, located between the privacy of civil society and the formal offices of the modern state, where citizens could engage in processes of democratic deliberation. Habermas drew attention to a range of contemporary phenomena, including the organization of opinion by political parties and the development of mass media funded by advertising, that have disrupted the possibility of widespread, well-informed political debate. Modern democracy, he argued, was increasingly characterized by the technocratic organization of interests, rather than by the open discussion of principles and values…(More)”.

Digitalisation and citizen engagement: comparing participatory budgeting in Rome and Barcelona


Book chapter by Giorgia Mattei, Valentina Santolamazza and Martina Manzo: “The digitalisation of participatory budgeting (PB) is a growing phenomenon, as digital tools could help achieve greater citizen engagement. However, a comparison of two similar cases – i.e. Rome and Barcelona – reveals differences in how digital tools were integrated into the PB processes. The present study describes how digital tools have positively influenced PB throughout its different phases, making communication more transparent, involving a wider audience, empowering people and, consequently, making citizens’ engagement more effective. Nevertheless, the research dwells on the different elements adopted to overcome the limits of digitalisation and shows various approaches and results…(More)”.

Could AI Speak on Behalf of Future Humans?


Article by Konstantin Scheuermann & Angela Aristidou: “An enduring societal challenge the world over is a “perspective deficit” in collective decision-making. Whether within a single business, at the local community level, or the international level, some perspectives are not (adequately) heard and may not receive fair and inclusive representation during collective decision-making discussions and procedures. Most notably, future generations of humans and aspects of the natural environment may be deeply affected by present-day collective decisions. Yet, they are often “voiceless” as they cannot advocate for their interests.

Today, as we witness the rapid integration of artificial intelligence (AI) systems into the everyday fabric of our societies, we recognize the potential in some AI systems to surface and/or amplify the perspectives of these previously voiceless stakeholders. Some classes of AI systems, notably Generative AI (e.g., ChatGPT, Llama, Gemini), are capable of acting as the proxy of the previously unheard by generating multi-modal outputs (audio, video, and text).

We refer to these outputs collectively here as “AI Voice,” signifying that the previously unheard in decision-making scenarios gain opportunities to express their interests—in other words, voice—through the human-friendly outputs of these AI systems. AI Voice, however, cannot realize its promise without first challenging how voice is given and withheld in our collective decision-making processes and how the new technology may and does unsettle the status quo. There is also an important distinction between the “right to voice” and the “right to decide” when considering the roles AI Voice may assume—ranging from a passive facilitator to an active collaborator. This is one highly promising and feasible possibility for how to leverage AI to create a more equitable collective future, but to do so responsibly will require careful strategy and much further conversation…(More)”.
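To make “AI Voice” concrete: a proxy for a voiceless stakeholder can be as simple as a system prompt instructing a generative model to argue only from that stakeholder’s interests. A minimal sketch using the OpenAI Python client (the model choice and prompt wording are our illustrative assumptions; any comparable generative system would do):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def future_generations_voice(decision: str) -> str:
    """Ask a model to advocate for future generations on a decision."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice
        messages=[
            {"role": "system",
             "content": ("You represent the interests of people born "
                         "50-100 years from now. Argue only from their "
                         "perspective; flag long-term risks and benefits.")},
            {"role": "user", "content": decision},
        ],
    )
    return response.choices[0].message.content

print(future_generations_voice(
    "Our city council proposes paving a wetland for short-term housing."))
```

Note that this sketch implements only the “right to voice”: the output feeds into deliberation as one perspective among many, while the “right to decide” remains with the human participants.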