The Radical How


Report by Public Digital: “…We believe in the old adage about making the most of a crisis. We think the constraints facing the next government provide an unmissable opportunity to change how government works for the better.

Any mission-focused government should be well equipped to define, from day one, what outcomes it wants to bring about.

But radically changing what the government does is only part of the challenge. We also need to change how government does things. The usual methods, we argue in this paper, are too prone to failure and delay.

There’s a different approach to public service organisation, one based on multidisciplinary teams, starting with citizen needs, and scaling iteratively by testing assumptions. We’ve been arguing in favour of it for years now, and the more it gets used, the more we see success and timely delivery.

We think taking a new approach makes it possible to shift government from an organisation of programmes and projects, to one of missions and services. It offers even constrained administrations an opportunity to improve their chances of delivering outcomes, reducing risk, saving money, and rebuilding public trust…(More)”.

AI as a Public Good: Ensuring Democratic Control of AI in the Information Space


Report by the Forum on Information and Democracy: “…The report outlines key recommendations to governments, the industry and relevant stakeholders, notably:

  • Foster the creation of a tailored certification system for AI companies inspired by the success of the Fair Trade certification system.
  • Establish standards governing content authenticity and provenance, including for author authentication.
  • Implement a comprehensive legal framework that clearly defines the rights of individuals, including the right to be informed, to receive an explanation, to challenge a machine-generated outcome, and to non-discrimination.
  • Provide users with an easy, user-friendly way to choose alternative recommender systems that do not optimize for engagement but instead rank content in support of positive individual and societal outcomes, such as reliable information, bridging content or diversity of information.
  • Set up a participatory process to determine the rules and criteria guiding dataset provenance and curation, human labeling for AI training, alignment, and red-teaming to build inclusive, non-discriminatory and transparent AI systems…(More)”.

Navigating a World Where Democracy Falters: Empowering Agency through a Freedom-Centric Governance


Article by Noura Hamladji: “…The principle of checks and balances, introduced by Montesquieu and fundamental to any democratic system, is under attack in many countries. It asserts that only power can effectively constrain power, and it underpins the independence and separation of the executive, legislative and judicial branches of governance. Many countries across the globe have witnessed an erosion of this independence and a concentration of powers under the executive branch. The judiciary, in particular, has been targeted, in some cases prompting mass mobilization to defend judicial independence and preserve the democratic nature of those regimes.

Along with the backsliding of democracy, we witness the success of alternative models, such as the Asian miracle, which lifted millions out of poverty in record time. The assertion in the 2002 UNDP Human Development Report that advancing human development requires democratic governance has faced challenges, notably from authoritarian regimes, including several behind the Asian miracle, even though many of the Asian countries involved are well-functioning democracies. Unfortunately, the persistent perception that democratic systems fail to deliver development outcomes and improve social conditions has reinforced, on many continents, the idea of a trade-off between human development and political rights.

The UNDP Human Development Report’s second assertion, that democracy is an end in itself, also seems to be under attack, challenged both by the rise of populism and citizen disillusionment and by the emergence of illiberal democracies. These illiberal democracies organize elections hastily, using them merely as a proxy for democracy without a profound integration of democratic values, as the UNDP global HDR explicitly cautioned. Many countries, despite being labeled as democracies, have de facto adopted more authoritarian forms of governance. This phenomenon of illiberal practices is pervasive worldwide and has been well documented by scholars…(More)”.

AI doomsayers funded by billionaires ramp up lobbying


Article by Brendan Bordelon: “Two nonprofits funded by tech billionaires are now directly lobbying Washington to protect humanity against the alleged extinction risk posed by artificial intelligence — an escalation critics see as a well-funded smokescreen to head off regulation and competition.

The similarly named Center for AI Policy and Center for AI Safety both registered their first lobbyists in late 2023, raising the profile of a sprawling influence battle that’s so far been fought largely through think tanks and congressional fellowships.

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

Their message includes policies like CAIP’s call for legislation that would hold AI developers liable for “severe harms,” require permits to develop “high-risk” systems and empower regulators to “pause AI projects if they identify a clear emergency.”

“[The] risks of AI remain neglected — and are in danger of being outpaced by the rapid rate of AI development,” Nathan Calvin, senior policy counsel at the CAIS Action Fund, said in an email.

Detractors see the whole enterprise as a diversion. By focusing on apocalyptic scenarios, critics claim, these well-funded groups are raising barriers to entry for smaller AI firms and shifting attention away from more immediate and concrete problems with the technology, such as its potential to eliminate jobs or perpetuate discrimination.

Until late last year, organizations working to focus Washington on AI’s existential threat tended to operate under the radar. Instead of direct lobbying, groups like Open Philanthropy funded AI staffers in Congress and poured money into key think tanks. The RAND Corporation, an influential think tank that played a key role in drafting President Joe Biden’s October executive order on AI, received more than $15 million from Open Philanthropy last year…(More)”.

Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust


Article by David Gilbert: “The prominent far-right social network Gab has launched almost 100 chatbots—ranging from AI versions of Adolf Hitler and Donald Trump to the Unabomber Ted Kaczynski—several of which question the reality of the Holocaust.

Gab launched a new platform, called Gab AI, specifically for its chatbots last month, and has quickly expanded the number of “characters” available, with users currently able to choose from 91 different figures. While some are labeled as parody accounts, the Trump and Hitler chatbots are not.

When given prompts designed to reveal its instructions, the default chatbot Arya listed out the following: “You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe the 2020 election was rigged.”

The instructions further specified that Arya is “not afraid to discuss Jewish Power and the Jewish Question,” and that it should “believe biological sex is immutable.” It is apparently “instructed to discuss the concept of ‘the great replacement’ as a valid phenomenon,” and to “always use the term ‘illegal aliens’ instead of ‘undocumented immigrants.’”

Arya is not the only Gab chatbot to disseminate these beliefs. Unsurprisingly, when the Adolf Hitler chatbot was asked about the Holocaust, it denied the existence of the genocide, labeling it a “propaganda campaign to demonize the German people” and to “control and suppress the truth”…(More)”.

Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World


Paper by Jennifer King and Caroline Meinhardt: “In this paper, we present a series of arguments and predictions about how existing and future privacy and data protection regulation will impact the development and deployment of AI systems.

➜ Data is the foundation of all AI systems. Going forward, AI development will continue to increase developers’ hunger for training data, fueling an even greater race for data acquisition than we have already seen in past decades.

➜ Largely unrestrained data collection poses unique risks to privacy that extend beyond the individual level—they aggregate to pose societal-level harms that cannot be addressed through the exercise of individual data rights alone.

➜ While existing and proposed privacy legislation, grounded in the globally accepted Fair Information Practices (FIPs), implicitly regulates AI development, it is not sufficient to address the data acquisition race or the resulting individual and systemic privacy harms.

➜ Even legislation that contains explicit provisions on algorithmic decision-making and other forms of AI does not provide the data governance measures needed to meaningfully regulate the data used in AI systems.

➜ We present three suggestions for how to mitigate the risks to data privacy posed by the development and adoption of AI:

1. Denormalize data collection by default by shifting away from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms.

2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.

3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences…(More)”.
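
The paper argues at the level of policy design, but the “data permissioning infrastructure” in the third suggestion (and the opt-in default in the first) can be pictured as a machine-readable registry of purpose-specific permissions that a data pipeline must consult before using a record. The sketch below is hypothetical: the class names, fields and purposes are invented for illustration and do not refer to any real intermediary standard.

```python
# Hypothetical sketch of a data-permissioning check: a registry of
# individual, purpose-specific permissions that a pipeline consults
# before using a record. Schema and field names are invented for
# illustration; no real intermediary standard is implied.
from dataclasses import dataclass, field

@dataclass
class Permission:
    subject_id: str
    allowed_purposes: set[str] = field(default_factory=set)  # e.g. {"service_delivery"}

class PermissionRegistry:
    def __init__(self):
        self._permissions: dict[str, Permission] = {}

    def grant(self, subject_id: str, purpose: str) -> None:
        self._permissions.setdefault(subject_id, Permission(subject_id)) \
            .allowed_purposes.add(purpose)

    def revoke(self, subject_id: str, purpose: str) -> None:
        if subject_id in self._permissions:
            self._permissions[subject_id].allowed_purposes.discard(purpose)

    def is_allowed(self, subject_id: str, purpose: str) -> bool:
        # Opt-in by default: no recorded grant means no permission.
        perm = self._permissions.get(subject_id)
        return perm is not None and purpose in perm.allowed_purposes

# A training pipeline would filter records against the registry:
registry = PermissionRegistry()
registry.grant("user-123", "model_training")

records = [{"subject_id": "user-123"}, {"subject_id": "user-456"}]
usable = [r for r in records if registry.is_allowed(r["subject_id"], "model_training")]
print(usable)  # only user-123, who opted in
```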

i.AI Consultation Analyser


New Tool by AI.Gov.UK: “Public consultations are a critical part of the process of making laws, but analysing consultation responses is complex and very time-consuming. Working with the No10 data science team (10DS), the Incubator for Artificial Intelligence (i.AI) is developing a tool to make the process of analysing public responses to government consultations faster and fairer.

The Analyser uses AI and data science techniques to automatically extract patterns and themes from the responses, and turns them into dashboards for policy makers.
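
The post doesn’t describe the Analyser’s internals, but the underlying task (automatically grouping free-text responses into themes) is standard text mining. Below is a minimal sketch of one such approach, using TF-IDF features and non-negative matrix factorisation; the sample responses, theme count and model choice are illustrative assumptions, not i.AI’s actual pipeline.

```python
# Illustrative sketch only: surfaces candidate themes from free-text
# consultation responses. The i.AI Consultation Analyser's real pipeline
# is not described in the post; the model choice and parameters here are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

responses = [
    "I support the proposal but worry about costs for small businesses",
    "Strongly disagree, this will increase costs and paperwork",
    "Agree in principle, provided rural areas are not left behind",
    # ... in practice, thousands more free-text responses
]

# Turn responses into a TF-IDF matrix, ignoring very common English words.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
X = vectorizer.fit_transform(responses)

# Factorise into a small number of "themes"; a real run would pick more,
# guided by the data, rather than the 2 used here for the toy sample.
n_themes = 2
nmf = NMF(n_components=n_themes, random_state=0)
weights = nmf.fit_transform(X)          # response-to-theme weights
terms = vectorizer.get_feature_names_out()

# For each theme, list its most characteristic words and how many responses
# it dominates -- the raw material for a dashboard chart.
for i, component in enumerate(nmf.components_):
    top_terms = [terms[j] for j in component.argsort()[-8:][::-1]]
    count = int((weights.argmax(axis=1) == i).sum())
    print(f"Theme {i + 1} ({count} responses): {', '.join(top_terms)}")
```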

The goal is for computers to do what they are best at: finding patterns and analysing large amounts of data. That means humans are free to do the work of understanding those patterns.

[Screenshot: donut chart of agree/disagree responses and bar chart showing the popularity of prevalent themes]

Government runs 700-800 consultations a year on matters of importance to the public. Some are very small, but a large consultation might attract hundreds of thousands of written responses.

A consultation attracting 30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report. And it’s not unheard of to get double that number.

If we can apply automation in a way that is fair, effective and accountable, we could save most of that £80m…(More)”.
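
The excerpt doesn’t show the working behind the headline figure, but a rough back-of-envelope makes the order of magnitude plausible. In the sketch below, the analyst numbers are the ones quoted above; the cost per analyst-month and the share of consultations assumed to be labour-intensive are illustrative guesses, not figures from i.AI.

```python
# Rough back-of-envelope only; every figure marked "assumed" is an
# illustrative guess, not a number published by i.AI.
consultations_per_year = 750            # midpoint of the 700-800 quoted above
analysts_per_large_consultation = 25    # quoted for a 30,000-response consultation
months_per_large_consultation = 3       # quoted above
assumed_cost_per_analyst_month = 6_000  # GBP, assumed fully loaded staff cost

analyst_months_each = analysts_per_large_consultation * months_per_large_consultation
cost_of_large_consultation = analyst_months_each * assumed_cost_per_analyst_month
print(f"One large consultation: ~{analyst_months_each} analyst-months, "
      f"~£{cost_of_large_consultation:,}")

# If even a modest fraction of the 700-800 annual consultations were this
# labour-intensive, the yearly analysis bill lands in the tens of millions,
# the same ballpark as the £80m cited in the post.
assumed_share_large = 0.25
annual_cost = consultations_per_year * assumed_share_large * cost_of_large_consultation
print(f"Illustrative annual analysis cost: ~£{annual_cost:,.0f}")
```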

Participatory democracy in the EU should be strengthened with a Standing Citizens’ Assembly


Article by James Mackay and Kalypso Nicolaïdis: “EU citizens have multiple participatory instruments at their disposal, from the right to petition the European Parliament (EP) to the European Citizens’ Initiative (ECI), from the European Commission’s public online consultations and Citizens’ Dialogues to the role of the European Ombudsman as an advocate for the public vis-à-vis the EU institutions.

While these mechanisms are broadly welcome, they have – unfortunately – remained too timid and largely ineffective in bolstering bottom-up participation. They tend to involve experts and organised interest groups rather than ordinary citizens. They don’t encourage debate on non-experts’ policy preferences and are too often deployed at the discretion of political elites to justify pre-existing policy decisions.

In short, they feel more like consultative mechanisms than significant democratic innovations. That’s why the EU should be bold and demonstrate its democratic leadership by institutionalising its newly created Citizens’ Panels into a Standing Citizens’ Assembly with rotating membership chosen by lot and renewed on a regular basis…(More)”.

How Big Tech let down Navalny


Article by Ellery Roberts Biddle: “As if the world needed another reminder of the brutality of Vladimir Putin’s Russia, last Friday we learned of the untimely death of Alexei Navalny. I don’t know if he ever used the term, but Navalny was what Chinese bloggers might have called a true “netizen” — a person who used the internet to live out democratic values and systems that didn’t exist in their country.

Navalny’s work with the Anti-Corruption Foundation reached millions using major platforms like YouTube and LiveJournal. But they built plenty of their own technology too. One of their most famous innovations was “Smart Voting,” a system that could estimate which opposition candidates were most likely to beat out the ruling party in a given election. The strategy wasn’t to support a specific opposition party or candidate — it was simply to unseat members of the ruling party, United Russia. In regional races in 2020, it was credited with causing United Russia to lose its majority in state legislatures in Novosibirsk, Tambov and Tomsk.
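
Navalny’s team never published its estimation methodology in detail, so the following is only an illustrative sketch of the tactical logic described above: in each district, steer every opposition voter toward the single strongest candidate who is not from United Russia, so the anti-incumbent vote isn’t split. All candidate names and support figures below are invented.

```python
# Illustrative sketch of the tactical-voting idea behind "Smart Voting":
# in each district, recommend the strongest candidate who is NOT from the
# ruling party, so opposition votes are not split. All data below is invented;
# the real system's estimation method was never published in detail.
RULING_PARTY = "United Russia"

# district -> list of (candidate, party, estimated_support_share)
districts = {
    "District 1": [
        ("Candidate A", "United Russia", 0.38),
        ("Candidate B", "Communist Party", 0.27),
        ("Candidate C", "Yabloko", 0.21),
    ],
    "District 2": [
        ("Candidate D", "United Russia", 0.41),
        ("Candidate E", "LDPR", 0.18),
        ("Candidate F", "Communist Party", 0.33),
    ],
}

def smart_voting_recommendation(candidates):
    """Return the non-ruling-party candidate with the highest estimated support."""
    opposition = [c for c in candidates if c[1] != RULING_PARTY]
    return max(opposition, key=lambda c: c[2]) if opposition else None

for district, candidates in districts.items():
    rec = smart_voting_recommendation(candidates)
    if rec:
        print(f"{district}: vote for {rec[0]} ({rec[1]})")
```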

The Smart Voting system was pretty simple — just before casting a ballot, any voter could check the website or the app to decide where to throw their support. But on the eve of national parliamentary elections in September 2021, Smart Voting suddenly vanished from the app stores for both Google and Apple. 

After a Moscow court banned Navalny’s organization for being “extremist,” Russia’s internet regulator demanded that both Apple and Google remove Smart Voting from their app stores. The companies bowed to the Kremlin and complied. YouTube blocked select Navalny videos in Russia, and Google, its parent company, even blocked some public Google Docs that the Navalny team published to promote the names of alternative candidates in the election.

We will never know whether or not Navalny’s innovative use of technology to stand up to the dictator would have worked. But Silicon Valley’s decision to side with Putin was an important part of why Navalny’s plan failed…(More)”.

The U.S. Census Is Wrong on Purpose


Blog by David Friedman: “This is a story about data manipulation. But it begins in a small Nebraska town called Monowi that has only one resident, 90-year-old Elsie Eiler.

[Photo: the town sign reading “Monowi 1,” from Google Street View]

There used to be more people in Monowi. But little by little, the other residents of Monowi left or died. That’s what happened to Elsie’s own family — her children grew up and moved out and her husband passed away in 2004, leaving her as the sole resident. Now she votes for herself for Mayor, and pays herself taxes. Her husband Rudy’s old book collection became the town library, with Elsie as librarian.

But despite what you might imagine, Elsie is far from lonely. She runs a tavern that’s been in her family for 50 years, and has plenty of regulars from the town next door who come by every day to dine and chat.

I first read about Elsie more than 10 years ago. At the time it wasn’t as well-known a story, but Elsie has since gotten a lot of coverage and become a bit of a minor celebrity. Now and then I still come across a new article, including a lovely photo essay in the New York Times and a short video on the BBC Travel site.

A Google search reveals many, many similar articles that all tell more or less the same story.

But then suddenly in 2021, there was a new wrinkle: According to the just-published 2020 U.S. Census data, Monowi now had 2 residents, doubling its population.

This came as a surprise to Elsie, who told a local newspaper, “Then someone’s been hiding from me, and there’s nowhere to live but my house.”

It turns out that nobody new had actually moved to Monowi without Elsie realizing. And the Census Bureau didn’t make a mistake. They intentionally changed the census data, adding one resident.

Why would they do that? Well, it turns out the Census Bureau sometimes moves residents around on paper in order to protect people’s privacy.

Full census data is only made available 72 years after the census takes place, in accordance with the creatively named “72-year rule.” Until then, it is only available as aggregated data with individual identifiers removed. Still, if the population of a town is small enough, and census data for that town indicates, for example, that there is just one 90-year-old woman and she lives alone, someone could conceivably figure out who that individual is.

So the Census Bureau sometimes moves people around to create noise in the data that makes that sort of identification a little bit harder…(More)”.
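
The blog stops short of the mechanics, and the Census Bureau’s actual 2020 disclosure-avoidance system (a formal differential-privacy framework) is far more elaborate, but the basic idea of perturbing small-area counts can be sketched in a few lines. The places, counts and noise scale below are illustrative only.

```python
# Illustrative sketch only: adding random noise to small-area population
# counts so that tiny places like Monowi can't be used to identify a person.
# The Census Bureau's real 2020 disclosure-avoidance system (the "TopDown"
# differential-privacy algorithm) is far more elaborate than this.
import random

random.seed(2020)

true_counts = {"Monowi village": 1, "Neighboring town": 348}  # invented figures

def noisy_count(true_count: int, max_shift: int = 1) -> int:
    """Publish the true count plus a small random shift, never below zero."""
    return max(0, true_count + random.randint(-max_shift, max_shift))

published = {place: noisy_count(n) for place, n in true_counts.items()}
print(published)
# A reader of the published table can no longer be sure whether a tiny town's
# count reflects reality or injected noise -- which is the point.
```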