How the UK could monetise ‘citizen data’ and turn it into a national asset


Article by Ashley Braganza and S. Asieh H. Tabaghdehi: “Across all sectors, UK citizens produce vast amounts of data. This data is increasingly needed to train AI systems. But it is also of enormous value to private companies, which use it to target adverts to consumers based on their behaviour or to personalise content to keep people on their site.

Yet the economic and social value of this citizen-generated data is rarely returned to the public, highlighting the need for more equitable and transparent models of data stewardship.

AI companies have demonstrated that datasets hold immense economic, social and strategic value. And the UK’s AI Opportunities Action Plan notes that access to new and high-quality datasets can confer a competitive edge in developing AI models. This in turn unlocks the potential for innovative products and services.

However, there’s a catch. Most citizens have signed over their data to companies by accepting standard terms and conditions. Once citizen data is “owned” by companies, this leaves others unable to access it or forced to pay to do so.

Commercial approaches to data tend to prioritise short-term profit, often at the expense of the public interest. The debate over the use of artistic and creative materials to train AI models without recompense to the creator exemplifies the broader trade-off between commercial use of data and the public interest.

Countries around the world are recognising the strategic value of public data. The UK government could lead in making public data into a strategic asset. What this might mean in practice is the government owning citizen data and monetising this through sale or licensing agreements with commercial companies.

In our evidence, we proposed a UK sovereign data fund to manage the monetisation of public datasets curated within the proposed National Data Library (NDL). This fund could invest directly in UK companies, fund scale-ups and create joint ventures with local and international partners.

The fund would have powers to license anonymised, ethically governed data to companies for commercial use. It would also be in a position to fast-track projects that benefit the UK or have been deemed to be national priorities. (These priorities are drones and other autonomous technologies as well as engineering biology, space and AI in healthcare.)…(More)”.

Computer Science and the Law


Article by Steven M. Bellovin: “There were three U.S. technical/legal developments occurring in approximately 1993 that had a profound effect on the technology industry and on many technologists. More such developments are occurring with increasing frequency.

The three developments were, in fact, technically unrelated. One was a bill before the U.S. Congress for a standardized wiretap interface in phone switches, a concept that spread around the world under the generic name of “lawful intercept.” The second was an update to the copyright statute to adapt to the digital age. While there were some useful changes—caching proxies and ISPs transmitting copyrighted material were no longer to be held liable for making illegal copies of protected content—it also provided an easy way for careless or unscrupulous actors—including bots—to request takedown of perfectly legal material. The third was the infamous Clipper chip, an encryption device that provided a backdoor for the U.S.—and only the U.S.—government.

All three of these developments could be and were debated on purely legal or policy grounds. But there were also technical issues. Thus, one could argue on legal grounds that the Clipper chip granted the government unprecedented powers, powers arguably in violation of the Fourth Amendment to the U.S. Constitution. That, of course, is a U.S. issue—but technologists, including me, pointed out the technical risks of deploying a complex cryptographic protocol, anywhere in the world (and many other countries have since expressed similar desires). Sure enough, Matt Blaze showed how to abuse the Clipper chip to let it do backdoor-free encryption, and at least two other mechanisms for adding backdoors to encryption protocols were shown to have flaws that allowed malefactors to read data that others had encrypted.
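
The class of weakness Blaze exploited is easy to see in miniature. The sketch below is a deliberately simplified illustration, not the actual Skipjack/LEAF construction: it models an escrow field protected only by a 16-bit checksum (faked here with a hash) and shows that an attacker who tries random candidate fields will, after roughly 2^16 attempts, produce one the verifier accepts even though it carries no real escrowed key.

```python
import os
import hashlib

CHECKSUM_BITS = 16  # the toy analogue of Clipper's short LEAF checksum

def checksum(field: bytes) -> int:
    # Stand-in for the device's internal 16-bit checksum function.
    return int.from_bytes(hashlib.sha256(field).digest()[:2], "big")

def verify(field: bytes, tag: int) -> bool:
    return checksum(field) == tag

# A legitimate sender attaches a valid tag to the real escrow field.
real_field = os.urandom(24)
real_tag = checksum(real_field)
assert verify(real_field, real_tag)

# An attacker keeps a plausible-looking tag but substitutes random bytes
# until the 16-bit check happens to pass -- no real session key inside.
attempts = 0
while True:
    attempts += 1
    bogus_field = os.urandom(24)
    if verify(bogus_field, real_tag):
        break

print(f"Forged field accepted after {attempts} tries "
      f"(expected about {2**CHECKSUM_BITS:,})")
```

With only 65,536 possibilities, the check can be defeated in seconds on any modern machine; the point is the general one that a short authentication field cannot bind a backdoor to the data it is supposed to escort.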

These posed a problem: debating some issues intelligently required not just a knowledge of law or of technology, but of both. That is, some problems cannot be discussed purely on technical grounds or purely on legal grounds; the crux of the matter lies in the intersection.

Consider, for example, the difference between content and metadata in a communication. Metadata alone is extremely powerful; indeed, Michael Hayden, former director of both the CIA and the NSA, once said, “We kill people based on metadata.” The combination of content and metadata is of course even more powerful. However, under U.S. law (and the legal reasoning is complex and controversial), the content of a phone call is much more strongly protected than the metadata: who called whom, when, and for how long they spoke. But how does this doctrine apply to the Internet, a network that provides far more powerful abilities to the endpoints in a conversation? (Metadata analysis is not an Internet-specific phenomenon. The militaries of the world have likely been using it for more than a century.) You cannot begin to answer that question without knowing not just how the Internet actually works, but also the legal reasoning behind the difference. It took more than 100 pages for some colleagues and me, three computer scientists and a former Federal prosecutor, to show how the line between content and metadata can be drawn in some cases (and that the Department of Justice’s manuals and some Federal judges got the line wrong), but that in other cases, there is no possible line.1
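
To make the line-drawing problem concrete, here is a minimal sketch (an illustration of the general point, not an example drawn from that paper). For a phone call, the number dialed is cleanly separable from the conversation; for a single web request, even the "addressing" information starts to reveal what was said.

```python
from urllib.parse import urlparse

# Illustrative only: which parts of a web request resemble addressing
# information, and which resemble the substance of the communication.
url = "https://example.org/health/conditions/diabetes?symptom=fatigue"
parts = urlparse(url)

print("Arguably metadata (who talked to whom):")
print("  scheme:", parts.scheme)
print("  host:  ", parts.netloc)   # which server was contacted

print("Arguably content (what was communicated):")
print("  path:  ", parts.path)     # the specific page requested
print("  query: ", parts.query)    # details the user supplied
```

The host alone looks like a phone number, but the path and query string describe the substance of the exchange, which is exactly where a phone-era doctrine starts to break down.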

Newer technologies pose the same sorts of risks…(More)”.

Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It


Article by Tiago C. Peixoto: “A few weeks ago, I reached out to a handful of seasoned digital services practitioners, NGOs, and philanthropies with a simple question: Where are the compelling generative AI (GenAI) use cases in public-sector workflows? I wasn’t looking for better search or smarter chatbots. I wanted examples of automation of real public workflows – something genuinely interesting and working. The responses, though numerous, were underwhelming.

That question has gained importance amid a growing number of reports forecasting AI’s transformative impact on government. The Alan Turing Institute, for instance, published a rigorous study estimating the potential of AI to help automate over 140 million government transactions in the UK. The Tony Blair Institute also weighed in, suggesting that a substantial portion of public-sector work could be automated. While the report helped bring welcome attention to the issue, its use of GPT-4 to assess task automatability has sparked a healthy discussion about how best to evaluate feasibility. Like other studies in this area, both reports highlight potential – but stop short of demonstrating real service automation.

Without testing technologies in real service environments – where workflows, incentives, and institutional constraints shape outcomes – and grounding each pilot in clear efficiency or well-being metrics, estimates risk becoming abstractions that underestimate feasibility.

This pattern aligns with what Arvind Narayanan and Sayash Kapoor argue in “AI as Normal Technology:” the impact of AI is realized only when methods translate into applications and diffuse through real-world systems. My own review, admittedly non-representative, confirms their call for more empirical work on the innovation-diffusion lag.

In the public sector, the gap between capability and impact is not only wide but also structural…(More)”

We still don’t know how much energy AI consumes


Article by Sasha Luccioni: “…The AI Energy Score project, a collaboration between Salesforce, Hugging Face, AI developer Cohere and Carnegie Mellon University, is an attempt to shed more light on the issue by developing a standardised approach. The code is open and available for anyone to access and contribute to. The goal is to encourage the AI community to test as many models as possible.

By examining 10 popular tasks (such as text generation or audio transcription) on open-source AI models, it is possible to isolate the amount of energy consumed by the computer hardware that runs them. Each model is then assigned a score of between one and five stars based on its relative efficiency. Between the most and least efficient AI models in our sample, we found a 62,000-fold difference in the power required.
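
To make the scoring idea concrete, here is a minimal sketch of how per-task energy measurements could be bucketed into one-to-five-star relative-efficiency ratings. The model names, energy figures and rank-based cut-offs are assumptions for illustration; they are not the AI Energy Score project's published methodology or data.

```python
# Hypothetical energy measurements (watt-hours per 1,000 queries) for one task.
measurements = {
    "model-a": 0.4,
    "model-b": 2.5,
    "model-c": 11.0,
    "model-d": 95.0,
    "model-e": 310.0,
}

def star_rating(value: float, all_values: list[float]) -> int:
    """Map a model's energy use to 1-5 stars: lower energy earns more stars."""
    ranked = sorted(all_values)                        # most efficient first
    quintile = ranked.index(value) * 5 // len(ranked)  # 0 (best) .. 4 (worst)
    return 5 - quintile

values = list(measurements.values())
for name, wh in sorted(measurements.items(), key=lambda kv: kv[1]):
    print(f"{name}: {wh:7.1f} Wh per 1k queries -> {star_rating(wh, values)} stars")
```

A relative, rank-based scale like this rewards whichever models are most efficient at a given task today, rather than fixing an absolute energy threshold that would quickly go stale.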

Since the project was launched in February, a new tool compares the energy use of chatbot queries with everyday activities like phone charging or driving, helping users understand the environmental impacts of the tech they use daily.

The tech sector is aware that AI emissions put its climate commitments in danger. Both Microsoft and Google no longer seem to be meeting their net zero targets. So far, however, no Big Tech company has agreed to use the methodology to test its own AI models.

It is possible that AI models will one day help in the fight against climate change. AI systems pioneered by companies like DeepMind are already designing next-generation solar panels and battery materials, optimising power grid distribution and reducing the carbon intensity of cement production.

Tech companies are moving towards cleaner energy sources too. Microsoft is investing in the Three Mile Island nuclear power plant and Alphabet is engaging with more experimental approaches such as small modular nuclear reactors. In 2024, the technology sector contributed to 92 per cent of new clean energy purchases in the US. 

But greater clarity is needed. OpenAI, Anthropic and other tech companies should start disclosing the energy consumption of their models. If they resist, then we need legislation that would make such disclosures mandatory.

As more users interact with AI systems, they should be given the tools to understand how much energy each request consumes. Knowing this might make them more careful about using AI for superfluous tasks like looking up a nation’s capital. Increased transparency would also be an incentive for companies developing AI-powered services to select smaller, more sustainable models that meet their specific needs, rather than defaulting to the largest, most energy-intensive options…(More)”.

Gen Z’s new side hustle: selling data


Article by Erica Pandey: “Many young people are more willing than their parents to share personal data, giving companies deeper insight into their lives.

Why it matters: Selling data is becoming the new selling plasma.

Case in point: Generation Lab, a youth polling company, is launching a new product, Verb.AI, today — betting that buying this data is the future of polling.

  • “We think corporations have extracted user data without fairly compensating people for their own data,” says Cyrus Beschloss, CEO of Generation Lab. “We think users should know exactly what data they’re giving us and should feel good about what they’re receiving in return.”

How it works: Generation Lab offers people cash — $50 or more per month, depending on use and other factors — to download a tracker onto their phones.

  • The product takes about 90 seconds to download, and once it’s on your phone, it tracks things like what you browse, what you buy, which streaming apps you use — all anonymously. There are also things it doesn’t track, like activity on your bank account.
  • Verb then uses that data to create a digital twin of you that lives in a central database and knows your preferences…(More)”.

What Happens When AI-Generated Lies Are More Compelling than the Truth?


Essay by Nicholas Carr: “…In George Orwell’s 1984, the functionaries in Big Brother’s Ministry of Truth spend their days rewriting historical records, discarding inconvenient old facts and making up new ones. When the truth gets hazy, tyrants get to define what’s true. The irony here is sharp. Artificial intelligence, perhaps humanity’s greatest monument to logical thinking, may trigger a revolution in perception that overthrows the shared values of reason and rationality we inherited from the Enlightenment.

In 1957, a Russian scientist-turned-folklorist named Yuri Mirolyubov published a translation of an ancient manuscript—a thousand years old, he estimated—in a Russian-language newspaper in San Francisco. Mirolyubov’s Book of Veles told stirring stories of the god Veles, a prominent deity in pre-Christian Slavic mythology. A shapeshifter, magician, and trickster, Veles would visit the mortal world in the form of a bear, sowing mischief wherever he went.

Mirolyubov claimed that the manuscript, written on thin wooden boards bound with leather straps, had been discovered by a Russian soldier in a bombed-out Ukrainian castle in 1919. The soldier had photographed the boards and given the pictures to Mirolyubov, who translated the work into modern Russian. Mirolyubov illustrated his published translation with one of the photographs, though the original boards, he said, had disappeared mysteriously during the Second World War. Though historians and linguists soon dismissed the folklorist’s Book of Veles as a hoax, its renown spread. Today, it’s revered as a holy text by certain neo-pagan and Slavic nationalist cults.

Mythmaking, more than truth seeking, is what seems likely to define the future of media and of the public square.

Myths are works of art. They provide a way of understanding the world that appeals not to reason but to emotion, not to the conscious mind but to the subconscious one. What is most pleasing to our sensibilities—what is most beautiful to us—is what feels most genuine, most worthy of belief. History and psychology both suggest that, in politics as in art, generative AI will succeed in fulfilling the highest aspiration of its creators: to make the virtual feel more authentic than the real…(More)”

Government ‘With’ The People 


Article by Nathan Gardels: “The rigid polarization that has gripped our societies and eroded trust in each other and in governing institutions feeds the appeal of authoritarian strongmen. Posing as tribunes of the people, they promise to lay down the law (rather than be constrained by it) and put the house in order not by bridging divides, but by targeting scapegoats and persecuting political adversaries who don’t conform to their ideological and cultural worldview.

The alternative to this course of illiberal democracy is the exact opposite: engaging citizens directly in governance through non-partisan platforms that encourage and enable deliberation, negotiation and compromise, to reach consensus across divides. Even as politics is tilting the other way at the national level, this approach of participation without populism is gaining traction from the bottom up.

The embryonic forms of this next step in democratic innovation, such as citizens’ assemblies or virtual platforms for bringing the public together and listening at scale, have so far been mostly advisory to the powers-that-be, with no guarantee that citizen input will have a binding impact on legislation or policy formation. That is beginning to change….

Claudia Chwalisz, who heads DemocracyNext, has spelled out the key elements of this innovative process that make it a model for others elsewhere:

  • Implementation should be considered from the start, not as an afterthought. The format of the final recommendations, the process for final approval, and the time needed to ensure this part of the process does not get neglected need to be considered in the early design stages of the assembly.
  • Dedicated time and resources for transforming recommendations into legislation are also crucial for successful implementation. Bringing citizens, politicians, and civil servants together in the final stages can help bridge the gap between recommendations and action. While it has been more typical for citizens’ assemblies to draft recommendations that they then hand onward to elected officials and civil servants, who review them and then respond to the citizens’ assembly, the Parisian model demonstrates another way.
  • Collaborative workshops where consensus amongst the triad of actors is needed add more time to the process, but they ensure a high level of consensus for the final output and reduce the time that would otherwise be needed for officials to review and respond to the citizens’ assembly’s recommendations.
  • Formal institutional integration of citizens’ assemblies through legal measures can help ensure their recommendations are taken seriously and ensure the assembly’s continuity regardless of shifts in government. The citizens’ assembly has become a part of Paris’s democratic architecture, as have other permanent citizens’ assemblies elsewhere. While one-off assemblies typically depend on political will at a moment in time and risk becoming politicized — that is, being associated with the party that initially launched the first one — an institutionalized citizens’ assembly anchored in policy and political decision-making helps to set the foundation for a new institution that can endure.
  • It is also important that there is regular engagement with all political parties and stakeholders throughout the process. This helps build cross-partisan support for final recommendations, as well as more sustainable support for the enduring nature of the permanent citizens’ assembly.”…(More)”.

Accounting for State Capacity


Essay by Kevin Hawickhorst: “The debates over the Department of Government Efficiency have revealed, if nothing else, that the federal budget is obscure even to the political combatants ostensibly responsible for developing and overseeing it. In the executive branch, Elon Musk highlights that billions of dollars of payments are processed by the Treasury without even a memo line. Meanwhile, in Congress, Republican politicians highlight the incompleteness of the bureaucracy’s spending records, while Democrats bemoan the Trump administration’s dissimulation in ceasing to share budgetary guidance documents. The camp followers of these obscure programs are thousands of federal contractors, pursuing vague goals with indefinite timelines. As soon as the ink on a bill is dry, it seems, Congress loses sight of its initiatives until their eventual success or their all-too-frequent failure.

Contrast this with the 1930s, when the Roosevelt administration provided Congress with hundreds of pages of spending reports every ten days, outlining how tax dollars were being put to use in minute detail. The speed and thoroughness with which these reports were produced are hard to fathom, and yet the administration was actually holding its best information back. FDR’s Treasury had itemized information on hundreds of thousands of projects, down to the individual checks that were written. Incredibly, politicians had better dashboards in the era of punch cards than we have in the era of AI. The decline in government competence runs deeper than our inability to match the speed and economy of New Deal construction: even their accounting was better. What happened?

Political scientists discuss the decline in government competence in terms of “state capacity,” which describes a government’s ability to achieve the goals it pursues. Most political scientists agree that the United States not only suffers from degraded state capacity in absolute terms, but has less state capacity today than in the early twentieth century. A popular theory for this decline blames the excessive proceduralism of the U.S. government: the “cascade of rigidity” or the “procedure fetish.”

But reformers need more than complaints. To rebuild state capacity, reformers need an affirmative vision of what good procedure should look like and, in order to enact it, knowledge of how government procedure is changed. The history of government budgeting and accounting reform illustrates both. There were three major eras of reform to federal accounting in the twentieth century: New Deal reforms of the 1930s, conservative reforms of the 1940s and 1950s, and liberal reforms of the 1960s. This history tells the story of how accounting reforms first built up American state capacity and how later reforms contributed to its gradual decline. These reforms thus offer lessons on rebuilding state capacity today…(More)”.

Humanitarian aid depends on good data: what’s wrong with the way it’s collected


Article by Vicki Squire: “The defunding of the US Agency for International Development (USAID), along with reductions in aid from the UK and elsewhere, raises questions about the continued collection of data that helps inform humanitarian efforts.

Humanitarian response plans rely on accurate, accessible and up-to-date data. Aid organisations use this to review needs, monitor health and famine risks, and ensure security and access for humanitarian operations.

The reliance on data – and in particular large-scale digitalised data – has intensified in the humanitarian sector over the past few decades. Major donors all proclaim a commitment to evidence-based decision making. The International Organization for Migration’s Displacement Tracking Matrix and the REACH impact initiative are two examples designed to improve operational and strategic awareness of key needs and risks.

Humanitarian data streams have already been affected by USAID cuts. For example, the Famine Early Warning Systems Network was abruptly closed, while the Demographic and Health Surveys programme was “paused”. The latter informed global health policies in areas ranging from maternal health and domestic violence to anaemia and HIV prevalence.

The loss of reliable, accessible and up-to-date data threatens monitoring capacity and early warning systems, while reducing humanitarian access and rendering security failures more likely…(More)”.

The Technopolar Paradox


Article by Ian Bremmer: “In February 2022, as Russian forces advanced on Kyiv, Ukraine’s government faced a critical vulnerability: with its Internet and communication networks under attack, its troops and leaders would soon be in the dark. Elon Musk—the de facto head of Tesla, SpaceX, X (formerly Twitter), xAI, the Boring Company, and Neuralink—stepped in. Within days, SpaceX had deployed thousands of Starlink terminals to Ukraine and activated satellite Internet service at no cost. Having kept the country online, Musk was hailed as a hero.

But the centibillionaire’s personal intervention—and Kyiv’s reliance on it—came with risks. Months later, Ukraine asked SpaceX to extend Starlink’s coverage to Russian-occupied Crimea, to enable a submarine drone strike that Kyiv wanted to carry out against Russian naval assets. Musk refused—worried, he said, that this would cause a major escalation in the war. Even the Pentagon’s entreaties on behalf of Ukraine failed to convince him. An unelected, unaccountable private citizen had unilaterally thwarted a military operation in an active war zone while exposing the fact that governments had remarkably little control over crucial decisions affecting their citizens and national security.

This was “technopolarity” in action: a technology leader not only driving stock market returns but also controlling aspects of civil society, politics, and international affairs that have been traditionally the exclusive preserve of nation-states. Over the past decade, the rise of such individuals and the firms they control has transformed the global order, which had been defined by states since the Peace of Westphalia enshrined them as the building blocks of geopolitics nearly 400 years ago. For most of this time, the structure of that order could be described as unipolar, bipolar, or multipolar, depending on how power was distributed among countries. The world, however, has since entered a “technopolar moment,” a term I used in Foreign Affairs in 2021 to describe an emerging order in which “a handful of large technology companies rival [states] for geopolitical influence.” Major tech firms have become powerful geopolitical actors, exercising a form of sovereignty over digital space and, increasingly, the physical world that potentially rivals that of states…(More)”.