Digital surveillance capitalism and cities: data, democracy and activism


Paper by Ashish Makanadar: “The rapid convergence of urbanization and digital technologies is fundamentally reshaping city governance through data-driven systems. This transformation, however, is largely controlled by surveillance capitalist entities, raising profound concerns for democratic values and citizen rights. As private interests extract behavioral data from public spaces without adequate oversight, the principles of transparency and civic participation are increasingly threatened. This erosion of data sovereignty represents a critical juncture in urban development, demanding urgent interdisciplinary attention. This comment proposes a paradigm shift in urban data governance, advocating for the reclamation of data sovereignty to prioritize community interests over corporate profit motives. The paper explores socio-technical pathways to achieve this goal, focusing on grassroots approaches that assert ‘data dignity’ through privacy-enhancing technologies and digital anonymity tools. It argues for the creation of distributed digital commons as viable alternatives to proprietary data silos, thereby democratizing access to and control over urban data. The discussion extends to long-term strategies, examining the potential of blockchain technologies and decentralized autonomous organizations in enabling self-sovereign data economies. These emerging models offer a vision of ‘crypto-cities’ liberated from extractive data practices, fostering environments where residents retain autonomy over their digital footprints. By critically evaluating these approaches, the paper aims to catalyze a reimagining of smart city technologies aligned with principles of equity, shared prosperity, and citizen empowerment. This realignment is essential for preserving democratic values in an increasingly digitized urban landscape…(More)”.

Rethinking the Measurement of Resilience for
Food and Nutrition Security


Paper by John M. Ulimwengu: “This paper presents a novel framework for assessing resilience in food systems, focusing on three dynamic metrics: return time, magnitude of deviation, and recovery rate. Traditional resilience measures have often relied on static and composite indicators, creating gaps in understanding the complex responses of food systems to shocks. This framework addresses these gaps, providing a more nuanced assessment of resilience in agrifood sectors. It highlights how integrating dynamic metrics enables policymakers to design tailored, sector-specific interventions that enhance resilience. Recognizing the data intensity required for these metrics, the paper indicates how emerging satellite imagery and advancements in artificial intelligence (AI) can make data collection both high-frequency and location-specific, at a fraction of the cost of traditional methods. These technologies facilitate a scalable approach to resilience measurement, enhancing the accuracy, timeliness, and accessibility of resilience data. The paper concludes with recommendations for refining resilience tools and adapting policy frameworks to better respond to the increasing challenges faced by food systems across the world…(More)”.
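The framework’s three dynamic metrics lend themselves to a simple computation once a time series of a food-system indicator is available. The sketch below is not from the paper; it assumes a regularly sampled indicator (for example, a production index), a known shock point, a pre-shock baseline, and a recovery tolerance, and computes return time, magnitude of deviation, and recovery rate under those assumptions.

```python
from typing import Optional, Sequence


def resilience_metrics(series: Sequence[float], shock_idx: int,
                       baseline: float, tolerance: float = 0.05) -> dict:
    """Illustrative versions of the three dynamic resilience metrics.

    series    -- indicator values at regular intervals (e.g., a monthly production index)
    shock_idx -- index at which the shock hits
    baseline  -- pre-shock reference level of the indicator
    tolerance -- fraction of baseline within which the system counts as recovered
    """
    post = list(series[shock_idx:])

    # Magnitude of deviation: deepest drop below the baseline after the shock.
    magnitude = max(baseline - v for v in post)

    # Return time: number of periods until the indicator is back within tolerance of baseline.
    return_time: Optional[int] = None
    for t, v in enumerate(post):
        if abs(baseline - v) <= tolerance * baseline:
            return_time = t
            break

    # Recovery rate: average gain per period between the trough and the recovery point
    # (or the end of the series if recovery has not yet occurred).
    trough_idx = min(range(len(post)), key=lambda t: post[t])
    end_idx = return_time if return_time is not None else len(post) - 1
    recovery_rate = (post[end_idx] - post[trough_idx]) / max(end_idx - trough_idx, 1)

    return {"return_time": return_time,
            "magnitude_of_deviation": magnitude,
            "recovery_rate": recovery_rate}


# Hypothetical example: a production index dips after a shock and climbs back toward baseline.
index = [100, 101, 99, 70, 75, 82, 90, 96, 99, 100]
print(resilience_metrics(index, shock_idx=3, baseline=100))
```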

Setting the Standard: Statistical Agencies’ Unique Role in Building Trustworthy AI


Article by Corinna Turbes: “As our national statistical agencies grapple with new challenges posed by artificial intelligence (AI), many agencies face intense pressure to embrace generative AI as a way to reach new audiences and demonstrate technological relevance. However, the rush to implement generative AI applications risks undermining these agencies’ fundamental role as authoritative data sources. Statistical agencies’ foundational mission—producing and disseminating high-quality, authoritative statistical information—requires a more measured approach to AI adoption.

Statistical agencies occupy a unique and vital position in our data ecosystem, entrusted with creating the reliable statistics that form the backbone of policy decisions, economic planning, and social research. The work of these agencies demands exceptional precision, transparency, and methodological rigor. Implementation of generative AI interfaces, while technologically impressive, could inadvertently compromise the very trust and accuracy that make these agencies indispensable.

While public-facing interfaces play a valuable role in democratizing access to statistical information, statistical agencies need not—and often should not—rely on generative AI to be effective in that effort. For statistical agencies, an extractive AI approach – which retrieves and presents existing information from verified databases rather than generating new content – offers a more appropriate path forward. By pulling from verified, structured datasets and providing precise, accurate responses, extractive AI systems can maintain the high standards of accuracy required while making statistical information more accessible to users who may find traditional databases overwhelming. An extractive, rather than generative, approach allows agencies to modernize data delivery while preserving their core mission of providing reliable, verifiable statistical information…(More)”.
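As a rough illustration of the extractive pattern the article advocates, the sketch below answers a user question by looking up a value in a small verified table and returning it verbatim, with its source, rather than generating new text. It is a hypothetical example: the series names, figures, and keyword-matching retrieval are placeholders standing in for a real search index or database query, not any agency’s actual data or interface.

```python
# All figures and series below are placeholders for illustration only.
VERIFIED_SERIES = {
    "unemployment rate": {"value": "4.1%", "period": "2024 Q3",
                          "source": "example labour force survey"},
    "median household income": {"value": "74,580", "period": "2022",
                                "source": "example household income survey"},
}


def extractive_answer(question: str) -> str:
    """Return a stored, verified statistic verbatim instead of generating text."""
    q = question.lower()
    # Naive keyword matching stands in for a proper retrieval layer (search index, SQL, etc.).
    for name, record in VERIFIED_SERIES.items():
        if name in q:
            return (f"{name.title()}: {record['value']} ({record['period']}). "
                    f"Source: {record['source']}.")
    return "No verified statistic matches this question; please consult the published tables."


print(extractive_answer("What is the unemployment rate right now?"))
```

Because every response is copied from a curated record rather than synthesized, an answer can only be as wrong as the underlying table, which keeps accountability with the published data rather than with a model’s output.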

Revealed: bias found in AI system used to detect UK benefits fraud


Article by Robert Booth: “An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.

An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.

The emergence of the bias comes after the DWP this summer claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.

This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is attempting to help cut an estimated £8bn a year lost in fraud and error – is “reasonable and proportionate”.

But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal.

Campaigners responded by accusing the government of a “hurt first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to cheat the system…(More)”.
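The released documents do not spell out the method behind the “statistically significant outcome disparity”, so the sketch below simply shows, in general terms, how such a fairness analysis is often run: compare the rate at which claims from different groups are flagged for investigation and test whether the gap is statistically significant. The group labels and counts are invented for illustration.

```python
import math

# Synthetic counts: (claims flagged for review, total claims) per group. Invented numbers.
referrals = {
    "group_a": (320, 10_000),
    "group_b": (540, 10_000),
}


def two_proportion_z(flagged_1: int, n_1: int, flagged_2: int, n_2: int) -> tuple:
    """Two-proportion z-test for a difference in flag rates between two groups."""
    p1, p2 = flagged_1 / n_1, flagged_2 / n_2
    pooled = (flagged_1 + flagged_2) / (n_1 + n_2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    return p1, p2, (p1 - p2) / se


(f_a, n_a), (f_b, n_b) = referrals["group_a"], referrals["group_b"]
rate_a, rate_b, z = two_proportion_z(f_a, n_a, f_b, n_b)
print(f"flag rate A = {rate_a:.1%}, flag rate B = {rate_b:.1%}, z = {z:.2f}")
# |z| > 1.96 would indicate a statistically significant outcome disparity at the 5% level.
```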

Predictability, AI, And Judicial Futurism: Why Robots Will Run The Law And Textualists Will Like It


Paper by Jack Kieffaber: “The question isn’t whether machines are going to replace judges and lawyers—they are. The question is whether that’s a good thing. If you’re a textualist, you have to answer yes. But you won’t—which means you’re not a textualist. Sorry.

Hypothetical: The year is 2030. AI has far eclipsed the median federal jurist as a textual interpreter. A new country is founded; it’s a democratic republic that uses human legislators to write laws and programs a state-sponsored Large Language Model called “Judge.AI” to apply those laws to facts. The model makes judicial decisions as to conduct on the back end, but can also provide advisory opinions on the front end; if a citizen types in his desired action and hits “enter,” Judge.AI will tell him, ex ante, exactly what it would decide ex post if the citizen were to perform the action and be prosecuted. The primary result is perfect predictability; secondary results include the abolition of case law, the death of common law, and the replacement of all judges—indeed, all lawyers—by a single machine. Don’t fight the hypothetical, assume it works. This article poses the question: Is that a utopia or a dystopia?

If you answer dystopia, you cannot be a textualist. Part I of this article establishes why: Because predictability is textualism’s only lodestar, and Judge.AI is substantially more predictable than any regime operating today. Part II-A dispatches rebuttals premised on positive nuances of the American system; such rebuttals forget that my hypothetical presumes a new nation and take for granted how much of our nation’s founding was premised on mitigating exactly the kinds of human error that Judge.AI would eliminate. And Part II-B dispatches normative rebuttals, which ultimately amount to moral arguments about objective good—which are none of the textualist’s business.

When the dust clears, you have only two choices: You’re a moralist, or you’re a formalist. If you’re the former, you’ll need a complete account of the objective good—which has evaded man for his entire existence. If you’re the latter, you should relish the fast-approaching day when all laws and all lawyers are usurped by a tin box. But you’re going to say you’re something in between. And you’re not…(More)”.

The Next Phase of the Data Economy: Economic & Technological Perspectives


Paper by Jad Esber et al.: “The data economy is poised to evolve toward a model centered on individual agency and control, moving us toward a world where data is more liquid across platforms and applications. In this future, products will either utilize existing personal data stores or create them when they don’t yet exist, empowering individuals to fully leverage their own data for various use cases.

The analysis begins by establishing a foundation for understanding data as an economic good and the dynamics of data markets. The article then investigates the concept of personal data stores, analyzing the historical challenges that have limited their widespread adoption. Building on this foundation, the article then considers how recent shifts in regulation, technology, consumer behavior, and market forces are converging to create new opportunities for a user-centric data economy. The article concludes by discussing potential frameworks for value creation and capture within this evolving paradigm, summarizing key insights and potential future directions for research, development, and policy.

We hope this article can help shape the thinking of scholars, policymakers, investors, and entrepreneurs, as new data ownership and privacy technologies emerge, and regulatory bodies around the world mandate open flows of data and new terms of service intended to empower users as well as small-to-medium–sized businesses…(More)”.
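As a rough sketch of the “use an existing personal data store or create one” pattern the paper describes, the example below models a consent-gated, get-or-create flow in which data stays under the owner’s control and applications read it only through explicit, scoped grants. The class and field names are hypothetical and do not refer to any particular product or standard.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set


@dataclass
class PersonalDataStore:
    """Hypothetical user-controlled store: apps read data only under an explicit, scoped grant."""
    owner: str
    records: Dict[str, dict] = field(default_factory=dict)     # scope -> data, e.g. "purchases"
    grants: Dict[str, Set[str]] = field(default_factory=dict)  # app -> granted scopes

    def grant(self, app: str, scope: str) -> None:
        self.grants.setdefault(app, set()).add(scope)

    def read(self, app: str, scope: str) -> Optional[dict]:
        # Refuse access unless the owner has granted this app this scope.
        if scope in self.grants.get(app, set()):
            return self.records.get(scope)
        return None


_STORES: Dict[str, PersonalDataStore] = {}


def get_or_create_store(owner: str) -> PersonalDataStore:
    """Use the owner's existing store, or create one when it does not yet exist."""
    return _STORES.setdefault(owner, PersonalDataStore(owner=owner))


# Usage: an app sees nothing until the owner grants it the relevant scope.
store = get_or_create_store("alice")
store.records["purchases"] = {"2024-12": ["groceries", "transit pass"]}
print(store.read("budget-app", "purchases"))   # None -- no grant yet
store.grant("budget-app", "purchases")
print(store.read("budget-app", "purchases"))   # returned under an explicit grant
```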

The Emergence of National Data Initiatives: Comparing proposals and initiatives in the United Kingdom, Germany, and the United States


Article by Stefaan Verhulst and Roshni Singh: “Governments are increasingly recognizing data as a pivotal asset for driving economic growth, enhancing public service delivery, and fostering research and innovation. This recognition has intensified as policymakers acknowledge that data serves as the foundational element of artificial intelligence (AI) and that advancing AI sovereignty necessitates a robust data ecosystem. However, substantial portions of generated data remain inaccessible or underutilized. In response, several nations are initiating or exploring the launch of comprehensive national data strategies designed to consolidate, manage, and utilize data more effectively and at scale. As these initiatives evolve, discernible patterns in their objectives, governance structures, data-sharing mechanisms, and stakeholder engagement frameworks reveal both shared principles and country-specific approaches.

This blog seeks to start some initial research on the emergence of national data initiatives by examining three of them (in the United Kingdom, Germany, and the United States) and exploring their strategic orientations and broader implications…(More)”.

The British state is blind


The Economist: “Britain is a bit bigger than it thought. In 2023 net migration stood at 906,000 people, rather more than the 740,000 previously estimated, according to the Office for National Statistics. It is equivalent to discovering an extra Slough. New numbers for 2022 also arrived. At first the ONS thought net migration stood at 606,000. Now it reckons the figure was 872,000, a difference roughly the size of Stoke-on-Trent, a small English city.

If statistics enable the state to see, then the British government is increasingly short-sighted. Fundamental questions, such as how many people arrive each year, are now tricky to answer. How many people are in work? The answer is fuzzy. Just how big is the backlog of court cases? The Ministry of Justice will not say, because it does not know. Britain is a blind state.

This causes all sorts of problems. The Labour Force Survey, once a gold standard of data collection, now struggles to provide basic figures. At one point the Resolution Foundation, an economic think-tank, reckoned the ONS had underestimated the number of workers by almost 1m since 2019. Even after the ONS rejigged its tally on December 3rd, the discrepancy is still perhaps 500,000, Resolution reckons. Things are so bad that Andrew Bailey, the governor of the Bank of England, makes jokes about the inaccuracy of Britain’s job-market stats in after-dinner speeches—akin to a pilot bursting out of the cockpit mid-flight and asking to borrow a compass, with a chuckle.

Sometimes the sums in question are vast. When the Department for Work and Pensions put out a new survey on household income in the spring, it was missing about £40bn ($51bn) of benefit income, roughly 1.5% of GDP or 13% of all welfare spending. This makes things like calculating the rate of child poverty much harder. Labour MPs want this line to go down. Yet the government has little idea where the line is to begin with.

Even small numbers are hard to count. Britain has a backlog of court cases. How big no one quite knows: the Ministry of Justice has not published any data on it since March. In the summer, concerned about reliability, it held back the numbers (which means the numbers it did publish are probably wrong, says the Institute for Government, another think-tank). And there is no way of tracking someone from charge to court to prison to probation. Justice is meant to be blind, but not to her own conduct…(More)”.

Impact Inversion


Blog by Victor Zhenyi Wang: “The very first project I worked on when I transitioned from commercial data science to development was during the nadir between South Africa’s first two COVID waves. A large international foundation was interested in working with the South African government and a technology non-profit to build an early warning system for COVID. The non-profit operated a WhatsApp based health messaging service that served about 2 million people in South Africa. The platform had run a COVID symptoms questionnaire which the foundation hoped could help the government predict surges in cases.

This kind of data-based “nowcasting” proved a useful tool in a number of other places e.g. some cities in the US. Yet in the context of South Africa, where the National Department of Health was mired in serious capacity constraints, government stakeholders were bearish about the usefulness of such a tool. Nonetheless, since the foundation was interested in funding this project, we went ahead with it anyway. The result was that we pitched this “early warning system” a handful of times to polite public health officials but it was otherwise never used. A classic case of development practitioners rendering problems technical and generating non-solutions that primarily serve the strategic objectives of the funders.

The technology non-profit did however express interest in a different kind of service — what about a language model that helps users answer questions about COVID? The non-profit’s WhatsApp messaging service is menu-based and they thought that a natural language interface could provide a better experience for users by letting them engage with health content on their own terms. Since we had ample funding from the foundation for the early warning system, we decided to pursue the chatbot project.

The project has since expanded to multiple other services run by the same non-profit, including the largest digital health service in South Africa. It has won multiple grants and partnerships, including with Google, and has spun out into its own open-source library. In many ways, in terms of sheer number of lives affected, this is the most impactful project I have had the privilege of supporting in my career in development, and I am deeply grateful to have been part of the team that brought it into existence.

Yet the truth is, the “impact” of this class of interventions remains unclear. Even though a large randomized controlled trial was done to assess the impact of the WhatsApp service, such an evaluation only captures the performance of the service on outcome variables determined by the non-profit, not on whether these outcomes are appropriate. It certainly does not tell us whether the service was the best means available to achieve the ultimate goal of improving the lives of those in communities underserved by health services.

This project, and many others that I have worked on as a data scientist in development, uses an implicit framework for impact which I describe as the design-to-impact pipeline. A technology is designed and developed, then its impact on the world is assessed. There is a strong emphasis on reforming, on improving the design, development, and deployment of development technologies. Development practitioners have a broad range of techniques to make sure that the process of creation is ethical and responsible — in some sense, legitimate. With the broad adoption of data-based methods of program evaluation, e.g. randomized controlled trials, we might even make knowledge claims that an intervention truly ought to bring certain benefits to the communities in which it is placed. This view imagines that a technology, once this process is completed, is simply unleashed onto the world, and that its impact is simply what was assessed ex ante. An industry of monitoring and evaluation surrounds its subsequent deployment; the relative success of interventions depends on the performance of benchmark indicators…(More)”.

Data for Better Governance: Building Government Analytics Ecosystems in Latin America and the Caribbean


Report by the World Bank: “Governments in Latin America and the Caribbean face significant development challenges, including insufficient economic growth, inflation, and institutional weaknesses. Overcoming these issues requires identifying systemic obstacles through data-driven diagnostics and equipping public officials with the skills to implement effective solutions.

Although public administrations in the region often have access to valuable data, they frequently fall short in analyzing it to inform decisions. The cost of this gap is substantial: inefficiencies in procurement, misdirected transfers, and poorly managed human resources result in an estimated waste of 4% of GDP, equivalent to 17% of all public spending.

The report “Data for Better Governance: Building Government Analytics Ecosystems in Latin America and the Caribbean” outlines a roadmap for developing government analytics, focusing on key enablers such as data infrastructure and analytical capacity, and offers actionable strategies for improvement…(More)”.