Digitalizing sewage: The politics of producing, sharing, and operationalizing data from wastewater-based surveillance


Paper by Josie Wittmer, Carolyn Prouse, and Mohammed Rafi Arefin: “Expanded during the COVID-19 pandemic, Wastewater-Based Surveillance (WBS) is now heralded by scientists and policy makers alike as the future of monitoring and governing urban health. The expansion of WBS reflects larger neoliberal governance trends whereby digitalizing states increasingly rely on producing big data as a ‘best practice’ to surveil various aspects of everyday life. With a focus on three South Asian cities, our paper investigates the transnational pathways through which WBS data is produced, made known, and operationalized in ‘evidence-based’ decision-making in a time of crisis. We argue that in South Asia, wastewater surveillance data is actively produced through fragile but power-laden networks of transnational and local knowledge, funding, and practices. Using mixed qualitative methods, we found these networks produced artifacts like dashboards to communicate data to the public in ways that enabled claims to objectivity, ethical interventions, and transparency. Interrogating these representations, we demonstrate how these artifacts open up messy spaces of translation that trouble linear notions of objective data informing accountable, transparent, and evidence-based decision-making for diverse urban actors. By thinking through the production of precarious biosurveillance infrastructures, we respond to calls for more robust ethical and legal frameworks for the field and suggest that the fragility of WBS infrastructures has important implications for the long-term trajectories of urban public health governance in the global South…(More)”

Will Artificial Intelligence Replace Us or Empower Us?


Article by Peter Coy: “…But A.I. could also be designed to empower people rather than replace them, as I wrote a year ago in a newsletter about the M.I.T. Shaping the Future of Work Initiative.

Which of those A.I. futures will be realized was a big topic at the San Francisco conference, which was the annual meeting of the American Economic Association, the American Finance Association and 65 smaller groups in the Allied Social Science Associations.

Erik Brynjolfsson of Stanford was one of the busiest economists at the conference, dashing from one panel to another to talk about his hopes for a human-centric A.I. and his warnings about what he has called the “Turing Trap.”

Alan Turing, the English mathematician and World War II code breaker, proposed in 1950 to evaluate the intelligence of computers by whether they could fool someone into thinking they were human. His “imitation game” led the field in an unfortunate direction, Brynjolfsson argues — toward creating machines that behaved as much like humans as possible, instead of like human helpers.

Henry Ford didn’t set out to build a car that could mimic a person’s walk, so why should A.I. experts try to build systems that mimic a person’s mental abilities? Brynjolfsson asked at one session I attended.

Other economists have made similar points: Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University use the term “so-so technologies” for systems that replace human beings without meaningfully increasing productivity, such as self-checkout kiosks in supermarkets.

People will need a lot more education and training to take full advantage of A.I.’s immense power, so that they aren’t just elbowed aside by it. “In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital,” Brynjolfsson wrote in 2022, citing research by him and others…(More)”.

AI Is Bad News for the Global South


Article by Rachel Adams: “…AI’s adoption in developing regions is also limited by its design. AI designed in Silicon Valley on largely English-language data is not often fit for purpose outside of wealthy Western contexts. The productive use of AI requires stable internet access or smartphone technology; in sub-Saharan Africa, only 25 percent of people have reliable internet access, and it is estimated that African women are 32 percent less likely to use mobile internet than their male counterparts.

Generative AI technologies are also predominantly developed using the English language, meaning that the outputs they produce for non-Western users and contexts are oftentimes useless, inaccurate, and biased. Innovators in the global south have to put in at least twice the effort to make their AI applications work for local contexts, often by retraining models on localized datasets and through extensive trial and error practices.

Where AI is designed to generate profit and entertainment only for the already privileged, it will not be effective in addressing the conditions of poverty and in changing the lives of groups that are marginalized from the consumer markets of AI. Without a high level of saturation across major industries, and without the infrastructure in place to enable meaningful access to AI by all people, global south nations are unlikely to see major economic benefits from the technology.

As AI is adopted across industries, human labor is changing. For poorer countries, this is engendering a new race to the bottom where machines are cheaper than humans and the cheap labor that was once offshored to their lands is now being onshored back to wealthy nations. The people most impacted are those with lower education levels and fewer skills, whose jobs can be more easily automated. In short, much of the population in lower- and middle-income countries may be affected, severely impacting the lives of millions of people and threatening the capacity of poorer nations to prosper…(More)”.

Theorizing the functions and patterns of agency in the policymaking process


Paper by Giliberto Capano, et al: “Theories of the policy process understand the dynamics of policymaking as the result of the interaction of structural and agency variables. While these theories tend to conceptualize structural variables in a careful manner, agency (i.e. the actions of individual agents, like policy entrepreneurs, policy leaders, policy brokers, and policy experts) is left as a residual piece in the puzzle of the causality of change and stability. This treatment of agency leaves room for conceptual overlaps, analytical confusion and empirical shortcomings that can complicate the life of the empirical researcher and, most importantly, hinder the ability of theories of the policy process to fully address the drivers of variation in policy dynamics. Drawing on Merton’s concept of function, this article presents a novel theorization of agency in the policy process. We start from the assumption that agency functions are a necessary component through which policy dynamics evolve. We then theorise that agency can fulfil four main functions – steering, innovation, intermediation and intelligence – that need to be performed, by individual agents, in any policy process through four patterns of action – leadership, entrepreneurship, brokerage and knowledge accumulation – and we provide a roadmap for operationalising and measuring these concepts. We then demonstrate what can be achieved in terms of analytical clarity and potential theoretical leverage by applying this novel conceptualisation to two major policy process theories: the Multiple Streams Framework (MSF) and the Advocacy Coalition Framework (ACF)…(More)”.

The Access to Public Information: A Fundamental Right


Book by Alejandra Soriano Diaz: “Information is not only a fundamental human right, but it has also been shaped as a pillar for the exercise of other human rights around the world. It is the path to holding authorities and other powerful actors to account before the people, who are, for all purposes, the actual owners of public data.

Providing information about public decisions that have the potential to significantly impact a community is vital to modern democracy. This book explores the forms in which individuals and collectives are able to voice their opinions and participate in public decision-making when decisions with long-lasting effects on present and future generations are at stake. The strong correlation between the right to access public information and the enjoyment of civil and political rights, as well as economic and environmental rights, emphasizes their interdependence.

This study raises a number of important questions to mobilize towards openness and empowerment of people’s right of ownership of their public information…(More)”.

Digital Governance: Confronting the Challenges Posed by Artificial Intelligence


Book edited by Kostina Prifti, Esra Demir, Julia Krämer, Klaus Heine, and Evert Stamhuis: “This book explores the structure and frameworks of digital governance, focusing on various regulatory patterns, with the aim of tackling the disruptive impact of artificial intelligence (AI) technologies. Addressing the various challenges posed by AI technologies, this book explores potential avenues for crafting legal remedies and solutions, spanning liability of AI, platform governance, and the implications for data protection and privacy…(More)”.

Anticipatory Governance: Shaping a Responsible Future


Book edited by Melodena Stephens, Raed Awamleh and Frederic Sicre: “Anticipatory Governance is the systemic process of future shaping built on the understanding that the future is not a continuation of the past or present, thus making foresight a complex task requiring the engagement of the whole of government with its constituents in a constructive and iterative manner to achieve collective intelligence. Effective anticipatory governance amplifies the fundamental properties of agile government to build trust, challenge assumptions, and reach consensus. Moreover, anticipatory governance sets the foundation to adapt to exponential change. This seismic shift in the governance environment should lead to urgent rethinking of the ways and means governments and large corporate players formulate strategies, design processes, develop human capital and shape institutional culture to achieve public value.

From a long-term multigenerational perspective, anticipatory governance is a key component to ensure guardrails for the future. Systems thinking is needed to harness our collective intelligence, by tapping into knowledge trapped within nations, organizations, and people. Many of the wicked problems governments and corporations are grappling with like artificial intelligence applications and ethics, climate change, refugee migration, education for future skills, and health care for all, require a “system of systems”, or anticipatory governance.

Yet, no matter how much we invest in foresight and shaping the future, we still need an agile government approach to manage unintended outcomes and people’s expectations. Crisis management which begins with listening to weak signals, sensemaking, intelligence management, reputation enhancement, and public value alignment and delivery, is critical. This book dives into the theory and practice of anticipatory governance and sets the agenda for future research…(More)”

The world of tomorrow


Essay by Virginia Postrel: “When the future arrived, it felt… ordinary. What happened to the glamour of tomorrow?

Progress used to be glamorous. For the first two thirds of the twentieth century, the terms modern, future, and world of tomorrow shimmered with promise.

Glamour is more than a synonym for fashion or celebrity, although these things can certainly be glamorous. So can a holiday resort, a city, or a career. The military can be glamorous, as can technology, science, or the religious life. It all depends on the audience. Glamour is a form of communication that, like humor, we recognize by its characteristic effect. Something is glamorous when it inspires a sense of projection and longing: if only . . .

Whatever its incarnation, glamour offers a promise of escape and transformation. It focuses deep, often unarticulated longings on an image or idea that makes them feel attainable. Both the longings – for wealth, happiness, security, comfort, recognition, adventure, love, tranquility, freedom, or respect – and the objects that represent them vary from person to person, culture to culture, era to era. In the twentieth century, ‘the future’ was a glamorous concept…

Much has been written about how and why culture and policy repudiated the visions of material progress that animated the first half of the twentieth century, including a special issue of this magazine inspired by J Storrs Hall’s book Where Is My Flying Car? The subtitle of James Pethokoukis’s recent book The Conservative Futurist is ‘How to create the sci-fi world we were promised’. Like Peter Thiel’s famous complaint that ‘we wanted flying cars, instead we got 140 characters’, the phrase captures a sense of betrayal. Today’s techno-optimism is infused with nostalgia for the retro future.

But the most common explanations for the anti-Promethean backlash fall short. It’s true but incomplete to blame the environmental consciousness that spread in the late sixties…

How exactly today’s longings might manifest themselves, whether in glamorous imagery or real-life social evolution, is hard to predict. But one thing is clear: For progress to be appealing, it must offer room for diverse pursuits and identities, permitting communities with different commitments and values to enjoy a landscape of pluralism without devolving into mutually hostile tribes. The ideal of the one best way passed long ago. It was glamorous in its day but glamour is an illusion…(More)”.

The AI tool that can interpret any spreadsheet instantly


Article by Duncan C. McElfresh: “Say you run a hospital and you want to estimate which patients have the highest risk of deterioration so that your staff can prioritize their care. You create a spreadsheet in which there is a row for each patient, and columns for relevant attributes, such as age or blood-oxygen level. The final column records whether the person deteriorated during their stay. You can then fit a mathematical model to these data to estimate an incoming patient’s deterioration risk. This is a classic example of tabular machine learning, a technique that uses tables of data to make inferences. This usually involves developing — and training — a bespoke model for each task. Writing in Nature, Hollmann et al. report a model that can perform tabular machine learning on any data set without being trained specifically to do so.

Tabular machine learning shares a rich history with statistics and data science. Its methods are foundational to modern artificial intelligence (AI) systems, including large language models (LLMs), and its influence cannot be overstated. Indeed, many online experiences are shaped by tabular machine-learning models, which recommend products, generate advertisements and moderate social-media content. Essential industries such as healthcare and finance are also steadily, if cautiously, moving towards increasing their use of AI.

Despite the field’s maturity, Hollmann and colleagues’ advance could be revolutionary. The authors’ contribution is known as a foundation model, which is a general-purpose model that can be used in a range of settings. You might already have encountered foundation models, perhaps unknowingly, through AI tools, such as ChatGPT and Stable Diffusion. These models enable a single tool to offer varied capabilities, including text translation and image generation. So what does a foundation model for tabular machine learning look like?

Let’s return to the hospital example. With spreadsheet in hand, you choose a machine-learning model (such as a neural network) and train the model with your data, using an algorithm that adjusts the model’s parameters to optimize its predictive performance (Fig. 1a). Typically, you would train several such models before selecting one to use — a labour-intensive process that requires considerable time and expertise. And of course, this process must be repeated for each unique task.

Figure 1 | A foundation model for tabular machine learning. a, Conventional machine-learning models are trained on individual data sets using mathematical optimization algorithms. A different model needs to be developed and trained for each task, and for each data set. This practice takes years to learn and requires extensive time and computing resources. b, By contrast, a ‘foundation’ model could be used for any machine-learning task and is pre-trained on the types of data used to train conventional models. This type of model simply reads a data set and can immediately produce inferences about new data points. Hollmann et al. developed a foundation model for tabular machine learning, in which inferences are made on the basis of tables of data. Tabular machine learning is used for tasks as varied as social-media moderation and hospital decision-making, so the authors’ advance is expected to have a profound effect in many areas…(More)”
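The conventional per-task workflow the article describes (pick a model, fit it to your table, repeat for every new task) can be sketched as follows. This is a minimal illustration, not the authors’ code: the patient attributes, the risk formula behind the synthetic labels, and the choice of logistic regression as the “bespoke model” are all invented for the example.

```python
# A minimal sketch (not the authors' code) of the conventional per-task
# workflow: hypothetical patient features, a hand-picked model, and a
# training step that must be repeated for every new data set and task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 90, n)              # one column per attribute
spo2 = rng.uniform(85, 100, n)            # blood-oxygen saturation (%)
# Synthetic label: older patients with lower SpO2 deteriorate more often.
risk = 1 / (1 + np.exp(-(0.05 * (age - 60) - 0.4 * (spo2 - 94))))
deteriorated = rng.random(n) < risk       # the spreadsheet's final column

X = np.column_stack([age, spo2])
X_train, X_test, y_train, y_test = train_test_split(
    X, deteriorated, random_state=0)

# The "bespoke model" step: choose and fit a model for this one task only.
model = LogisticRegression().fit(X_train, y_train)

# Estimate an incoming patient's deterioration risk.
incoming = [[78, 89]]                     # age 78, SpO2 89%
print(model.predict_proba(incoming)[0, 1])
```

The point of Hollmann and colleagues’ foundation model is precisely that the fitting step above, which must be redone for each new spreadsheet, is replaced by a single pre-trained model that reads the table and produces inferences directly.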

The Future of Jobs Report 2025


Report by the World Economic Forum: “Technological change, geoeconomic fragmentation, economic uncertainty, demographic shifts and the green transition – individually and in combination – are among the major drivers expected to shape and transform the global labour market by 2030. The Future of Jobs Report 2025 brings together the perspective of over 1,000 leading global employers—collectively representing more than 14 million workers across 22 industry clusters and 55 economies from around the world—to examine how these macrotrends impact jobs and skills, and the workforce transformation strategies employers plan to embark on in response, across the 2025 to 2030 timeframe…(More)”.