Do Policy Schools Still Have a Point?


Article by Stephen M. Walt: “Am I proposing that we toss out the current curriculum, stop teaching microeconomics, democratic theory, public accounting, econometrics, foreign policy, applied ethics, history, or any of the other building blocks of today’s public policy curriculum? Not yet. But we ought to devote more time and effort to preparing students for a world that is going to be radically different from the one we’ve known in the past—and sooner than they think.

I have three modest proposals.

First, and somewhat paradoxically, the prospect of radical change highlights the importance of basic theories. Empirical patterns derived from past experience (e.g., “democracies don’t fight each other”) may be of little value if the political and social conditions under which those laws were discovered no longer exist. To make sense of radically new circumstances, we will have to rely on causal explanations (i.e., theories) to help us foresee what is likely to occur and to anticipate the results of different policy choices. Knowledge derived from simplistic hypothesis testing or simple historical analogies will be less useful than rigorous and refined theories that tell us what’s causing what and help us understand the effects of different actions. Even more sophisticated efforts to teach “applied history” will fail if past events are not properly interpreted. The past never speaks to us directly; all historical interpretation is in some sense dependent on the theories or frameworks that we bring to these events. We need to know not just what happened in some earlier moment; we need to understand why it happened as it did and whether similar causal forces are at work today. Providing a causal explanation requires theory.

At the same time, some of our existing theories will need to be revised (or even abandoned), and new ones may need to be invented. We cannot escape reliance on some sort of theory, but rigid and uncritical adherence to a particular worldview can be just as dangerous as trying to operate solely with one’s gut instincts. For this reason, public policy schools should expose students to a wider range of theoretical approaches than they currently do and teach students how to think critically about them and to identify their limitations along with their strengths…(More)”.

The Branding Dilemma of AI: Steering Towards Efficient Regulation


Blog by Zeynep Engin: “…Undoubtedly, the term ‘Artificial Intelligence’ has captured the public imagination, proving to be an excellent choice from a marketing standpoint (particularly serving the marketing goals of big AI tech companies). However, this has not been without its drawbacks. The field has experienced several ‘AI winters’ when lofty promises failed to translate into real-world outcomes. More critically, this term has anthropomorphized what are, at their core, high-dimensional statistical optimization processes. Such representation has obscured their true nature and the extent of their potential. Moreover, as computing capacities have expanded exponentially, the ability of these systems to process large datasets quickly and precisely, identifying patterns autonomously, has often been misinterpreted as evidence of human-like or even superhuman intelligence. Consequently, AI systems have been elevated to almost mystical status, perceived as incomprehensible to humans and, thus, uncontrollable by humans…

A profound shift in the discourse surrounding AI is urgently necessary. The quest to replicate or surpass human intelligence, while technologically fascinating, does not fully encapsulate the field’s true essence and progress. Indeed, AI has seen significant advances, uncovering a vast array of functionalities. However, its core strength still lies in computational speed and precision — a mechanical prowess. The ‘magic’ of AI truly unfolds when this computational capacity intersects with the wealth of real-world data generated by human activities and the environment, transforming human directives into computational actions. Essentially, we are now outsourcing complex processing tasks to machines, moving beyond crafting bespoke solutions for each problem in favour of leveraging the vast computational resources we have. This transition does not yield an ‘artificial intelligence’, but poses a new challenge to human intelligence in the knowledge creation cycle: the responsibility to formulate the ‘right’ questions and vigilantly monitor the outcomes of such intricate processing, ensuring the mitigation of any potential adverse impacts…(More)”.

The Data Revolution and the Study of Social Inequality: Promise and Perils


Paper by Mario L. Small: “The social sciences are in the midst of a revolution in access to data, as governments and private companies have accumulated vast digital records of rapidly multiplying aspects of our lives and made those records available to researchers. The accessibility and comprehensiveness of the data are unprecedented. How will the data revolution affect the study of social inequality? I argue that the speed, breadth, and low cost with which large-scale data can be acquired promise a dramatic transformation in the questions we can answer, but this promise can be undercut by size-induced blindness, the tendency to ignore important limitations amidst a source with billions of data points. The likely consequences for what we know about the social world remain unclear…(More)”.

The New Digital Dark Age


Article by Gina Neff: “For researchers, social media has always represented greater access to data, more democratic involvement in knowledge production, and greater transparency about social behavior. Getting a sense of what was happening—especially during political crises, major media events, or natural disasters—was as easy as looking around a platform like Twitter or Facebook. In 2024, however, that will no longer be possible.

In 2024, we will face a grim digital dark age, as social media platforms transition away from the logic of Web 2.0 and toward one dictated by AI-generated content. Companies have rushed to incorporate large language models (LLMs) into online services, complete with hallucinations (inaccurate, unjustified responses) and mistakes, which have further fractured our trust in online information.

Another aspect of this new digital dark age comes from not being able to see what others are doing. Twitter once pulsed with the publicly readable sentiment of its users. Social researchers loved Twitter data, relying on it because it provided a ready, reasonable approximation of how a significant slice of internet users behaved. However, Elon Musk has now priced researchers out of Twitter data: the company recently announced it was ending free access to the platform’s API. This has made it difficult, if not impossible, to obtain data needed for research on topics such as public health, natural disaster response, political campaigning, and economic activity. It was a harsh reminder that the modern internet has never been free or democratic, but instead walled and controlled.

Closer cooperation with platform companies is not the answer. X, for instance, has filed a suit against independent researchers who pointed out the rise in hate speech on the platform. Recently, it has also been revealed that researchers who used Facebook and Instagram’s data to study the platforms’ role in the US 2020 elections had been granted “independence by permission” by Meta. This means that the company chooses which projects to share its data with and, while the research may be independent, Meta also controls what types of questions are asked and who asks them…(More)”.

Fairness and Machine Learning


Book by Solon Barocas, Moritz Hardt and Arvind Narayanan: “…introduces advanced undergraduate and graduate students to the intellectual foundations of this recently emergent field, drawing on a diverse range of disciplinary perspectives to identify the opportunities and hazards of automated decision-making. It surveys the risks in many applications of machine learning and provides a review of an emerging set of proposed solutions, showing how even well-intentioned applications may give rise to objectionable results. It covers the statistical and causal measures used to evaluate the fairness of machine learning models as well as the procedural and substantive aspects of decision-making that are core to debates about fairness, including a review of legal and philosophical perspectives on discrimination. This incisive textbook prepares students of machine learning to do quantitative work on fairness while reflecting critically on its foundations and its practical utility.

• Introduces the technical and normative foundations of fairness in automated decision-making
• Covers the formal and computational methods for characterizing and addressing problems
• Provides a critical assessment of their intellectual foundations and practical utility
• Features rich pedagogy and extensive instructor resources…(More)”
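The statistical group-fairness measures the book surveys can be made concrete in a few lines of code. A minimal sketch (our own illustration, not drawn from the book, using hypothetical toy data) of two common metrics for a binary classifier:

```python
# Illustrative only: two group-fairness measures for a binary classifier,
# computed from raw prediction/label/group lists.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate = {}
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between two groups."""
    tpr = {}
    for g in (0, 1):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, group) if grp == g]
        positives = [p for t, p in pairs if t == 1]
        tpr[g] = sum(positives) / len(positives)
    return abs(tpr[0] - tpr[1])

# Hypothetical toy data: labels, predictions, and a binary group attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_true, y_pred, group))
```

As the book's discussion of statistical versus causal measures suggests, metrics like these capture only one narrow slice of fairness; satisfying one can preclude satisfying another.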

What It Takes to Build Democratic Institutions


Article by Daron Acemoglu: “Chile’s failure to draft a new constitution that enjoys widespread support from voters is the predictable result of allowing partisans and ideologues to lead the process. Democratic institutions are built by delivering what ordinary voters expect and demand from government, as the history of Nordic social democracy shows…

There are plenty of good models around to help both developing and industrialized countries build better democratic institutions. But with its abortive attempts to draft a new constitution, Chile is offering a lesson in what to avoid.

Though it is one of the richest countries in Latin America, Chile is still suffering from the legacy of General Augusto Pinochet’s brutal dictatorship and historic inequalities. The country has made some progress in building democratic institutions since the 1988 plebiscite that began the transition from authoritarianism, and education and social programs have reduced income inequality. But major problems remain. There are deep inequalities not just in income, but also in access to government services, high-quality educational resources, and labor-market opportunities. Moreover, Chile still has the constitution that Pinochet imposed in 1980.

Yet while it seems natural to start anew, Chile has gone about it the wrong way. Following a 2020 referendum that showed overwhelming support for drafting a new constitution, it entrusted the process to a convention of elected delegates. But only 43% of voters turned out for the 2021 election to fill the convention, and many of the candidates were from far-left circles with strong ideological commitments to draft a constitution that would crack down on business and establish myriad new rights for different communities. When the resulting document was put to a vote, 62% of Chileans rejected it…(More)”

What does it mean to trust a technology?


Article by Jack Stilgoe: “A survey published in October 2023 revealed what seemed to be a paradox. Over the past decade, self-driving vehicles have improved immeasurably, but public trust in the technology is low and falling. Only 37% of Americans said they would be comfortable riding in a self-driving vehicle, down from 39% in 2022 and 41% in 2021. Those who have used the technology express more enthusiasm, but the rest have seemingly had their confidence shaken by the failure of the technology to live up to its hype.

Purveyors and regulators of any new technology are likely to worry about public trust. In the short term, they worry that people won’t want to make use of new innovations. But they also worry that a public backlash might jeopardize not just a single company but a whole area of technological innovation. Excitement about artificial intelligence (AI) has been accompanied by a concern about the need to “build trust” in the technology. Trust—letting one’s guard down despite incomplete information—is vital, but innovators must not take it for granted. Nor can it be circumvented through clever engineering. When cryptocurrency enthusiasts call their technology “trustless” because they think it solves age-old problems of banking (an unavoidably imperfect social institution), we should at least view such claims with skepticism.

For those concerned about public trust and new technologies, social science has some important lessons. The first is that people trust people, not things. When we board an airplane or agree to get vaccinated, we are placing our trust not in these objects but in the institutions that govern them. We trust that professionals are well-trained; we trust that regulators have assessed the risks; we trust that, if something goes wrong, someone will be held accountable, harms will be compensated, and mistakes will be rectified. Societies can no longer rely on the face-to-face interactions that once allowed individuals to do business. So it is more important than ever that faceless institutions are designed and continuously monitored to realize the benefits of new technologies while mitigating the risks….(More)”.

How to craft fair, transparent data-sharing agreements


Article by Stephanie Kanowitz: “Data collaborations are critical to government decision-making, but actually sharing data can be difficult—not so much the mechanics of the collaboration as hashing out the rules and policies governing it. A new report offers three resources that will make data sharing more straightforward, foster accountability and build trust among the parties.

“We’ve heard over and over again that one of the biggest barriers to collaboration around data turns out to be data sharing agreements,” said Stefaan Verhulst, co-founder of the Governance Lab at New York University and an author of the November report, “Moving from Idea to Practice.” It’s sometimes a lot to ask stakeholders “to provide access to some of their data,” he said.

To help, Verhulst and other researchers identified three components of successful data-sharing agreements: conducting principled negotiations, establishing the elements of a data-sharing agreement and assessing readiness.

To address the first, the report breaks the components of negotiation into a framework with four tenets: separating people from the problem, focusing on interests rather than positions, identifying options and using objective criteria. From discussions with stakeholders in data-sharing agreement workshops that GovLab held through its Open Data Policy Lab, three principles emerged—fairness, transparency and reciprocity…(More)”.

Global Digital Data Governance: Polycentric Perspectives


(Open Access) Book edited by Carolina Aguerre, Malcolm Campbell-Verduyn, and Jan Aart Scholte: “This book provides a nuanced exploration of contemporary digital data governance, highlighting the importance of cooperation across sectors and disciplines in order to adapt to a rapidly evolving technological landscape. Most of the theory around global digital data governance remains scattered and focused on specific actors, norms, processes, or disciplinary approaches. This book argues for a polycentric approach, allowing readers to consider the issue across multiple disciplines and scales.

Polycentrism, this book argues, provides a set of lenses that tie together the variety of actors, issues, and processes intertwined in digital data governance at subnational, national, regional, and global levels. Firstly, this approach uncovers the complex array of power centers and connections in digital data governance. Secondly, polycentric perspectives bridge disciplinary divides, challenging assumptions and drawing together a growing range of insights about the complexities of digital data governance. Bringing together a wide range of case studies, this book draws out key insights and policy recommendations for how digital data governance occurs and how it might occur differently…(More)”.

The new star wars over satellites


Article by Peggy Hollinger: “There is a battle brewing in space. In one corner you have the billionaires building giant satellite broadband constellations in low earth orbit (LEO) — Elon Musk with SpaceX’s Starlink and Jeff Bezos with Project Kuiper. 

In the other corner stand the traditional fixed satellite operators such as ViaSat and SES — but also a number of nations increasingly uncomfortable with the way in which the new space economy is evolving. In other words, with the dominance of US mega constellations in a strategic region of space.

The first shots were fired in late November at the World Radiocommunications Conference in Dubai. Every four years, global regulators and industry meet to review international regulations on the use of radio spectrum. 

For those who have only a vague idea of what spectrum is, it is the name for the radio airwaves that carry data wirelessly to enable a vast range of services — from television broadcasting to WiFi, navigation to mobile communications.

Most people are inclined to think that the airwaves have infinite capacity to connect us. But, like water, spectrum is a finite resource and much of it has already been allocated to specific uses. So operators have to transmit signals on shared bands of spectrum — on the promise that their transmissions will not interfere with others. 

Now SpaceX, Kuiper and others operating in LEO are pushing to loosen rules designed to prevent their signals from interfering with those of traditional operators in higher orbits. These rules impose caps on the power used to transmit signals, which facilitate spectrum sharing but also constrain the amount of data operators can send. LEO operators say the rules, designed 25 years ago, are outdated. They argue that new technology would allow higher power levels—and greater capacity for customers—without degrading the networks of traditional fixed satellite systems operating in geostationary orbit, at altitudes of 36,000km.
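The link between transmit power and customer capacity that underpins this dispute follows from the Shannon–Hartley law: received signal power raises the signal-to-noise ratio, and capacity grows with its logarithm. A back-of-envelope sketch (the bandwidth and SNR figures below are hypothetical, not taken from the article):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon–Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth = 250e6  # Hz, a hypothetical transponder bandwidth

# Higher permitted transmit power -> higher SNR at the receiver -> more capacity,
# though the gain is logarithmic rather than proportional.
for snr_db in (5, 10, 15):
    snr = 10 ** (snr_db / 10)  # convert dB to a linear ratio
    capacity = shannon_capacity_bps(bandwidth, snr)
    print(f"SNR {snr_db:2d} dB -> capacity ~{capacity / 1e6:.0f} Mbit/s")
```

The logarithmic relationship is why power caps bite: the only other lever, bandwidth, is exactly the shared and finite resource the conference was arguing over.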

It is perhaps not a surprise that a proposal to make LEO constellations more competitive drew protests from geo operators. Some, such as US-based Hughes Network Systems, have admitted they are already losing customers to Starlink.

What was surprising, however, was the strong opposition from countries such as Brazil, Indonesia, Japan and others…(More)”.