Paper by Amanda Machin: “Stymied by preoccupation with short-term interests of individualist consumers, democratic institutions seem unable to generate sustained political commitment for tackling climate change. The citizens’ assembly (CA) is promoted as an important tool in combatting this “democratic myopia.” The aim of a CA is to bring together a representative group of citizens and experts from diverse backgrounds to exchange their different insights and perspectives on a complex issue. By providing the opportunity for inclusive democratic deliberation, the CA is expected to educate citizens, stimulate awareness of complex issues, and produce enlightened and legitimate policy recommendations. However, critical voices warn about the simplified and celebratory commentary surrounding the CA. Informed by agonistic and radical democratic theory, this paper elaborates on a particular concern, which is the orientation toward consensus in the CA. The paper points to the importance of disagreement in the form of both agony (from inside) and rupture (from outside) that, it is argued, is crucial for a democratic, engaging, passionate, creative, and representative sustainability politics…(More)”.
Urban AI Guide
Guide by Popelka, S., Narvaez Zertuche, L., Beroche, H.: “The idea for this guide arose from conversations with city leaders, who were confronted with new technologies, like artificial intelligence, as a means of solving complex urban problems, but who felt they lacked the background knowledge to properly engage with and evaluate the solutions. In some instances, this knowledge gap produced a barrier to project implementation or led to unintended project outcomes.
The guide begins with a literature review, presenting the state of the art in research on urban artificial intelligence. It then diagrams and describes an “urban AI anatomy,” outlining and explaining the components that make up an urban AI system. Insights from experts in the Urban AI community enrich this section, illuminating considerations involved in each component. Finally, the guide concludes with an in-depth examination of three case studies: water meter lifecycle in Winnipeg, Canada, curb digitization and planning in Los Angeles, USA, and air quality monitoring in Vilnius, Lithuania. Collectively, the case studies highlight the diversity of ways in which artificial intelligence can be operationalized in urban contexts, as well as the steps and requirements necessary to implement an urban AI project.
Since the field of urban AI is constantly evolving, we anticipate updating the guide annually. Please consider filling out the contribution form if you have an urban AI use case that has been operationalized. We may contact you to include the use case as a case study in a future edition of the guide.
As a continuation of the guide, we offer customized workshops on urban AI, oriented toward municipalities and other urban stakeholders who are interested in learning more about how artificial intelligence interacts with urban environments. Please contact us if you would like more information on this program…(More)”.
It’s Time to Rethink the Idea of “Indigenous”
Essay by Manvir Singh: “Identity evolves. Social categories shrink or expand, become stiffer or more elastic, more specific or more abstract. What it means to be white or Black, Indian or American, able-bodied or not shifts as we tussle over language, as new groups take on those labels and others strip them away.
On August 3, 1989, the Indigenous identity evolved. Moringe ole Parkipuny, a Maasai activist and a former member of the Tanzanian Parliament, spoke before the U.N. Working Group on Indigenous Populations, in Geneva—the first African ever to do so. “Our cultures and way of life are viewed as outmoded, inimical to national pride, and a hindrance to progress,” he said. As a result, pastoralists like the Maasai, along with hunter-gatherers, “suffer from common problems which characterize the plight of indigenous peoples throughout the world. The most fundamental rights to maintain our specific cultural identity and the land that constitutes the foundation of our existence as a people are not respected by the state and fellow citizens who belong to the mainstream population.”
Parkipuny’s speech was the culmination of an astonishing ascent. Born in a remote village near Tanzania’s Rift Valley, he attended school after British authorities demanded that each family “contribute” a son to be educated. His grandfather urged him to flunk out, but he refused. “I already had a sense of how Maasai were being treated,” he told the anthropologist Dorothy Hodgson in 2005. “I decided I must go on.” He eventually earned an M.A. in development studies from the University of Dar es Salaam.
In his master’s thesis, Parkipuny condemned the Masai Range Project, a twenty-million-dollar scheme funded by the U.S. Agency for International Development to boost livestock productivity. Naturally, then, U.S.A.I.D. was resistant when the Tanzanian government hired him to join the project. In the end, he was sent to the United States to learn about “proper ranches.” He travelled around until, one day, a Navajo man invited him to visit the Navajo Nation, the reservation in the Southwest.
“I stayed with them for two weeks, and then with the Hopi for two weeks,” he told Hodgson. “It was my first introduction to the indigenous world. I was struck by the similarities of our problems.” The disrepair of the roads reminded him of the poor condition of cattle trails in Maasailand…
By the time Parkipuny showed up in Geneva, the concept of “indigenous” had already undergone major transformations. The word—from the Latin indigena, meaning “native” or “sprung from the land”—has been used in English since at least 1588, when a diplomat referred to Samoyed peoples in Siberia as “Indigenæ, or people bred upon that very soyle.” Like “native,” “indigenous” was used not just for people but for flora and fauna as well, suffusing the term with an air of wildness and detaching it from history and civilization. The racial flavor intensified during the colonial period until, again like “native,” “indigenous” served as a partition, distinguishing white settlers—and, in many cases, their slaves—from the non-Europeans who occupied lands before them….When Parkipuny showed up in Geneva, activists were consciously remodelling indigeneity to encompass marginalized peoples worldwide, including, with Parkipuny’s help, in Africa.
Today, nearly half a billion people qualify as Indigenous…(More)”.
Democracy Report 2023: Defiance in the Face of Autocratization
New report by Varieties of Democracy (V-Dem): “…the largest global dataset on democracy with over 31 million data points for 202 countries from 1789 to 2022. Involving almost 4,000 scholars and other country experts, V-Dem measures hundreds of different attributes of democracy. V-Dem enables new ways to study the nature, causes, and consequences of democracy embracing its multiple meanings. The first section of the report shows global levels of democracy sliding back and advances made over the past 35 years diminishing. Most of the drastic changes have taken place within the last ten years, while there are large regional variations in relation to the levels of democracy people experience. The second section offers analyses on the geographies and population sizes of democratizing and autocratizing countries. In the third section we focus on the countries undergoing autocratization, and on the indicators deteriorating the most, including in relation to media censorship, repression of civil society organizations, and academic freedom. While disinformation, polarization, and autocratization reinforce each other, democracies reduce the spread of disinformation. This is a sign of hope, of better times ahead. And this is precisely the message carried forward in the fourth section, where we switch our focus to examples of countries that managed to push back and where democracy resurfaces again. Scattered over the world, these success stories share common elements that may bear implications for international democracy support and protection efforts. The final section of this year’s report offers a new perspective on shifting global balances of economic and trade power as a result of autocratization…(More)”.
When Ideology Drives Social Science
Article by Michael Jindra and Arthur Sakamoto: “Last summer in these pages, Mordechai Levy-Eichel and Daniel Scheinerman uncovered a major flaw in Richard Jean So’s Redlining Culture: A Data History of Racial Inequality and Postwar Fiction, one that rendered the book’s conclusion null and void. Unfortunately, what they found was not an isolated incident. In complex areas like the study of racial inequality, a fundamentalism has taken hold that discourages sound methodology and the use of reliable evidence about the roots of social problems.
We are not talking about mere differences in interpretation of results, which are common. We are talking about mistakes so clear that they should cause research to be seriously questioned or even disregarded. A great deal of research — we will focus on examinations of Asian American class mobility — rigs its statistical methods in order to arrive at ideologically preferred conclusions.
Most sophisticated quantitative work in sociology involves multivariate research, often in a search for causes of social problems. This work might ask how a particular independent variable (e.g., education level) “causes” an outcome or dependent variable (e.g., income). Or it could study the reverse: How does parental income influence children’s education?
Human behavior is too complicated to be explained by only one variable, so social scientists typically try to “control” for various causes simultaneously. If you are trying to test for a particular cause, you want to isolate that cause and hold all other possible causes constant. One can control for a given variable using what is called multiple regression, a statistical tool that parcels out the separate net effects of several variables simultaneously.
If you want to determine whether income causes better education outcomes, for instance, you’d want to compare only people from two-parent families, since family status might be another causal factor. You’d also want to isolate the effect of family status by comparing people with similar incomes. And so on for other variables.
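The multiple-regression setup described here can be sketched in a few lines of Python; the data, variable names, and coefficients below are synthetic illustrations, not figures from the research being discussed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: years of education, two-parent household (0/1), and income.
# The generating coefficients are arbitrary and purely illustrative.
rng = np.random.default_rng(0)
n = 1_000
education = rng.normal(14, 2, n)
two_parent = rng.integers(0, 2, n)
income = 20_000 + 3_000 * education + 8_000 * two_parent + rng.normal(0, 10_000, n)

df = pd.DataFrame({"income": income, "education": education, "two_parent": two_parent})

# Multiple regression parcels out the separate net effects: the coefficient on
# education is its estimated effect on income holding family status constant,
# and vice versa.
model = smf.ols("income ~ education + two_parent", data=df).fit()
print(model.params)
```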
The problem is that there are potentially so many variables that a researcher inevitably leaves some out…(More)”.
The False Promise of ChatGPT
Article by Noam Chomsky: “…OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach…(More)”.
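As a toy illustration of what “generating statistically probable outputs” means at its simplest, here is a deliberately crude bigram sketch; it bears no resemblance in scale or architecture to the neural networks behind ChatGPT, but it shows the underlying principle of learning patterns from data and sampling likely continuations.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in a tiny corpus,
# then generate text by repeatedly sampling an observed continuation.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample a statistically probable next word
    return " ".join(words)

print(generate("the"))
```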
Nudging: A Tool to Influence Human Behavior in Health Policy
Book by František Ochrana and Radek Kovács: “Behavioral economics sees “nudges” as ways to encourage people to re-evaluate their priorities in such a way that they voluntarily change their behavior, leading to personal and social benefits. This book examines nudging as a tool for influencing human behavior in health policy. The authors investigate the contemporary scientific discourse on nudging and enrich it with an ontological, epistemological, and praxeological analysis of human behavior. Based on analyses of the literature and a systematic review, the book defines nudging tools within the paradigm of prospect theory. In addition to the theoretical contribution, Nudging also examines and offers suggestions on the practice of health policy regarding obesity, malnutrition, and especially type 2 diabetes mellitus…(More)”.
The Future of Compute
Independent Review by a UK Expert Panel: “…Compute is a material part of modern life. It is among the critical technologies lying behind innovation, economic growth and scientific discoveries. Compute improves our everyday lives. It underpins all the tools, services and information we hold on our handheld devices – from search engines and social media, to streaming services and accurate weather forecasts. This technology may be invisible to the public, but life today would be very different without it.
Sectors across the UK economy, both new and old, are increasingly reliant upon compute. By leveraging the capability that compute provides, businesses of all sizes can extract value from the enormous quantity of data created every day; reduce the cost and time required for research and development (R&D); improve product design; accelerate decision making processes; and increase overall efficiency. Compute also enables advancements in transformative technologies, such as AI, which themselves lead to the creation of value and innovation across the economy. This all translates into higher productivity and profitability for businesses and robust economic growth for the UK as a whole.
Compute powers modelling, simulations, data analysis and scenario planning, and thereby enables researchers to develop new drugs; find new energy sources; discover new materials; mitigate the effects of climate change; and model the spread of pandemics. Compute is required to tackle many of today’s global challenges and brings invaluable benefits to our society.
Compute’s effects on society and the economy have already been and, crucially, will continue to be transformative. The scale of compute capabilities keeps accelerating at pace. The performance of the world’s fastest compute has grown by a factor of 626 since 2010. The compute requirements of the largest machine learning models have grown 10 billion times over the last 10 years. We expect compute demand to grow significantly as compute capability continues to increase. Technology today operates very differently to 10 years ago and, in a decade’s time, it will have changed once again.
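As a back-of-the-envelope check on the pace those figures imply, the sketch below converts them into rough annual growth rates; the time windows (2010 to 2022 for the fastest systems, a flat ten years for the largest models) are assumptions made for this illustration rather than figures stated in the review.

```python
# Convert headline growth factors into implied annual growth rates.
fastest_growth_factor = 626          # world's fastest compute since 2010
fastest_years = 2022 - 2010          # assumed 12-year window
ml_growth_factor = 10_000_000_000    # largest ML models over the last 10 years
ml_years = 10

fastest_annual = fastest_growth_factor ** (1 / fastest_years)
ml_annual = ml_growth_factor ** (1 / ml_years)

print(f"Fastest systems: ~{fastest_annual:.2f}x per year")  # roughly 1.7x
print(f"Largest ML models: ~{ml_annual:.1f}x per year")     # 10x under these assumptions
```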
Yet, despite compute’s value to the economy and society, the UK lacks a long-term vision for compute…(More)”.
Access to Data for Environmental Purposes: Setting the Scene and Evaluating Recent Changes in EU Data Law
Paper by Michèle Finck and Marie-Sophie Mueller: “Few policy issues will be as defining to the EU’s future as its reaction to environmental decline, on the one hand, and digitalisation, on the other. Whereas the former will shape the (quality of) life and health of humans, animals and plants, the latter will define the future competitiveness of the internal market and, relatedly, also societal justice and cohesion. Yet, to date, the interconnections between these issues are rarely made explicit, as evidenced by the European Commission’s current policy agendas on both matters. With this article, we hope to contribute to, ideally, a soon-growing conversation about how to effectively bridge environmental protection and digitalisation. Specifically, we examine how EU law shapes the options of using data—the lifeblood of the digital economy—for environmental sustainability purposes, and ponder the impact of ongoing legislative reform…(More)”.
Suspicion Machines
Lighthouse Reports: “Governments all over the world are experimenting with predictive algorithms in ways that are largely invisible to the public. What limited reporting there has been on this topic has largely focused on predictive policing and risk assessments in criminal justice systems. But there is an area where even more far-reaching experiments are underway on vulnerable populations with almost no scrutiny.
Fraud detection systems, ranging from complex machine learning models to crude spreadsheets, are widely deployed in welfare states. The scores they generate have potentially life-changing consequences for millions of people. Until now, public authorities have typically resisted calls for transparency, either claiming that disclosure would increase the risk of fraud or citing the need to protect proprietary technology.
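To make concrete what a “score” looks like in such a system, here is a minimal, entirely hypothetical sketch of one way a risk-scoring model can be built; the features, training data, and threshold are invented for illustration and do not describe any real deployment or the specific systems investigated here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic training data: each row is a case described by a few
# hypothetical numeric features; labels mark cases previously flagged as fraud.
rng = np.random.default_rng(42)
X_train = rng.normal(size=(500, 3))  # e.g., benefit duration, income volatility, age (hypothetical)
y_train = rng.integers(0, 2, 500)    # random labels, purely for illustration

model = LogisticRegression().fit(X_train, y_train)

# Each new case receives a score between 0 and 1; cases above an arbitrary
# threshold would be forwarded to fraud investigators.
new_cases = rng.normal(size=(5, 3))
scores = model.predict_proba(new_cases)[:, 1]
flagged = scores > 0.5
print(scores, flagged)
```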
The sales pitch for these systems promises that they will recover millions of euros defrauded from the public purse. The caricature of the benefit cheat is a modern take on the classic trope of the undeserving poor, and much of the public debate in Europe — which has the most generous welfare states — is intensely politically charged.
The true extent of welfare fraud is routinely exaggerated by consulting firms, who are often also the algorithm vendors, talking it up to nearly 5 percent of benefits spending, while some national auditors’ offices estimate it at between 0.2 and 0.4 percent of spending. Distinguishing between honest mistakes and deliberate fraud in complex public systems is messy and hard.
When opaque technologies are deployed in search of political scapegoats, the potential for harm among some of the poorest and most marginalised communities is significant.
Hundreds of thousands of people are being scored by these systems based on data mining operations where there has been scant public consultation. The consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out…(More)”.