AI-Powered Urban Innovations Bring Promise, Risk to Future Cities


Article by Anthony Townsend and Hubert Beroche: “Red lights are obsolete. That seems to be the thinking behind Google’s latest fix for cities, which rolled out late last year in a dozen cities around the world, from Seattle to Jakarta. Most cities still collect the data to determine the timing of traffic signals by hand. But Project Green Light replaced clickers and clipboards with mountains of location data culled from smartphones. Artificial intelligence crunched the numbers, adjusting the signal pattern to smooth the flow of traffic. Motorists saw 30% fewer delays. There’s just one catch. Even as pedestrian deaths in the US reached a 40-year high in 2022, Google engineers omitted pedestrians and cyclists from their calculations.

Google’s oversight threatens to undo a decade of progress on safe streets and is a timely reminder of the risks in store when AI invades the city. Mayors across global cities have embraced Vision Zero pledges to eliminate pedestrian deaths. They are trying to slow traffic down, not speed it up. But Project Green Light’s website doesn’t even mention road safety. Still, the search giant’s experiment demonstrates AI’s potential to help cities. Tailpipe greenhouse gas emissions at intersections fell by 10%. Imagine what AI could do if we used it to empower people in cities rather than ignore them.

Take the technocratic task of urban planning and the many barriers to participation it creates. The same technology that powers chatbots and deepfakes is rapidly bringing down those barriers. Real estate developers have mastered the art of using glossy renderings to shape public opinion. But UrbanistAI, a tool developed by Helsinki-based startup SPIN Unit and the Milanese software company Toretei, puts that power in the hands of residents: It uses generative AI to transform text prompts into photorealistic images of alternative designs for controversial projects. Another startup, the Barcelona-based Aino, wraps a chatbot around a mapping tool. Using such computer aids, neighborhood activists no longer need to hire a data scientist to produce maps from census data to make their case…(More)”.

Making Sense of Citizens’ Input through Artificial Intelligence: A Review of Methods for Computational Text Analysis to Support the Evaluation of Contributions in Public Participation


Paper by Julia Romberg and Tobias Escher: “Public sector institutions that consult citizens to inform decision-making face the challenge of evaluating the contributions made by citizens. This evaluation has important democratic implications but at the same time consumes substantial human resources. However, until now, the use of artificial intelligence such as computer-supported text analysis has remained an under-studied solution to this problem. We identify three generic tasks in the evaluation process that could benefit from natural language processing (NLP). Based on a systematic literature search in two databases on computational linguistics and digital government, we provide a detailed review of existing methods and their performance. While some promising approaches exist, for instance to group data thematically and to detect arguments and opinions, we show that important challenges remain before these could offer any reliable support in practice. These include the quality of results, the applicability to non-English-language corpora and the availability of algorithmic models to practitioners through software. We discuss a number of avenues that future research should pursue to ultimately arrive at solutions for practice. The most promising of these bring in the expertise of human evaluators, for example through active learning approaches or interactive topic modeling…(More)”.
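
The abstract stays at the level of tasks, but a minimal sketch of one of them, grouping contributions thematically, might look like the following. The sample comments, the choice of three clusters and the use of TF-IDF with k-means in scikit-learn are illustrative assumptions for this digest, not the pipeline the authors evaluate.

```python
# Minimal sketch: thematic grouping of citizen contributions with TF-IDF + k-means.
# The comments, the cluster count and the library choices are illustrative
# assumptions, not the methods reviewed in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

contributions = [
    "The new bike lane on Main Street should be protected from car traffic.",
    "Cyclists need safer crossings at the Main Street intersection.",
    "Please add more benches and shade trees to the riverside park.",
    "The park playground is outdated and needs new equipment.",
    "Bus frequency on line 12 should be increased during rush hour.",
    "Evening bus service is too infrequent for shift workers.",
]

# Represent each contribution as a TF-IDF vector (English stop words removed).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(contributions)

# Group the contributions into three rough themes (cycling, parks, public transport).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for cluster_id in sorted(set(labels)):
    print(f"Theme {cluster_id}:")
    for text, label in zip(contributions, labels):
        if label == cluster_id:
            print("  -", text)
```

As the abstract notes, approaches of this general kind still face challenges around result quality, non-English-language corpora and usable software before they can reliably support real consultations.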

Artificial Intelligence: A Threat to Climate Change, Energy Usage and Disinformation


Press Release: “Today, partners in the Climate Action Against Disinformation coalition released a report that maps the risks that artificial intelligence poses to the climate crisis.

Topline points:

  • AI systems require an enormous amount of energy and water, and consumption is expanding quickly. Estimates suggest a doubling in 5-10 years.
  • Generative AI has the potential to turbocharge climate disinformation, including climate change-related deepfakes, ahead of a historic election year where climate policy will be central to the debate. 
  • The current AI policy landscape reveals a concerning lack of regulation at the federal level, with only minor progress at the state level, relying instead on companies’ voluntary, opaque and unenforceable pledges to pause development or to ensure the safety of their products…(More)”.

The Dark World of Citation Cartels


Article by Domingo Docampo: “In the complex landscape of modern academe, the maxim “publish or perish” has been gradually evolving into a different mantra: “Get cited or your career gets blighted.” Citations are the new academic currency, and careers now firmly depend on this form of scholarly recognition. In fact, citation has become so important that it has driven a novel form of trickery: stealth networks designed to manipulate citations. Researchers, driven by the imperative to secure academic impact, resort to forming citation rings: collaborative circles engineered to artificially boost the visibility of their work. In doing so, they compromise the integrity of academic discourse and undermine the foundation of scholarly pursuit. The story of the modern “citation cartel” is not just a result of publication pressure. The rise of the mega-journal also plays a role, as do predatory journals and institutional efforts to thrive in global academic rankings.

Over the past decade, the landscape of academic research has been significantly altered by the sheer number of scholars engaging in scientific endeavors. The number of scholars contributing to indexed publications in mathematics has doubled, for instance. In response to the heightened demand for space in scientific publications, a new breed of publishing entrepreneur has seized the opportunity, and the result is the rise of mega-journals that publish thousands of articles annually. Mathematics, an open-access journal produced by the Multidisciplinary Digital Publishing Institute, published more than 4,763 articles in 2023, making up 9.3 percent of all publications in the field, according to the Web of Science. It has an impact factor of 2.4 and an article-influence measure of just 0.37, but, crucially, it is indexed with Clarivate’s Web of Science, Elsevier’s Scopus, and other indexers, which means its citations count toward a variety of professional metrics. (By contrast, the Annals of Mathematics, published by Princeton University, contained 22 articles last year, and has an impact factor of 4.9 and an article-influence measure of 8.3.)…(More)”.
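
For context, the two-year impact factor cited for both journals is, roughly, a citations-per-article ratio computed over a trailing two-year window:

$$
\mathrm{IF}_{2023} \;=\; \frac{\text{citations received in 2023 to items published in 2021 and 2022}}{\text{citable items published in 2021 and 2022}}
$$

A mega-journal publishing thousands of articles can therefore post a modest but respectable ratio, while the article-influence measure, which is normalized so that an average article scores roughly 1.00, tells a very different story about per-article weight.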

The Judicial Data Collaborative


About: “We enable collaborations between researchers, technical experts, practitioners and organisations to create a shared vocabulary, standards and protocols for open judicial data sets, shared infrastructure and resources to host and explain available judicial data.

The objective is to drive and sustain advocacy on the quality and limitations of Indian judicial data and engage the judicial data community to enable cross-learning among various projects…

Accessibility and understanding of judicial data are essential to making courts and tribunals more transparent, accountable and easy to navigate for litigants. In recent years, eCourts services and various court and tribunal websites have made a large volume of data about cases available. This has expanded the window into judicial functioning and enabled more empirical research on the role of courts in the protection of citizens’ rights. Such research can also help busy courts understand patterns of litigation and practice, and can support engagement with stakeholders across disciplines to improve the functioning of courts.

Some pioneering initiatives in the judicial data landscape include research such as DAKSH’s database; annual India Justice Reports; and studies of court functioning during the pandemic and of the quality of eCourts data; open datasets including Development Data Lab’s Judicial Data Portal containing District & Taluka court cases (2010-2018) and platforms that collect them, such as Justice Hub; and interactive databases such as the Vidhi JALDI Constitution Bench Pendency Project…(More)”.

Once upon a bureaucrat: Exploring the role of stories in government


Article by Thea Snow: “When you think of a profession associated with stories, what comes to mind? Journalist, perhaps? Or author? Maybe, at a stretch, you might think about a filmmaker. But I would hazard a guess that “public servant” is unlikely to be among the first professions that come to mind. However, recent research suggests that we should be thinking more deeply about the connections between stories and government.

Since 2021, the Centre for Public Impact, in partnership with Dusseldorp Forum and Hands Up Mallee, has been exploring the role of storytelling in the context of place-based systems change work. Our first report, Storytelling for Systems Change: Insights from the Field, focused on the way communities use stories to support place-based change. Our second report, Storytelling for Systems Change: Listening to Understand, focused more on how stories are perceived and used by those in government who are funding and supporting community-led systems change initiatives.

To shape these reports, we have spent the past few years speaking to community members, collective impact backbone teams, storytelling experts, academics, public servants, data analysts, and more. Here’s some of what we’ve heard…(More)”.

Understanding and Measuring Hype Around Emergent Technologies


Article by Swaptik Chowdhury and Timothy Marler: “Inaccurate or excessive hype surrounding emerging technologies can have several negative effects, including poor decisionmaking by both private companies and the U.S. government. The United States needs a comprehensive approach to understanding and assessing public discourse–driven hype surrounding emerging technologies, but current methods for measuring technology hype are insufficient for developing policies to manage it. The authors of this paper describe an approach to analyzing technology hype…(More)”.

Evidence for policy-makers: A matter of timing and certainty?


Article by Wouter Lammers et al: “This article investigates how certainty and timing of evidence introduction impact the uptake of evidence by policy-makers in collective deliberations. Little is known about how experts or researchers should time the introduction of uncertain evidence for policy-makers. With a computational model based on the Hegselmann–Krause opinion dynamics model, we simulate how policy-makers update their opinions in light of new evidence. We illustrate the use of our model with two examples in which timing and certainty matter for policy-making: intelligence analysts scouting potential terrorist activity and food safety inspections of chicken meat. Our computations indicate that evidence should come early to convince policy-makers, regardless of how certain it is. Even if the evidence is quite certain, it will not convince all policy-makers. Next to its substantive contribution, the article also showcases the methodological innovation that agent-based models can bring for a better understanding of the science–policy nexus. The model can be endlessly adapted to generate hypotheses and simulate interactions that cannot be empirically tested…(More)”.
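
The Hegselmann–Krause model at the core of these simulations is easy to state: each agent holds an opinion on a [0, 1] scale and, at every step, moves to the average of all opinions that lie within a fixed confidence bound. The sketch below is a minimal version of that dynamic; the parameter values and the way evidence is injected (as a fixed-value source that agents fold into their average once it arrives) are illustrative assumptions, not the authors’ exact specification.

```python
# Minimal sketch of Hegselmann-Krause bounded-confidence dynamics with a simple
# "evidence" source introduced at a chosen time step. The injection mechanism
# (a fixed-position source with a fixed weight) is an illustrative assumption,
# not the specific extension used by Lammers et al.
import numpy as np

rng = np.random.default_rng(seed=0)


def hk_step(opinions, epsilon, evidence=None, evidence_weight=1.0):
    """One synchronous Hegselmann-Krause update.

    Each agent moves to the (weighted) mean of all opinions within its
    confidence bound epsilon; the evidence source, if present and within
    the bound, is averaged in as an extra opinion with the given weight.
    """
    updated = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        neighbours = opinions[np.abs(opinions - x) <= epsilon]
        total, weight = neighbours.sum(), float(len(neighbours))
        if evidence is not None and abs(evidence - x) <= epsilon:
            total += evidence_weight * evidence
            weight += evidence_weight
        updated[i] = total / weight
    return updated


# 50 policy-makers with uniformly random opinions; evidence (value 0.9) arrives at step 5.
opinions = rng.uniform(0.0, 1.0, size=50)
epsilon, evidence_value, evidence_step = 0.25, 0.9, 5

for t in range(30):
    evidence = evidence_value if t >= evidence_step else None
    opinions = hk_step(opinions, epsilon, evidence)

print("final opinion clusters:", np.unique(np.round(opinions, 2)))
```

Even in this toy version, the intuition behind the article’s finding is visible in the update rule itself: once a cluster of agents settles more than epsilon away from the evidence value, nothing can pull it back, so evidence that arrives after opinions have hardened convinces fewer policy-makers than evidence that arrives early.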

A World Divided Over Artificial Intelligence


Article by Aziz Huq: “…Through multinational communiqués and bilateral talks, an international framework for regulating AI does seem to be coalescing. Take a close look at U.S. President Joe Biden’s October 2023 executive order on AI; the EU’s AI Act, which passed the European Parliament in December 2023 and will likely be finalized later this year; or China’s slate of recent regulations on the topic, and a surprising degree of convergence appears: these regimes broadly share the goal of preventing AI’s misuse without restraining innovation in the process. Optimists have floated proposals for closer international management of AI, such as the ideas presented in Foreign Affairs by the geopolitical analyst Ian Bremmer and the entrepreneur Mustafa Suleyman, and the plan offered by Suleyman and Eric Schmidt, the former CEO of Google, in the Financial Times, in which they called for the creation of an international panel akin to the UN’s Intergovernmental Panel on Climate Change to “inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming.”

But these ambitious plans to forge a new global governance regime for AI may collide with an unfortunate obstacle: cold reality. The great powers, namely China, the United States, and the EU, may insist publicly that they want to cooperate on regulating AI, but their actions point toward a future of fragmentation and competition. Divergent legal regimes are emerging that will frustrate any cooperation when it comes to access to semiconductors, the setting of technical standards, and the regulation of data and algorithms. This path doesn’t lead to a coherent, contiguous global space for uniform AI-related rules but to a divided landscape of warring regulatory blocs—a world in which the lofty idea that AI can be harnessed for the common good is dashed on the rocks of geopolitical tensions…(More)”.

The Limits of Data


Essay by C. Thi Nguyen: “…Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.

Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”

Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least for those that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?…(More)”.