Article by Aaron Sankin and Surya Mattu: “A software company sold a New Jersey police department an algorithm that was right less than 1% of the time
Crime predictions generated for the police department in Plainfield, New Jersey, rarely lined up with reported crimes, an analysis by The Markup has found, adding new context to the debate over the efficacy of crime prediction software.
Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.
We examined 23,631 predictions generated by Geolitica between Feb. 25 and Dec. 18, 2018, for the Plainfield Police Department (PD). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield PD. In the end, the success rate was less than half a percent: fewer than 100 of the predictions lined up with a crime in the predicted category that was also later reported to police.
Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.
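To make the arithmetic behind these percentages concrete, here is a minimal sketch of how such a hit rate can be computed. It is not The Markup’s actual code; the file names, column names, and matching rule (same day, same location box, same crime type) are assumptions for illustration only.

```python
# Hypothetical sketch of a prediction "hit rate" calculation, not The Markup's code.
# Assumed inputs: one CSV of predictions and one of reported crimes, each with
# date, location_box, and crime_type columns.
import pandas as pd

predictions = pd.read_csv("predictions.csv")   # assumed columns: date, location_box, crime_type
crimes = pd.read_csv("reported_crimes.csv")    # assumed columns: date, location_box, crime_type

# A prediction counts as a "hit" if a crime of the predicted type was reported
# in the predicted location box on the predicted day.
keys = ["date", "location_box", "crime_type"]
merged = predictions.merge(
    crimes[keys].drop_duplicates(),  # dedupe so each prediction matches at most once
    on=keys,
    how="left",
    indicator=True,
)
hits = int((merged["_merge"] == "both").sum())
print(f"{hits} of {len(predictions)} predictions matched a reported crime "
      f"({hits / len(predictions):.2%})")

# The figures reported above imply roughly this order of magnitude:
# fewer than 100 hits out of 23,631 predictions, i.e. under 100 / 23631 ≈ 0.42%.
```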
“Why did we get PredPol? I guess we wanted to be more effective when it came to reducing crime. And having a prediction where we should be would help us to do that. I don’t know that it did that,” said Captain David Guarino of the Plainfield PD. “I don’t believe we really used it that often, if at all. That’s why we ended up getting rid of it.”…(More)”.
Blog by Sara Marcucci and Stefaan Verhulst: “…Migration is a dynamic phenomenon influenced by a variety of factors. As migration policies strive to keep pace with an ever-changing landscape, anticipating trends becomes increasingly pertinent. Traditionally, in the realm of anticipatory methods, a clear demarcation existed between foresight and forecast.
Forecasting predominantly relies on quantitative techniques to predict future trends, using historical data, mathematical models, and statistical analyses to produce numerical predictions for the short to medium term, with the aim of supporting expedited policymaking, resource allocation, and logistical planning.
Foresight methodologies have conventionally leaned on qualitative insights to explore future possibilities, employing expert judgment, scenario planning, and holistic exploration to envision potential future scenarios. This qualitative approach is characterized by a longer-term perspective, seeking to explore a broad spectrum of potential futures.
More recently, this once-clear distinction between quantitative forecasting and qualitative foresight has begun to blur. New methodologies that embrace a mixed-method approach are emerging, challenging traditional paradigms and offering new pathways for understanding complex phenomena. Despite this evolution and the growing interest in these novel approaches, there currently exists no comprehensive taxonomy to guide practitioners in selecting the most appropriate method for a given objective. Moreover, given the current state of the art, there is a need for primers delving into these modern methodologies, filling a gap in the knowledge and resources that practitioners can leverage to enhance their forecasting and foresight endeavors…(More)”.
Article by Claudette Salinas Leyva et al: “Many of our institutions are focused on the short term. Whether corporations, government bodies, or even nonprofits, they tend to prioritize immediate returns and discount long-term value and sustainability. This myopia is behind planetary crises such as climate change and biodiversity loss and contributes to decision-making that harms the wellbeing of communities.
Policymakers worldwide are beginning to recognize the importance of governing for the long term. The United Nations is currently developing a Declaration on Future Generations to codify this approach. This collection of case studies profiles community-level institutions rooted in Indigenous traditions that focus on governing for the long term and preserving the interests of future generations…(More)”.
Article by Stefaan G. Verhulst: “We live at a moment of perhaps unprecedented global upheaval. From climate change to pandemics, from war to political disharmony, misinformation, and growing social inequality, policy and social change-makers today face not only new challenges but new types of challenges. In our increasingly complex and interconnected world, existing systems and institutions of governance, marked by hierarchical decision-making, are increasingly being replaced by overlapping nodes of multi-sector decision-making.
Data is proving critical to these new forms of decision-making, along with associated (and emerging) phenomena such as advanced analytics, machine learning, and artificial intelligence. Yet while the importance of data intelligence for policymakers is now widely recognized, there remain multiple challenges to operationalizing that insight, i.e., to moving from data intelligence to decision intelligence.
In what follows, we explain what we mean by decision intelligence, and discuss why it matters. We then present six obstacles to better decision intelligence: challenges that prevent policymakers and others from translating insights into action. Finally, we end by offering one possible solution to these challenges: the concept of decision accelerator labs, operating on a hub-and-spoke model, and offering an innovative, interdisciplinary platform to facilitate the development of evidence-based, targeted solutions to public problems and dilemmas…(More)”.
Article by Sara Marcucci, Stefaan Verhulst, María Esther Cervantes, Elena Wüllhorst: “This blog is the first in a series that will be published weekly, dedicated to exploring innovative anticipatory methods for migration policy. Over the coming weeks, we will delve into various aspects of these methods, examining their value, challenges, taxonomy, and practical applications.
This first blog serves as an exploration of the value proposition and challenges inherent in innovative anticipatory methods for migration policy. We delve into the various reasons why these methods hold promise for informing more resilient and proactive migration policies. These reasons include evidence-based policy development, enabling policymakers to ground their decisions in empirical evidence and future projections. Decision-takers, users, and practitioners can benefit from anticipatory methods for policy evaluation and adaptation, resource allocation, the identification of root causes, and the facilitation of humanitarian aid through early warning systems. However, it’s vital to acknowledge the challenges associated with the adoption and implementation of these methods, ranging from conceptual concerns such as fossilization, unfalsifiability, and the legitimacy of preemptive intervention, to practical issues like interdisciplinary collaboration, data availability and quality, capacity building, and stakeholder engagement. As we navigate through these complexities, we aim to shed light on the potential and limitations of anticipatory methods in the context of migration policy, setting the stage for deeper explorations in the coming blogs of this series…(More)”.
Blog by Darrel Ronald: “The definition of urban digital twins is too vague, so it is important to create a clearer picture of the types of urban digital twins that are available. Not all digital twins are the same, and each one comes with features and capabilities, strengths and weaknesses, as well as appropriate and inappropriate use cases….
As shown in my Urban Digital Twin Taxonomy above, I propose that we classify these products first by their Main Functionality (the Use Case), then by their Technology Platform. I highlight some of the main products within the different categories and their product scope. Next, I detail the different types of twins and offer brief strengths and weaknesses for each type. This taxonomy could apply to other industries such as architecture or manufacturing, but it is specifically applied to cities and urban development projects.
Article by Stefaan G. Verhulst and Artur Kluz: “Technology has always played a crucial role in human history, both in winning wars and building peace. Even Leonardo da Vinci, the genius of the Renaissance, in his 1482 letter to Ludovico Il Moro Sforza, Duke of Milan, promised to invent new technological warfare for attack or defense. While serving top military and political leaders, he was working on technological advancements that could potentially have a significant impact on geopolitics.
Today, we are living in exceptional times, where disruptive technologies such as AI, space-based technologies, quantum computing, and many others are leading to the reimagination of everything around us and transforming our lives, state interactions in the global arena, and wars. The next great industrial revolution may well be occurring over 250 miles above us in outer space and putting our world into a new perspective. This is not just a technological transformation; this is a social and human transformation.
Perhaps to a greater extent than ever since World War II, recent news has been dominated by talk of war, as well as the destructive power of AI for human existence. The headlines are of missiles and offensives in Ukraine, of possible — and catastrophic — conflict over Taiwan, and of AI as humanity’s biggest existential threat.
A critical difference between this era and earlier times of conflict is the potential role of technology for peace. Along with traditional weaponry and armaments, it is clear that new space, data, and various other information and communication technologies will play an increasingly prominent role in 21st-century conflicts, especially when combined.
Much of the discussion today focuses on the potential offensive capabilities of technology. In a recent report titled “Seven Critical Technologies for Winning the Next War”, CSIS highlighted that “the next war will be fought on a high-tech battlefield….The consequences of failure on any of these technologies are tremendous — they could make the difference between victory and defeat.”
However, in the following discussion, we shift our focus to a distinctly different aspect of technology — its potential to cultivate peace and prevent conflicts. We present seven forms of PeaceTech, which encompass technologies that can actively avert or alleviate conflicts. These technologies are part of a broader range of innovations that contribute to the greater good of society and foster the overall well-being of humanity.
The application of frontier technologies can have rapid, broad, and impactful effects in building peace. From preventing military conflicts and disinformation, connecting people, facilitating dialogue, delivering humanitarian aid by drone, and resolving water-access conflicts, to using satellite imagery to monitor human rights violations and peacekeeping efforts, technology has demonstrated a strong footprint in building peace.
One important caveat is in order: readers may note the absence of data in the list below. We have chosen to include data as a cross-cutting category that applies across the seven technologies. This points to the ubiquity of data in today’s digital ecology. In an era of rapid datafication, data can no longer be classified as a single technology, but rather as an asset or tool embedded within virtually every other technology. (See our writings on the role of data for peace here)…(More)”.
Blog by Bloomberg Cities Network: “Data is more central than ever to improving service delivery, managing performance, and identifying opportunities that better serve residents. That’s why a growing number of cities are adding a new tool to their arsenal—the citywide data strategy—to provide teams with a holistic view of data efforts and then lay out a roadmap for scaling successful approaches throughout city hall.
These comprehensive strategies are increasingly “critical to help mayors reach their visions,” according to Amy Edwards Holmes, executive director of the Bloomberg Center for Government Excellence at Johns Hopkins University, which is helping dozens of cities across the Americas up their data game as part of the Bloomberg Philanthropies City Data Alliance (CDA).
Bloomberg Cities spoke with experts in the field and leaders in pioneering cities to learn more about the importance of citywide data strategies and how they can help:
Turn “pockets of promise” into citywide strengths;
Build upon and consolidate other citywide strategic efforts;
Improve performance management and service delivery;
Align staff data capabilities with city needs;
Drive lasting cultural change through leadership commitment…(More)”.
Blog by Ville Aula: “Evidence-based policymaking is a popular approach to policy that has received widespread public attention during the COVID-19 pandemic, as well as in the fight against climate change. It argues that policy choices based on rigorous, preferably scientific evidence should be given priority over choices based on other types of justification. However, delegating policymaking solely to researchers goes against the idea that policies are determined democratically.
In my recent article published in Policy & Politics, Evidence-based policymaking in the legislatures, we explored the tension between politics and evidence in national legislatures. While evidence-based policymaking has been extensively studied within governments, the legislative arena has received much less attention. The focus of the study was on understanding how legislators, legislative committees, and political parties together shape the use of evidence. We also wanted to explore how interviewees understand the timeliness and relevance of evidence, because lack of time is a key challenge within legislatures. The study is based on 39 interviews with legislators, party employees, and civil servants in Eduskunta, the national Parliament of Finland.
Our findings show that, in Finland, political parties play a key role in collecting, processing, and brokering evidence within legislatures. Finnish political parties maintain detailed policy programmes that guide their work in the legislature. The programmes are often based on extensive consultations with the party’s expert networks and evidence collection from key stakeholders. Political parties are not ready to review these programmes every time new evidence is offered to them. This reluctance can give the appearance that parties do not want to follow evidence. Nevertheless, reluctance is often necessary for political parties to maintain stable policy platforms while navigating uncertainty amidst competing sources of evidence. Party positions can be based on extensive evidence and expertise even if some other sources of evidence contradict them.
Partisan expert networks and policy experts employed by political parties, in particular, appear to be crucial in formulating the evidence base of policy programmes. The findings suggest that these groups should be a new target audience for evidence brokering. Yet political parties, their employees, and their networks have rarely been considered in research on evidence-based policymaking.
Turning to the question of timeliness, we found, as expected, that the use of evidence in the Parliament of Finland is driven by short-term reactiveness. However, in our study, we also found that short-term reactiveness and the notion of timeliness can refer to time windows ranging from months to weeks and, sometimes, merely days. The common recommendation by policy scholars to boost the uptake of evidence by making it timely and relevant is therefore far from simple…(More)”.
Article by Ina Fried and Scott Rosenberg: “The internet is beginning to fill up with more and more content generated by artificial intelligence rather than human beings, posing weird new dangers both to human society and to the AI programs themselves.
What’s happening: Experts estimate that AI-generated content could account for as much as 90% of information on the internet in a few years’ time, as ChatGPT, Dall-E and similar programs spill torrents of verbiage and images into online spaces.
That’s happening in a world that hasn’t yet figured out how to reliably label AI-generated output and differentiate it from human-created content.
The danger to human society is the now-familiar problem of information overload and degradation.
AI turbocharges the ability to create mountains of new content while it undermines the ability to check that material for reliability and recycles biases and errors in the data that was used to train it.
There’s also widespread fear that AI could undermine the jobs of people who create content today, from artists and performers to journalists, editors and publishers. The current strike by Hollywood actors and writers underlines this risk.
The danger to AI itself is newer and stranger. A raft of recent research papers has introduced a novel lexicon of potential AI disorders that are just coming into view as the technology is more widely deployed and used.
“Model collapse” is researchers’ name for what happens to generative AI models, like OpenAI’s GPT-3 and GPT-4, when they’re trained using data produced by other AIs rather than human beings.
Feed a model enough of this “synthetic” data, and the quality of the AI’s answers can rapidly deteriorate, as the systems lock in on the most probable word choices and discard the “tail” choices that keep their output interesting.
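This dynamic is easy to reproduce in a toy simulation. The sketch below is not taken from the papers cited; the vocabulary size, sample size, top-k cutoff, and sharpening factor are arbitrary assumptions chosen to show how a long-tailed “word” distribution narrows once each generation is trained only on the previous generation’s most probable outputs.

```python
# Toy illustration of "model collapse": each generation is re-estimated from text
# sampled by the previous generation, and sampling favors the most likely words,
# so the tail of the distribution disappears. All numbers are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 1000             # size of the toy vocabulary
samples_per_generation = 5000 # how much "synthetic text" each generation is trained on
top_k = 200                   # sampling keeps only the k most probable words
sharpening = 1.25             # >1 mimics a bias toward the most probable choices

# Generation 0: a long-tailed, Zipf-like "human" distribution over word choices.
probs = 1.0 / np.arange(1, vocab_size + 1)
probs /= probs.sum()

def effective_vocab(p, threshold=1e-4):
    """Count words that still carry non-negligible probability mass."""
    return int((p > threshold).sum())

for generation in range(6):
    print(f"generation {generation}: ~{effective_vocab(probs)} words still in active use")
    # Sampling step: keep only the top-k words and sharpen toward the most likely ones.
    keep = np.argsort(probs)[::-1][:top_k]
    sampled_dist = np.zeros_like(probs)
    sampled_dist[keep] = probs[keep] ** sharpening
    sampled_dist /= sampled_dist.sum()
    # "Training" step: the next model's distribution is re-estimated from a finite
    # sample of the previous model's output (synthetic data only, no human text).
    counts = rng.multinomial(samples_per_generation, sampled_dist)
    probs = counts / counts.sum()
```

Run generation by generation, the count of words still in active use shrinks: the tail choices that keep output interesting are never sampled, so they never make it into the next model’s training data.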
“Model Autophagy Disorder,” or MAD, is what one set of researchers at Rice and Stanford universities dubbed the result of AI consuming its own products.
“Habsburg AI” is what another researcher earlier this year labeled the phenomenon, likening it to inbreeding: “A system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.”…(More)”.