Generative AI is set to transform crisis management


Article by Ben Ellencweig, Mihir Mysore, Jon Spaner: “…Generative AI presents transformative potential, especially in disaster preparedness, response, and recovery. As billion-dollar disasters become more frequent – such disasters typically cost the U.S. roughly $120 billion each year – and “polycrises,” or multiple simultaneous crises (e.g., hurricanes combined with cyber disruptions), proliferate, the significant impact that Generative AI can have, especially with proper leadership focus, has become a focal point of interest.

Generative AI’s speed is crucial in emergencies, as it enhances information access, decision-making capabilities, and early warning systems. Beyond organizational benefits for those who adopt Generative AI, its applications include real-time data analysis, scenario simulations, sentiment analysis, and simplifying access to complex information. Generative AI’s versatility offers a wide variety of promising applications in disaster relief, opening the door to real-time analyses with tangible, real-world applications.

Early warning systems and sentiment analysis: Generative AI excels in early warning systems and sentiment analysis by scanning accurate, real-time data across response clusters. By enabling connections between disparate systems, Generative AI holds the potential to provide more accurate early warnings. Integrated with traditional and social media, Generative AI can also offer precise sentiment analysis, empowering leaders to understand public sentiment, detect bad actors, identify misinformation, and tailor communications for accurate information dissemination.
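
As a rough illustration of what such LLM-based sentiment and misinformation triage might look like in practice (a minimal sketch, not drawn from the article; the model name, prompt, and label set are assumptions for illustration only):

```python
# Minimal, illustrative sketch of LLM-based sentiment/misinformation triage for
# disaster-related social media posts. The model name, prompt, and label set are
# assumptions for illustration; they are not described in the article.
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

posts = [
    "Water is rising fast on Elm Street; the shelter on 5th is already full.",
    "Heard the levee break was staged to force evacuations. Stay in your homes!",
]

def label_post(post: str) -> str:
    """Ask the model for a sentiment label and a misinformation flag."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable chat model would do
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You triage disaster-related social media posts. Reply with JSON "
                    "containing 'sentiment' (positive, neutral, negative) and "
                    "'possible_misinformation' (true or false)."
                ),
            },
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content

for post in posts:
    print(label_post(post))
```

In a real deployment the labeled posts would feed a dashboard or alerting workflow rather than being printed, and outputs would need human review before informing communications.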

Scenario simulations: Generative AI holds the potential to enhance catastrophe modeling for better crisis assessment and resource allocation. It creates simulations for emergency planners, improving modeling for various disasters (e.g., hurricanes, floods, wildfires) using historical data such as location, community impact, and financial consequence. Often, simulators perform work “so large that it exceeds human capacity (for example, finding flooded or unusable roads across a large area after a hurricane).” …(More)”
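
To make the scenario-simulation idea above concrete, here is a deliberately simple toy example: a bootstrap Monte Carlo over historical event losses. It is not the generative-AI catastrophe modeling the authors describe, and all numbers are invented, but it shows the kind of loss distribution emergency planners work from.

```python
# Toy scenario simulation (illustrative only): a bootstrap Monte Carlo over
# historical event losses, not the generative-AI catastrophe models the article
# describes. All numbers below are made up for the example.
import random
import statistics

historical_losses_musd = [120, 45, 300, 15, 800, 60, 210]  # hypothetical per-event losses, $M

def simulate_seasons(expected_events: float = 3.0, n_runs: int = 10_000) -> list[float]:
    """Simulate many seasons: draw a random event count, then resample a loss
    from the historical record for each event (simple bootstrap)."""
    totals = []
    for _ in range(n_runs):
        n_events = max(0, round(random.gauss(expected_events, 1.0)))
        totals.append(sum(random.choice(historical_losses_musd) for _ in range(n_events)))
    return totals

totals = sorted(simulate_seasons())
print(f"median simulated season loss: ${statistics.median(totals):.0f}M")
print(f"95th-percentile season loss:  ${totals[int(0.95 * len(totals))]:.0f}M")
```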

Narrative Corruptions


Review by Mike St. Thomas: “…The world outside academia has grown preoccupied with narrative recently. Despite the rise of Big Data (or perhaps because of it), we are more keenly aware of how we use stories to explain what happens in the world, wield political power, and understand ourselves. And we are discovering that these stories can be used for good or ill. From the resurgence of nationalism on the right to the rise of identity politics on the left, the stories we tell about ourselves matter a great deal. As marketing guru Annette Simmons puts it, “Whoever tells the best story wins.” The result has been, in part, the current polarization in American life. An obvious example is the persistence of the false narrative of a stolen election, but at a deeper level, more than ever we now seem inclined—conditioned, even—to judge everything with an up or down vote.

Brooks is less than thrilled about these developments. “It was as if a fledgling I had nourished had become a predator devouring reality in the name of story,” he writes at the outset of Seduced by Story, in a clear attempt to distance himself from what he sees as the abuses of narrative in the years since Reading for the Plot was published. Though his lament contains a strain of academic pearl-clutching, Brooks’s concern is warranted. A narrative is, by nature, a hermeneutic circle—the elements of a plot gaining significance through their relation to each other—and its ever-closing loop threatens to blind its audience to the real.

Though in his new book Brooks does not back down from the claims of his earlier one, he argues that while stories may be unavoidable, they need to be examined and critiqued constantly. A banal thesis, perhaps, but still true. After a preliminary chapter that addresses corporate storytelling and the removal of Confederate monuments, he revisits terrain covered in Reading for the Plot by examining how narratives work, using examples from novelists such as Honoré de Balzac, Henry James, Marcel Proust, and Sir Arthur Conan Doyle.

Within Seduced by Story are the seeds of a more trenchant claim about the ultimate purpose of storytelling—and about how our narratives have become corrupted. Brooks recalls a musical advertising slogan from his youth: “If you’ve got the time / We’ve got the beer. Miller Beer.” Jingles like this were pithy, memorable, and quite effective at communicating a quality of the product, or, more likely, at appealing to a specific emotion of the listener…(More)”.

Evidence-Based Government Is Alive and Well


Article by Zina Hutton: “A desire to discipline the whimsical rule of despots.” That’s what Gary Banks, a former chairman of Australia’s Productivity Commission, said in a 2009 speech gave rise to evidence-based policy back in the 14th century. Evidence-based policymaking isn’t a new style of government, but it’s one with well-known roadblocks that elected officials have been working around in order to implement it more widely.

Evidence-based policymaking relies on evidence — facts, data, expert analysis — to shape aspects of long- and short-term policy decisions. It’s not just about collecting data, but also applying it and experts’ analysis to shape future policy. Whether it’s using school enrollment numbers to justify building a new park in a neighborhood or scientists collaborating on analysis of wastewater to try to “catch” illness spread in a community before it becomes unmanageable, evidence-based policy uses facts to help elected and appointed officials decide what funds and other resources to allocate in their communities.

Problems with evidence-based governing have been around for years. They range from a lack of communication between the people designing policies and their related programs and the people implementing them, to the way local governments struggle to recruit and retain employees. Resource allocation also shapes the decisions some cities make when it comes to seeking out and using data. This can be seen in the way larger cities, with access to proportionately larger budgets, research from state universities within city limits, and a larger workforce, have had more success with evidence-based policymaking.
“The largest cities have more personnel, more expertise, more capacity, whether that’s for collecting administrative data and monitoring it, whether that’s doing open data portals, or dashboards, or whether that’s doing things like policy analysis or program evaluation,” says Karen Mossberger, the Frank and June Sackton Professor in the School of Public Affairs at Arizona State University. “It takes expert personnel, it takes people within government with the skills and the capacity, it takes time.”

Roadblocks aside, state and local governments are finding innovative ways to collaborate with one another on data-focused projects and policy, seeking ways to make up for the problems that impacted early efforts at evidence-based governance. More state and local governments now recruit data experts at every level to collect, analyze and explain the data generated by residents, aided by advances in technology and increased access to researchers…(More)”.

Who owns data about you?


Article by Wendy Wong: “The ascendancy of artificial intelligence hinges on vast data accrued from our daily activities. In turn, data train advanced algorithms, fuelled by massive amounts of computing power. Together, they form the critical trio driving AI’s capabilities. Because of its human sources, data raise an important question: who owns data, and how do the data add up when they’re about our mundane, routine choices?

It often helps to think through modern problems with historical anecdotes. The case of Henrietta Lacks, a Black woman living in Baltimore stricken with cervical cancer, and her everlasting cells, has become well-known because of Rebecca Skloot’s book, The Immortal Life of Henrietta Lacks, and a movie starring Oprah Winfrey. Unbeknownst to her, Lacks’s medical team removed her cancer cells and sent them to a lab to see if they would grow. While Lacks died of cancer in 1951, her cells didn’t. They kept going, in petri dishes in labs, all the way through to the present day.

The unprecedented persistence of Lacks’s cells led to the creation of the HeLa cell line. Her cells underpin various medical technologies, from in-vitro fertilization to polio and COVID-19 vaccines, generating immense wealth for pharmaceutical companies. HeLa is a co-creation. Without Lacks or scientific motivation, there would be no HeLa.

The case raises questions about consent and ownership. That her descendants recently settled a lawsuit against Thermo Fisher Scientific, a biotechnology company that monetized products made from HeLa cells, echoes the continuing discourse surrounding data ownership and rights. Until the settlement, just one co-creator was reaping all the financial benefits of that creation.

The Lacks family’s legal battle centred on a human-rights claim. Their situation was rooted in the impact of Lacks’s cells on medical science and the intertwined racial inequalities that lead to disparate medical outcomes. Since Lacks’s death, the family has struggled while biotech companies have profited.

These “tissue issues” often don’t favour the individuals providing the cells or body parts. In Moore v. Regents of the University of California, the Supreme Court of California deemed body parts “garbage” once separated from the individual. The ruling highlights a harsh legal reality: individuals don’t necessarily retain rights, financial or otherwise, to parts of their body. Another federal case, Washington University v. Catalona, rejected ownership claims based on the “feeling” that tissue belongs to the person it came from.

We can liken this characterization of body parts to how we often think about data taken from people. When we call data “detritus” or “exhaust,” we dehumanize the thoughts, behaviours and choices that generate those data. Do we really want to say that data, once created, are a resource for others’ exploitation?…(More)”.

NYC Releases Plan to Embrace AI, and Regulate It


Article by Sarah Holder: “New York City Mayor Eric Adams unveiled a plan for adopting and regulating artificial intelligence on Monday, highlighting the technology’s potential to “improve services and processes across our government” while acknowledging the risks.

The city also announced it is piloting an AI chatbot to answer questions about opening or operating a business through its website MyCity Business.

NYC agencies have reported using more than 30 tools that fit the city’s definition of algorithmic technology, including to match students with public schools, to track foodborne illness outbreaks and to analyze crime patterns. As the technology gets more advanced, and the implications of algorithmic bias, misinformation and privacy concerns become more apparent, the city plans to set policy around new and existing applications…

New York’s strategy, developed by the Office of Technology and Innovation with the input of city agency representatives and outside technology policy experts, doesn’t itself establish any rules and regulations around AI, but lays out a timeline and blueprint for creating them. It emphasizes the need for education and buy-in from both New York constituents and city employees. Within the next year, the city plans to start holding listening sessions with the public and briefing city agencies on how and why to use AI in their daily operations. The city has also given itself a year to start work on piloting new AI tools, and two to create standards for AI contracts….

Stefaan Verhulst, a research professor at New York University and the co-founder of The GovLab, says that especially during a budget crunch, leaning on AI offers cities opportunities to make evidence-based decisions quickly and with fewer resources. Among the potential use cases he cited are identifying areas most in need of affordable housing, and responding to public health emergencies with data…(More) (Full plan)”.

Zero-Problem Philanthropy 


Article by Christian Seelos: “…problem-solving approaches often overlook the dynamics of problem supply, the ongoing creation of problems. This is apparent in daily news reports, which indicate that our societies generate both new and old problems at a faster rate than we can ever hope to solve them. Even solutions that “work” can have negative side-effects that then generate new problems. Climate change, an undesirable side-effect of the fantastic innovation of using fossil fuels for energy, is one example. The life-saving invention of antibiotics has created mutated bacteria that now resist treatment. Indebted households, violence against poor women, and alcoholism can be side-effects of well-intended, innovative microfinance solutions. These side-effects require additional solutions that are often urgent and costly, leading to a never-ending cycle of problems and solutions.

Unfortunately, our blind faith in solutions and the capabilities of new technologies can lead to a careless attitude towards creating problems. We tend to overlook the importance of problems as indicators of deeper issues, instead glorifying the innovators and their solutions. This mindset can be problematic, as it reduces our role as philanthropists to playing catch-up and fails to acknowledge the possibility of fundamental flaws in our approach.

Russell Ackoff, a pioneering systems thinker and organization scholar, famously described the dangers of thinking in terms of problem-solving because “we walk into the future facing the past—we move away from, rather than toward, something. This often results in unforeseen consequences that are more distasteful than the deficiencies removed.” Ackoff highlights our tendency to be reactive rather than proactive in addressing social problems. What would it take to shift from a reactive, past-oriented solution perspective to a proactive philanthropy oriented towards a healthy future that does not create so many problems?…(More)”.

Think, before you nudge: those who pledge to eco-friendly diets respond more effectively to a nudge


Article (and paper) by Sanchayan Banerjee: “We appreciate the incredible array of global cuisines available to us. Despite rising prices, we enjoy a wide variety of food options, including an abundance of meats that our grandparents, with their limited access, could only dream of. However, this diverse culinary landscape comes at a price – current food choices contribute significantly to carbon emissions and conflict with our climate objectives. Transitioning towards more eco-friendly diets is therefore crucial.

Instead of imposing strict measures or raising costs, researchers have employed subtle “nudges” (interventions that gently steer individuals toward socially beneficial choices) to reduce meat consumption. These nudges aim to modify how food choices are presented to consumers without imposing choices on them. Nevertheless, expanding the use of these nudges has proven to be a complex task, as it sometimes raises ethical concerns about whether people are fully aware of the messages encouraging them to change their behaviour. In the context of diets, which are deeply personal, researchers have argued that nudging can be ethically dubious. What business do we have in telling people what to eat?

To address these challenges, a novel approach in behavioral science, known as “nudge+”, can empower individuals to reflect on their choices and encourage meaningful shifts towards more environmentally friendly behaviours. A nudge+ is a combination of a nudge with an encouragement to think…(More)”.

How a billionaire-backed network of AI advisers took over Washington


Article by Brendan Bordelon: “An organization backed by Silicon Valley billionaires and tied to leading artificial intelligence firms is funding the salaries of more than a dozen AI fellows in key congressional offices, across federal agencies and at influential think tanks.

The fellows funded by Open Philanthropy, which is financed primarily by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz and his wife Cari Tuna, are already involved in negotiations that will shape Capitol Hill’s accelerating plans to regulate AI. And they’re closely tied to a powerful influence network that’s pushing Washington to focus on the technology’s long-term risks — a focus critics fear will divert Congress from more immediate rules that would tie the hands of tech firms.

Acting through the little-known Horizon Institute for Public Service, a nonprofit that Open Philanthropy effectively created in 2022, the group is funding the salaries of tech fellows in key Senate offices, according to documents and interviews…Current and former Horizon AI fellows with salaries funded by Open Philanthropy are now working at the Department of Defense, the Department of Homeland Security and the State Department, as well as in the House Science Committee and Senate Commerce Committee, two crucial bodies in the development of AI rules. They also populate key think tanks shaping AI policy, including the RAND Corporation and Georgetown University’s Center for Security and Emerging Technology, according to the Horizon web site…

In the high-stakes Washington debate over AI rules, Open Philanthropy has long been focused on one slice of the problem — the long-term threats that future AI systems might pose to human survival. Many AI thinkers see those as science-fiction concerns far removed from the current AI harms that Washington should address. And they worry that Open Philanthropy, in concert with its web of affiliated organizations and experts, is shifting the policy conversation away from more pressing issues — including topics some leading AI firms might prefer to keep off the policy agenda…(More)”.

Deliberation is no silver bullet for the ‘problem’ of populism


Article by Kristof Jacobs: “Populists are not satisfied with the way democracy works nowadays. They do not reject liberal democracy outright, but want it to change. Indeed, they feel the political elite is unresponsive. Not surprisingly, then, populist parties thrive in settings where there is a widespread feeling that politicians do not listen to the people.

What if… decision-makers gave citizens a voice in the decision-making process? In fact, this is happening across the globe. Democratic innovations (decision-making processes that aim to deepen citizens’ participation and engagement in political decision-making) are ever more popular. They come in many shapes and forms, such as referendums, deliberative mini-publics or participatory budgeting. Deliberative democratic innovations in particular are popular, as is evidenced by the many nation-level citizens’ assemblies on climate change. We have seen such assemblies not only in France, but also in the UK, Germany, Ireland, Luxembourg, Denmark, Spain and Austria.

Scholars of deliberation are optimistic about the potential of such deliberative events. In one often-cited piece in Science, several prominent scholars of deliberation contend that ‘[d]eliberation promotes considered judgment and counteracts populism’.

But is that optimism warranted? What does the available empirical research tell us? To examine this, one must distinguish between populist citizens and populist parties…(More)”.

How ChatGPT and other AI tools could disrupt scientific publishing


Article by Gemma Conroy: “When radiologist Domenico Mastrodicasa finds himself stuck while writing a research paper, he turns to ChatGPT, the chatbot that produces fluent responses to almost any query in seconds. “I use it as a sounding board,” says Mastrodicasa, who is based at the University of Washington School of Medicine in Seattle. “I can produce a publication-ready manuscript much faster.”

Mastrodicasa is one of many researchers experimenting with generative artificial-intelligence (AI) tools to write text or code. He pays for ChatGPT Plus, the subscription version of the bot based on the large language model (LLM) GPT-4, and uses it a few times a week. He finds it particularly useful for suggesting clearer ways to convey his ideas. Although a Nature survey suggests that scientists who use LLMs regularly are still in the minority, many expect that generative AI tools will become regular assistants for writing manuscripts, peer-review reports and grant applications.

Those are just some of the ways in which AI could transform scientific communication and publishing. Science publishers are already experimenting with generative AI in scientific search tools and for editing and quickly summarizing papers. Many researchers think that non-native English speakers could benefit most from these tools. Some see generative AI as a way for scientists to rethink how they interrogate and summarize experimental results altogether — they could use LLMs to do much of this work, meaning less time writing papers and more time doing experiments…(More)”.