Hurricane Ian Destroyed Their Homes. Algorithms Sent Them Money


Article by Chris Stokel-Walker: “The algorithms that power Skai’s damage assessments are trained by manually labeling satellite images of a couple of hundred buildings in a disaster-struck area that are known to have been damaged. The software can then, at speed, detect damaged buildings across the whole affected area. A research paper on the underlying technology presented at a 2020 academic workshop on AI for disaster response claimed the auto-generated damage assessments match those of human experts with between 85 and 98 percent accuracy.
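The workflow described above — hand-label a couple of hundred buildings, fit a model, then sweep the whole affected area and compare against expert judgments — can be sketched in miniature. This is a toy stand-in, not Skai's actual pipeline: each "building" is reduced to a single invented brightness-change feature, and the model is a simple threshold rather than a deep network.

```python
import random

random.seed(0)

# Toy feature generator: damaged roofs show a larger pre/post-disaster
# change signal. All numbers here are invented for illustration.
def make_building(damaged):
    base = 0.8 if damaged else 0.2
    return base + random.uniform(-0.15, 0.15), damaged

# Step 1: manually label a couple of hundred buildings in the struck area.
labeled = [make_building(random.random() < 0.5) for _ in range(200)]

# Step 2: fit the simplest possible model -- a threshold halfway between
# the mean feature of damaged and undamaged training buildings.
dmg = [f for f, d in labeled if d]
ok = [f for f, d in labeled if not d]
threshold = (sum(dmg) / len(dmg) + sum(ok) / len(ok)) / 2

# Step 3: sweep the whole affected area at speed.
area = [make_building(random.random() < 0.3) for _ in range(10_000)]
predictions = [f > threshold for f, _ in area]

# Step 4: report agreement with expert labels (the cited paper reports
# 85-98 percent for the real system).
agreement = sum(p == d for p, (_, d) in zip(predictions, area)) / len(area)
print(f"agreement with expert labels: {agreement:.0%}")
```

On this artificially clean synthetic data the threshold separates the classes almost perfectly; real satellite imagery is far noisier, which is why the reported accuracy range spans 85 to 98 percent.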

In Florida this month, GiveDirectly sent its push notification offering $700 to any user of the Providers app with a registered address in neighborhoods of Collier, Charlotte, and Lee Counties where Google’s AI system deemed more than 50 percent of buildings had been damaged. So far, 900 people have taken up the offer, and half of those have been paid. If every eligible user accepts, the organization will pay out $2.4 million in direct financial aid.
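The eligibility rule above is a simple threshold: a neighborhood qualifies when the AI judges more than 50 percent of its buildings damaged, and each registered app user there is offered $700. A minimal sketch, using invented neighborhood figures (the real counts and user numbers are not public in this excerpt):

```python
OFFER_USD = 700

# Hypothetical assessments:
# (neighborhood, buildings judged damaged, total buildings, registered users)
assessments = [
    ("A", 620, 1000, 1200),   # 62% damaged -> eligible
    ("B", 430, 1000, 900),    # 43% damaged -> not eligible
    ("C", 710, 1000, 2300),   # 71% damaged -> eligible
]

# Users in neighborhoods that clear the >50% damage threshold.
eligible_users = sum(
    users for _, damaged, total, users in assessments
    if damaged / total > 0.50
)

# Maximum payout if every eligible user accepts the offer.
max_payout = eligible_users * OFFER_USD
print(eligible_users, max_payout)  # 3500 eligible users, $2,450,000 total
```

The same threshold logic explains the article's arithmetic: a $2.4 million ceiling at $700 per person implies roughly 3,400 eligible users across the qualifying neighborhoods.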

Some may be skeptical of automated disaster response. But in the chaos after an event like a hurricane making landfall, the conventional, human response can be far from perfect. Diaz points to an analysis GiveDirectly conducted looking at their work after Hurricane Harvey, which hit Texas and Louisiana in 2017, before the project with Google. Two out of the three areas that were most damaged and economically depressed were initially overlooked. A data-driven approach is “much better than what we’ll have from boots on the ground and word of mouth,” Diaz says.

GiveDirectly and Google’s hands-off, algorithm-led approach to aid distribution has been welcomed by some disaster assistance experts—with caveats. Reem Talhouk, a research fellow at Northumbria University’s School of Design and Centre for International Development in the UK, says that the system appears to offer a more efficient way of delivering aid. And it protects the dignity of recipients, who don’t have to queue up for handouts in public…(More)”.

The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future


Book by Orly Lobel: “Much has been written about the challenges tech presents to equality and democracy. But we can either criticize big data and automation or steer them to do better. Lobel makes a compelling argument that while we cannot stop technological development, we can direct its course according to our most fundamental values.
 
With provocative insights in every chapter, Lobel masterfully shows that digital technology frequently has a comparative advantage over humans in detecting discrimination, correcting historical exclusions, subverting long-standing stereotypes, and addressing the world’s thorniest problems: climate, poverty, injustice, literacy, accessibility, speech, health, and safety. 
 
Lobel’s vivid examples—from labor markets to dating markets—provide powerful evidence for how we can harness technology for good. The book’s incisive analysis and elegant storytelling will change the debate about technology and restore human agency over our values…(More)”.

The Transformations of Science


Essay by Geoff Anders: “In November of 1660, at Gresham College in London, an invisible college of learned men held their first meeting after 20 years of informal collaboration. They chose their coat of arms: the royal crown’s three lions of England set against a white backdrop. Their motto: “Nullius in verba,” or “take no one’s word for it.” Three years later, they received a charter from King Charles II and became what was and remains the world’s preeminent scientific institution: the Royal Society.

Three and a half centuries later, in July of 2021, even respected publications began to grow weary of a different, now-constant refrain: “Trust the science.” It was a mantra everyone was supposed to accept, repeated again and again, ad nauseam.

This new motto was the latest culmination of a series of transformations science has undergone since the founding of the Royal Society, reflecting the changing nature of science on one hand, and its expanding social role on the other. 

The present world’s preeminent system of thought now takes science as a central pillar and wields its authority to great consequence. But the story of how that came to be is, as one might expect, only barely understood…

There is no essential conflict between the state’s use of the authority of science and the health of the scientific enterprise itself. It is easy to imagine a well-funded and healthy scientific enterprise whose authority is deployed appropriately for state purposes without undermining the operation of science itself.

In practice, however, there can be a tension between state aims and scientific aims, where the state wants actionable knowledge and the imprimatur of science, often far in advance of the science getting settled. This is especially likely in response to a disruptive phenomenon that is too new for the science to have settled yet—for example, a novel pathogen with unknown transmission mechanisms and health effects.

Our recent experience of the pandemic put this tension on display, with state recommendations moving against masks, and then for masks, as the state had to make tactical decisions about a novel threat with limited information. In each case, politicians sought to adorn the recommendations with the authority of settled science; an unfortunate, if understandable, choice.

This joint partnership of science and the state is relatively new. One question worth asking is whether the development was inevitable. Science had an important flaw in its epistemic foundation, dating back to Boyle and the Royal Society—its failure to determine the proper conditions and use of scientific authority. “Nullius in verba” made some sense in 1660, before much science was settled and when the enterprise was small enough that most natural philosophers could personally observe or replicate the experiments of the others. It came to make less sense as science itself succeeded, scaled up, and acquired intellectual authority. Perhaps a better answer to the question of scientific authority would have led science to take a different course.

Turning from the past to the future, we now face the worrying prospect that the union of science and the state may have weakened science itself. Some time ago, commentators raised the specter of scientific slowdown, and more recent analysis has provided further justification for these fears. Why is science slowing? To put it simply, it may be difficult to have science be both authoritative and exploratory at the same time.

When scientists are meant to be authoritative, they’re supposed to know the answer. When they’re exploring, it’s okay if they don’t. Hence, encouraging scientists to reach authoritative conclusions prematurely may undermine their ability to explore—thereby yielding scientific slowdown. Such a dynamic may be difficult to detect, since the people who are supposed to detect it might themselves be wrapped up in a premature authoritative consensus…(More)”.

“Can AI bring deliberative democracy to the masses?”


Paper by Hélène Landemore: “A core problem in deliberative democracy is the tension between two seemingly equally important conditions of democratic legitimacy: deliberation on the one hand and mass participation on the other. Might artificial intelligence help bring quality deliberation to the masses? The paper first examines the conundrum in deliberative democracy around the tradeoff between deliberation and mass participation by returning to the seminal debate between Joshua Cohen and Jürgen Habermas about the proper model of deliberative democracy. It then turns to an analysis of the 2019 French Great National Debate, a low-tech attempt to involve millions of French citizens in a structured exercise of collective deliberation over a two-month period. Building on the shortcomings of this empirical attempt, the paper then considers two different visions for an algorithm-powered scaled-up form of mass deliberation—Mass Online Deliberation on the one hand and a multiplicity of rotating randomly selected mini-publics on the other—theorizing various ways Artificial Intelligence could play a role in either of them…(More)”.

The European Union-U.S. Data Privacy Framework


White House Fact Sheet: “Today, President Biden signed an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities (E.O.) directing the steps that the United States will take to implement the U.S. commitments under the European Union-U.S. Data Privacy Framework (EU-U.S. DPF) announced by President Biden and European Commission President von der Leyen in March of 2022. 

Transatlantic data flows are critical to enabling the $7.1 trillion EU-U.S. economic relationship.  The EU-U.S. DPF will restore an important legal basis for transatlantic data flows by addressing concerns that the Court of Justice of the European Union raised in striking down the prior EU-U.S. Privacy Shield framework as a valid data transfer mechanism under EU law. 

The Executive Order bolsters an already rigorous array of privacy and civil liberties safeguards for U.S. signals intelligence activities. It also creates an independent and binding mechanism enabling individuals in qualifying states and regional economic integration organizations, as designated under the E.O., to seek redress if they believe their personal data was collected through U.S. signals intelligence in a manner that violated applicable U.S. law.

U.S. and EU companies large and small across all sectors of the economy rely upon cross-border data flows to participate in the digital economy and expand economic opportunities. The EU-U.S. DPF represents the culmination of a joint effort by the United States and the European Commission to restore trust and stability to transatlantic data flows and reflects the strength of the enduring EU-U.S. relationship based on our shared values…(More)”.

Can critical policy studies outsmart AI? Research agenda on artificial intelligence technologies and public policy


Paper by Regine Paul: “The insertion of artificial intelligence technologies (AITs) and data-driven automation in public policymaking should be a metaphorical wake-up call for critical policy analysts. Both its wide representation as techno-solutionist remedy in otherwise slow, inefficient, and biased public decision-making and its regulation as a matter of rational risk analysis are conceptually flawed and democratically problematic. To ‘outsmart’ AI, this article stimulates the articulation of a critical research agenda on AITs and public policy, outlining three interconnected lines of inquiry for future research: (1) interpretivist disclosure of the norms and values that shape perceptions and uses of AITs in public policy, (2) exploration of AITs in public policy as a contingent practice of complex human-machine interactions, and (3) emancipatory critique of how ‘smart’ governance projects and AIT regulation interact with (global) inequalities and power relations…(More)”.

Governing the Environment-Related Data Space


Stefaan G. Verhulst, Anthony Zacharzewski and Christian Hudson at Data & Policy: “Today, The GovLab and The Democratic Society published their report, “Governing the Environment-Related Data Space”, written by Jörn Fritzenkötter, Laura Hohoff, Paola Pierri, Stefaan G. Verhulst, Andrew Young, and Anthony Zacharzewski. The report captures the findings of their joint research centered on the responsible and effective reuse of environment-related data to achieve greater social and environmental impact.

Environment-related data (ERD) encompasses numerous kinds of data across a wide range of sectors. It can best be defined as data related to any element of the Driver-Pressure-State-Impact-Response (DPSIR) Framework. If leveraged effectively, this wealth of data could help society establish a sustainable economy, take action against climate change, and support environmental justice — as recognized recently by French President Emmanuel Macron and UN Secretary General’s Special Envoy for Climate Ambition and Solutions Michael R. Bloomberg when establishing the Climate Data Steering Committee.

While several actors are working to improve access to, and promote the (re)use of, ERD, two key challenges hamper progress on this front: data asymmetries and data enclosures. Data asymmetries arise because the ever-increasing amounts of ERD are scattered across diverse actors, with larger and more powerful stakeholders often enjoying unequal access. Asymmetries lead to problems with accessibility and findability (data enclosures), limiting sharing and collaboration and stunting the ability to use data to its full potential to address public ills.

The risks and costs of data enclosure and data asymmetries are high. Information bottlenecks cause resources to be misallocated, slow scientific progress, and limit our understanding of the environment.

A fit-for-purpose governance framework could offer a solution to these barriers by creating space for more systematic, sustainable, and responsible data sharing and collaboration. Better data sharing can in turn ease information flows, mitigate asymmetries, and minimize data enclosures.

And there are some clear criteria for an effective governance framework…(More)”

How one group of ‘fellas’ is winning the meme war in support of Ukraine


Article by Suzanne Smalley: “The North Atlantic Fella Organization, or NAFO, has arrived.

Ukraine’s Defense Ministry celebrated the group on Twitter for waging a “fierce fight” against Kremlin trolls. And Rep. Adam Kinzinger, R-Ill., tweeted that he was “self-declaring as a proud member of #NAFO” and “the #fellas shall prevail.”

The brainchild of former Marine Matt Moores, NAFO launched in May and quickly blew up on Twitter. It’s become something of a movement, drawing support from military and cybersecurity circles whose members circulate its memes backing Ukraine in its war against Russia.

“The power of what we’re doing is that instead of trying to come in and point-by-point refute, and argue about what’s true and what isn’t, it’s coming and saying, ‘Hey, that’s dumb,’” Moores said during a panel on Wednesday at the Center for Strategic and International Studies in Washington. “And the moment somebody’s replying to a cartoon dog online, you’ve lost if you work for the government of Russia.”

Memes have figured heavily in the information war following the Russian invasion. The Ukrainian government has proven eager to highlight memes on agency websites and officials have been known to personally thank online communities that spread anti-Russian memes. The NAFO meme shared by the defense ministry in August showed a Shiba Inu dog in a military uniform appearing to celebrate a missile launch.

The Shiba Inu has long been a motif in internet culture. According to Vice’s Motherboard, the use of Shiba Inu to represent a “fella” waging online war against the Russians dates to at least May when an artist started rewarding fellas who donated money to the Georgian Legion by creating customized fella art for online use…(More)”.

Policy evaluation in times of crisis: key issues and the way forward


OECD Paper: “This paper provides an overview of the challenges policy evaluators faced in the context of COVID-19, due both to pandemic-specific hurdles and to resource constraints within governments. It then surveys OECD governments’ evaluation practices during COVID-19, with a specific emphasis on the actors, the aims, and the methods involved. Finally, the third section sets out lessons for future policy evaluations in light of advances made during the period, both for evaluating crisis responses and for the evaluation field in general…(More)”.

AI Audit Washing and Accountability


Paper by Ellen P. Goodman and Julia Trehu: “Algorithmic decision systems, many using artificial intelligence, are reshaping the provision of private and public services across the globe. There is an urgent need for algorithmic governance. Jurisdictions are adopting or considering mandatory audits of these systems to assess compliance with legal and ethical standards or to provide assurance that the systems work as advertised. The hope is that audits will make public agencies and private firms accountable for the harms their algorithmic systems may cause, and thereby lead to harm reductions and more ethical tech. This hope will not be realized so long as the existing ambiguity around the term “audit” persists, and until audit standards are adequate and well-understood. The tacit expectation that algorithmic audits will function like established financial audits or newer human rights audits is fanciful at this stage. From the European Union, where algorithmic audit requirements are most advanced, to the United States, where they are nascent, core questions need to be addressed for audits to become reliable AI accountability mechanisms. In the absence of greater specification and more independent auditors, the risk is that AI auditing becomes AI audit washing. This paper first reports on proposed and enacted transatlantic AI or algorithmic audit provisions. It then draws on the technical, legal, and sociotechnical literature to address the who, what, why, and how of algorithmic audits, contributing to the literature advancing algorithmic governance…(More)“.