Governing the Environment-Related Data Space


Stefaan G. Verhulst, Anthony Zacharzewski, and Christian Hudson at Data & Policy: “Today, The GovLab and The Democratic Society published their report, “Governing the Environment-Related Data Space”, written by Jörn Fritzenkötter, Laura Hohoff, Paola Pierri, Stefaan G. Verhulst, Andrew Young, and Anthony Zacharzewski. The report captures the findings of their joint research, centered on the responsible and effective reuse of environment-related data to achieve greater social and environmental impact.

Environment-related data (ERD) encompasses numerous kinds of data across a wide range of sectors. It can best be defined as data related to any element of the Driver-Pressure-State-Impact-Response (DPSIR) Framework. If leveraged effectively, this wealth of data could help society establish a sustainable economy, take action against climate change, and support environmental justice — as recognized recently by French President Emmanuel Macron and UN Secretary-General’s Special Envoy for Climate Ambition and Solutions Michael R. Bloomberg when establishing the Climate Data Steering Committee.

While several actors are working to improve access to, and promote the (re)use of, ERD, two key challenges hamper progress on this front: data asymmetries and data enclosures. Data asymmetries arise because ever-increasing amounts of ERD are scattered across diverse actors, with larger and more powerful stakeholders often enjoying privileged access. These asymmetries create problems of accessibility and findability (data enclosures), which limit sharing and collaboration and stunt the ability to use data to its full potential to address public ills.

The risks and costs of data enclosure and data asymmetries are high. Information bottlenecks cause resources to be misallocated, slow scientific progress, and limit our understanding of the environment.

A fit-for-purpose governance framework could offer a solution to these barriers by creating space for more systematic, sustainable, and responsible data sharing and collaboration. Better data sharing can in turn ease information flows, mitigate asymmetries, and minimize data enclosures.

And there are some clear criteria for an effective governance framework…(More)”
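The DPSIR Framework cited in the report excerpt above is, in practice, a classification scheme for environment-related datasets. As a minimal, hypothetical sketch (the dataset names and tagging scheme below are illustrative, not taken from the report), ERD catalogue entries could be tagged by the DPSIR element they describe, which is one simple way such data can be made more findable:

```python
from enum import Enum

class DPSIR(Enum):
    """Elements of the Driver-Pressure-State-Impact-Response framework."""
    DRIVER = "driver"        # e.g. population growth, energy demand
    PRESSURE = "pressure"    # e.g. emissions, land-use change
    STATE = "state"          # e.g. air quality, biodiversity levels
    IMPACT = "impact"        # e.g. health effects, crop losses
    RESPONSE = "response"    # e.g. regulation, restoration programmes

# Hypothetical catalogue entries, each tagged with one DPSIR element.
catalogue = [
    {"dataset": "national_co2_emissions", "element": DPSIR.PRESSURE},
    {"dataset": "river_water_quality", "element": DPSIR.STATE},
    {"dataset": "clean_air_act_measures", "element": DPSIR.RESPONSE},
]

def by_element(entries, element):
    """Return the names of catalogue entries tagged with a DPSIR element."""
    return [e["dataset"] for e in entries if e["element"] is element]

print(by_element(catalogue, DPSIR.STATE))  # ['river_water_quality']
```

Consistent tagging of this kind is one concrete way a governance framework could address the findability problems the excerpt describes.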

Is digital feedback useful in impact evaluations? It depends.


Article by Lois Aryee and Sara Flanagan: “Rigorous impact evaluations are essential to determining program effectiveness. Yet they are often time-intensive and costly, and may fail to provide the rapid feedback necessary for real-time decision-making and the course corrections along the way that maximize programmatic impact. Capturing feedback that’s both quick and valuable requires a delicate balance.

In an ongoing impact evaluation we are conducting in Ghana, a country where smoking rates among adolescent girls are increasing with alarming health implications, we have been evaluating a social marketing campaign’s effectiveness at changing girls’ behavior and reducing smoking prevalence with support from the Bill & Melinda Gates Foundation. Although we’ve been taking a traditional approach to this impact evaluation using a year-long, in-person panel survey, we were interested in using digital feedback as a means to collect more timely data on the program’s reach and impact. To do this, we explored several rapid digital feedback approaches including social media, text message, and Interactive Voice Response (IVR) surveys to determine their ability to provide quicker, more actionable insights into the girls’ awareness of, engagement with, and feelings about the campaign. 

Digital channels seemed promising given our young, urban population of interest; however, collecting feedback this way comes with considerable trade-offs. Digital feedback poses risks to both equity and quality, potentially reducing the population we’re able to reach and the value of the information we’re able to gather. The truth is that context matters, and tailored approaches are critical when collecting feedback, just as they are when designing programs. Below are three lessons to consider when adopting digital feedback mechanisms into your impact evaluation design. 

Lesson 1: A high number of mobile connections does not mean the target population has access to mobile phones…

Lesson 2: High literacy rates and “official” languages do not mean most people are able to read and write easily in a particular language…

Lesson 3: Gathering data on taboo topics may benefit from a personal touch…(More)”.

How one group of ‘fellas’ is winning the meme war in support of Ukraine


Article by Suzanne Smalley: “The North Atlantic Fella Organization, or NAFO, has arrived.

Ukraine’s Defense Ministry celebrated the group on Twitter for waging a “fierce fight” against Kremlin trolls. And Rep. Adam Kinzinger, R-Ill., tweeted that he was “self-declaring as a proud member of #NAFO” and “the #fellas shall prevail.”

The brainchild of former Marine Matt Moores, NAFO launched in May and quickly blew up on Twitter. It’s become something of a movement, drawing support from military and cybersecurity circles whose members circulate its memes backing Ukraine in its war against Russia.

“The power of what we’re doing is that instead of trying to come in and point-by-point refute, and argue about what’s true and what isn’t, it’s coming and saying, ‘Hey, that’s dumb,’” Moores said during a panel on Wednesday at the Center for Strategic and International Studies in Washington. “And the moment somebody’s replying to a cartoon dog online, you’ve lost if you work for the government of Russia.”

Memes have figured heavily in the information war following the Russian invasion. The Ukrainian government has proven eager to highlight memes on agency websites and officials have been known to personally thank online communities that spread anti-Russian memes. The NAFO meme shared by the defense ministry in August showed a Shiba Inu dog in a military uniform appearing to celebrate a missile launch.

The Shiba Inu has long been a motif in internet culture. According to Vice’s Motherboard, the use of the Shiba Inu to represent a “fella” waging online war against the Russians dates to at least May, when an artist started rewarding fellas who donated money to the Georgian Legion by creating customized fella art for online use…(More)”.

AI & Cities: Risks, Applications and Governance


Report by UN-Habitat: “Artificial intelligence is manifesting at an unprecedented rate in urban centers, often with significant risks and little oversight. Using AI technologies without the appropriate governance mechanisms and without adequate consideration of how they affect people’s human rights can have negative, even catastrophic, effects.

This report is part of UN-Habitat’s strategy for guiding local authorities in realizing a people-centered digital transformation process in their cities and settlements…(More)”.

Call it data liberation day: Patients can now access all their health records digitally  


Article by Casey Ross: “The American Revolution had July 4. The Allies had D-Day. And now U.S. patients, held down for decades by information hoarders, can rally around a new turning point, October 6, 2022 — the day they got their health data back.

Under federal rules taking effect Thursday, health care organizations must give patients unfettered access to their full health records in digital format. No more long delays. No more fax machines. No more exorbitant charges for printed pages.

Just the data, please — now…The new federal rules — passed under the 21st Century Cures Act — are designed to shift the balance of power to ensure that patients can not only get their data, but also choose who else to share it with. It is the jumping-off point for a patient-mediated data economy that lets consumers in health care benefit from the fluidity they’ve had for decades in banking: they can move their information easily and electronically, and link their accounts to new services and software applications.

“To think that we actually have greater transparency about our personal finances than about our own health is quite an indictment,” said Isaac Kohane, a professor of biomedical informatics at Harvard Medical School. “This will go some distance toward reversing that.”

Even with the rules now in place, health data experts said change will not be fast or easy. Providers and other data holders — who have dug in their heels at every step — can still withhold information under certain exceptions. And many questions remain about protocols for sharing digital records, how to verify access rights, and even what it means to give patients all their data. Does that extend to every measurement in the ICU? Every log entry? Every email? And how will it all get standardized?…(More)”

Blueprint for an AI Bill of Rights


The White House: “…To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.

  • Safe and Effective Systems
  • Data Privacy
  • Notice and Explanation
  • Algorithmic Discrimination Protections
  • Human Alternatives, Consideration, and Fallback…(More)”.

When do reminders work?


Paper by Kai Barron, Mette Trier Damgaard and Christina Gravert: “An extensive literature shows that reminders can successfully change behavior. Yet, there exists substantial unexplained heterogeneity in their effectiveness, both (i) across studies and (ii) across individuals within a particular study. This paper investigates when and why reminders work. We develop a theoretical model that highlights three key mechanisms through which reminders may operate. To test the predictions of the model, we run a nationwide field experiment on medical adherence with over 4,000 pregnant women in South Africa and document several key results. First, we find an extremely strong baseline demand for reminders. This demand increases after exposure to reminders, suggesting that individuals learn how valuable reminders are for freeing up memory resources. Second, stated adherence is increased by pure reminders and by reminders containing a moral suasion component, but interestingly, reminders containing health information reduce adherence in our setting. Using a structural model, we show that heterogeneity in memory costs (or, equivalently, annoyance costs) is crucial for explaining the observed behavior…(More)”.

The Participation Paradox


Book by Luke Sinwell: “The last two decades have ushered in what has become known as a participatory revolution, with consultants, advisors, and non-profits called into communities, classrooms, and corporations alike to listen to ordinary people. With exclusively bureaucratic approaches no longer in vogue, authorities now opt for “open” forums for engagement.

In The Participation Paradox Luke Sinwell argues that amplifying the voices of the poor and dispossessed is often a quick fix incapable of delivering concrete and lasting change. The ideology of public consultation and grassroots democracy can be a smokescreen for a cost-effective means by which to implement top-down decisions. As participation has become mainstreamed by governments around the world, so have its radical roots become tamed by neoliberal forces that reinforce existing relationships of power. Drawing from oral testimonies and ethnographic research, Sinwell presents a case study of one of the poorest and most defiant Black informal settlements in Johannesburg, South Africa – Thembelihle, which consists of more than twenty thousand residents – highlighting the promises and pitfalls of participatory approaches to development.

Providing a critical lens for understanding grassroots democracy, The Participation Paradox foregrounds alternatives capable of reclaiming participation’s emancipatory potential…(More)”.

The EU wants to put companies on the hook for harmful AI


Article by Melissa Heikkilä: “The EU is creating new rules to make it easier to sue AI companies for harm. A bill unveiled this week, which is likely to become law in a couple of years, is part of Europe’s push to prevent AI developers from releasing dangerous systems. And while tech companies complain it could have a chilling effect on innovation, consumer activists say it doesn’t go far enough. 

Powerful AI technologies are increasingly shaping our lives, relationships, and societies, and their harms are well documented. Social media algorithms boost misinformation, facial recognition systems are often highly discriminatory, and predictive AI systems that are used to approve or reject loans can be less accurate for minorities.  

The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation…(More)”.

Smart cities: reviewing the debate about their ethical implications


Paper from Marta Ziosi, Benjamin Hewitt, Prathm Juneja, Mariarosaria Taddeo & Luciano Floridi: “This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: (1) network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; (2) post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities; (3) social inclusion, expressed in the aspects of citizen participation and inclusion, and inequality and discrimination; and (4) sustainability, with a specific focus on the environment as an element to protect but also as a strategic element for the future. Given the persisting disagreements around the definition of a smart city, the article identifies in these four dimensions a more stable reference framework within which ethical concerns can be clustered and discussed. Identifying these dimensions makes possible a review of the ethical implications of smart cities that is transversal to their different types and resilient towards the unsettled debate over their definition…(More)”.