Poor data on groundwater jeopardizes climate resilience


Rebecca Root at Devex: “A lack of data on groundwater is impeding water management and could jeopardize climate resilience efforts in some places, according to recent research by WaterAid and the HSBC Water Programme.

Groundwater is found underground in the gaps between soil, sand, and rock. Over 2.5 billion people are thought to depend on groundwater — which is more resilient to drought than surface water sources — for drinking.

The report looked at groundwater security and sustainability in Bangladesh, Ghana, India, Nepal, and Nigeria, where collectively more than 160 million people lack access to clean water close to home. It found that groundwater data tends to be limited — including on issues such as overextraction, pollution, and contamination — leaving little evidence for decision-makers to consider for its management.

“There’s a general lack of information and data … which makes it very hard to manage the resource sustainably,” said Vincent Casey, senior water, sanitation, and hygiene manager for waste at WaterAid…(More)”.

High-Stakes AI Decisions Need to Be Automatically Audited


Oren Etzioni and Michael Li in Wired: “…To achieve increased transparency, we advocate for auditable AI, an AI system that is queried externally with hypothetical cases. Those hypothetical cases can be either synthetic or real—allowing automated, instantaneous, fine-grained interrogation of the model. It’s a straightforward way to monitor AI systems for signs of bias or brittleness: What happens if we change the gender of a defendant? What happens if the loan applicants reside in a historically minority neighborhood?

Auditable AI has several advantages over explainable AI. Having a neutral third-party investigate these questions is a far better check on bias than explanations controlled by the algorithm’s creator. Second, this means the producers of the software do not have to expose trade secrets of their proprietary systems and data sets. Thus, AI audits will likely face less resistance.

Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations. Say Netflix recommends The Twilight Zone because I watched Stranger Things. Will it also recommend other science fiction horror shows? Does it recommend The Twilight Zone to everyone who’s watched Stranger Things?
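The kind of external, automated interrogation described above can be pictured in a few lines of code: hold every field of a hypothetical case fixed, flip a single sensitive attribute, and compare the model's outputs. The sketch below does this for a loan-scoring scenario; the model, feature names, and scoring rule are all invented for illustration, standing in for a proprietary system queried through an audit interface.

```python
# A minimal counterfactual audit: query a black-box model with paired
# hypothetical cases that differ only in one sensitive attribute.
# The "model" here is a toy stand-in; a real audit would call an external API.

def toy_loan_model(applicant):
    # Hypothetical scoring rule standing in for a proprietary model.
    score = 0.5
    score += 0.3 if applicant["income"] > 50_000 else -0.1
    score += -0.2 if applicant["neighborhood"] == "redlined" else 0.0
    return score

def counterfactual_audit(model, cases, attribute, value_a, value_b):
    """Return the average output gap when `attribute` flips from a to b."""
    gaps = []
    for case in cases:
        case_a = {**case, attribute: value_a}
        case_b = {**case, attribute: value_b}
        gaps.append(model(case_a) - model(case_b))
    return sum(gaps) / len(gaps)

cases = [{"income": inc, "neighborhood": None} for inc in (30_000, 60_000, 90_000)]
gap = counterfactual_audit(toy_loan_model, cases,
                           "neighborhood", "redlined", "other")
print(f"average score gap: {gap:+.2f}")  # a nonzero gap flags potential bias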

Early examples of auditable AI are already having a positive impact. The ACLU recently revealed that Amazon’s auditable facial-recognition algorithms were nearly twice as likely to misidentify people of color. There is growing evidence that public audits can improve model accuracy for under-represented groups.

In the future, we can envision a robust ecosystem of auditing systems that provide insights into AI. We can even imagine “AI guardians” that build external models of AI systems based on audits. Instead of requiring AI systems to provide low-fidelity explanations, regulators can insist that AI systems used for high-stakes decisions provide auditing interfaces.

Auditable AI is not a panacea. If an AI system is performing a cancer diagnostic, the patient will still want an accurate and understandable explanation, not just an audit. Such explanations are the subject of ongoing research and will hopefully be ready for commercial use in the near future. But in the meantime, auditable AI can increase transparency and combat bias….(More)”.

How Cape Town Used Behavioral Science to Beat Its Water Crisis


Article by Ammaarah Martinus and Faisal Naru: “In March 2018, the metropolitan government of Cape Town, on South Africa’s Western Cape, announced that it had avoided “Day Zero”—the day the dams supplying the city would have reached 13.5 percent capacity, the point at which the water supply to most of the city would be turned off. Earlier in the year, the city had been forecast to hit Day Zero on April 22, 2018.

Fortunately, it didn’t come to this. The city managed to develop a successful water savings campaign that stopped the taps from running dry in Cape Town. Had it failed, residents would have faced severe restrictions on water use, and their daily habits would have been upended. For instance, they would have had to visit water collection sites to meet their basic needs.

The city’s bold and comprehensive communication strategy around Day Zero, which focused on changing behaviors and implementing clever nudges, was a big part of the success story. Here’s how it unfolded….(More)

Timeline: Cape Town’s Water Crisis

Science Philanthropy and Societal Responsibility: A Match Made for the 21st Century


Blog by Evan S. Michelson: “The overlapping crises the world has experienced in 2020 make clear that resources from multiple sectors — government, private sector, and philanthropy — need to be deployed at multiple scales to better address societal challenges. In particular, science philanthropy has stepped up, helping to advance COVID-19 vaccine development, identify solutions to climate change, and make the tools of scientific inquiry more widely available.

As I write in my recently published book, Philanthropy and the Future of Science and Technology (Routledge, 2020), this linkage between science philanthropy and societal responsibility is one that needs to be continually strengthened and advanced as global challenges become more intertwined and as the relationship between science and society becomes more complex. In fact, science philanthropies have an important, yet often overlooked, role in raising the profile of the societal responsibility of research. One way to better understand the role science philanthropies can and should play in society is to draw on the responsible research and innovation (RRI) framework, a concept developed by scholars from fields such as science & technology policy and science & technology studies. Depending on its configuration, the RRI framework has roughly three core dimensions: anticipatory research that is forward-looking and in search of new discoveries, deliberative and inclusive approaches that better engage and integrate members of the public with the research process, and the adoption of reflexive and responsive dispositions by funders (along with those conducting research) to ensure that societal and public values are accounted for and integrated at the outset of a research effort.

Philanthropies that fund research can more explicitly consider this perspective — even just a little bit — when making their funding decisions, thereby helping to better infuse whatever support they provide for individuals, institutions, and networks with attention to broader societal concerns. For instance, doing so not only highlights the need for science philanthropies to identify and support high-quality early career researchers who are pursuing new avenues of science and technology research, but it also raises considerations of diversity, equity, and inclusion as equally important decision-making criteria for funding. The RRI framework also suggests that foundations working in science and technology should not only help to bring together networks of individual scholars and their host institutions, but that the horizon of such collaborations should be actively extended to include practitioners, decision-makers, users, and communities affected by such investigations. Philanthropies can take a further step and reflexively apply these perspectives to how they operate, how they set their strategies and grantmaking priorities, or even in how they directly manage scientific research infrastructure, which some philanthropic institutions have even begun to do within their own institutions….(More)”.

Introducing the Institute of Impossible Ideas


Blog by Dominic Campbell: “…We have an opportunity ahead of us to set up a new model that seeds innovation and keeps it firmly in the public realm. Using entrepreneurial approaches, we can work together not only to deliver better outcomes for citizens for less, but to ideate, create, and build technology-driven, sustainable services that remain in public hands.

Rebooting public services for the 21st century

Conventional wisdom is that the private sector is best placed to drive radical change with its ecosystem of funders, appetite for risk and perceived ability to attract the best and brightest minds. In the private sector, digital companies have disrupted whole industries. Tech startups are usurping the incumbents, improving experiences and reducing costs before expanding and completely transforming the landscape around them.

We’re talking about the likes of Netflix, which started with a new model for movie rentals, turned streaming platform for TV, and is now one of the world’s largest producers of media. Or Airbnb, which got its start renting out a spare room and an air mattress, became one of the largest travel booking platforms, and is now moving into building physical hotels and housing. These are two organisations that saw an opportunity in a market and went on to reinvent a full-stack service.

The entrepreneurial approach has driven rapid innovation in some fields, but private sector outsourcing for the public realm has rarely led to truly radical innovation. That doesn’t stop the practice, and profits remain in private hands. Old models of innovation, either internal and incremental or left to the private sector, aren’t working.

The public sector can, and does, drive innovation. And yet, we continue to see private profits take off from the runway of publicly funded innovation, the state receiving little of the financial reward for the private sector’s increased role in public service delivery….(More)…Find out more about the Institute of Impossible Ideas.

An exploration of Augmented Collective Intelligence


Dark Matter Laboratories: “…As with all so-called wicked problems, the climate crisis occurs at the intersection of human and natural systems, where interdependent components interact at multiple scales causing uncertainty and emergent, erratic fluctuations. Interventions in such systems can trigger disproportionate impacts in other areas due to feedback effects. On top of this, collective action problems, such as identifying and implementing climate crisis adaptation or mitigation strategies, involve trade-offs and conflicting motivations between the different decision-makers. All of this presents challenges when identifying solutions, or even agreeing on a shared definition of the problem.

As is often the case in times of crisis, collective community-led actions have been a vital part of the response to the COVID-19 pandemic. Communities have demonstrated their capacity to mobilise efficiently in areas where the public sector has been either too slow, unable, or unwilling to intervene. Yet, the pandemic has also put into perspective the scale of response required to address the climate crisis. Despite a near-total shutdown of the global economy, annual CO2 emissions are only expected to fall by 5.6% this year, falling short of the 7.6% target required to ensure a temperature rise of no more than 1.5°C. Can AI help amplify and coordinate collective action to the scale necessary for effective climate crisis response? In this post, we explore alternative futures that leverage the significant potential of citizen groups to act at a local level in order to achieve global impact.

Applying AI to climate problems

There are various research collaborations, open challenges, and corporate-led initiatives that already exist in the field of AI and climate crisis. Climate Change AI, for instance, has identified a range of opportunity domains for a selection of machine learning (ML) methods. These applications range from electrical systems and transportation to collective decisions and education. Google.org’s Impact Challenge supports initiatives applying AI for social good, while the AI for Good platform aims to identify practical applications of AI that can be scaled for global impact. These initiatives and many others, such as Project Drawdown, have informed our research into opportunity areas for AI to augment Collective Intelligence.

Throughout the project, we have been wary that attempts to apply AI to complex problems can suffer from technological solutionism, which loses sight of the underlying issues. To try to avoid this, with Civic AI, we have focused on understanding community challenges before identifying which parts of the problem are most suited to AI’s strengths, especially as this is just one of the many tools available. Below, we explore how AI could be used to complement and enhance community-led efforts as part of inclusive civic infrastructures.

We define civic assets as the essential shared infrastructure that benefits communities, such as an urban forest or a community library. We will explore their role in climate crisis mitigation and adaptation. What does a future look like in which these assets are semi-autonomous and highly participatory, fostering collaboration between people and machines?…(More)”.

See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern

ACI Framework — Download pdf

Amsterdam and Helsinki launch algorithm registries to bring transparency to public deployments of AI


Khari Johnson at VentureBeat: “Amsterdam and Helsinki today launched AI registries to detail how each city government uses algorithms to deliver services, becoming some of the first major cities in the world to do so. An AI Register for each city was introduced in beta today as part of the Next Generation Internet Policy Summit, organized in part by the European Commission and the city of Amsterdam. The Amsterdam registry currently features a handful of algorithms, but it will be extended to include all of the city’s algorithms after feedback is collected at the virtual conference, which aims to lay out a European vision of the future of the internet, according to a city official.

Each algorithm cited in the registry lists the datasets used to train the model, a description of how the algorithm is used, how humans act on its predictions, and how the algorithm was assessed for potential bias or risks. The registry also gives citizens a way to provide feedback on the algorithms their local government uses, along with the name, city department, and contact information of the person accountable for an algorithm’s responsible deployment. A complete algorithmic registry can empower citizens, giving them a way to evaluate, examine, or question governments’ applications of AI.
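The fields described above suggest a simple machine-readable schema for a register entry. The mock-up below is purely illustrative: the system name, contact, and field names are invented, and the actual Amsterdam and Helsinki schemas may differ.

```python
# Illustrative mock-up of an algorithm-register entry with the kinds of
# fields the article describes; the real Amsterdam/Helsinki schema may differ.
registry_entry = {
    "name": "Parking permit triage",            # hypothetical system
    "department": "City of Amsterdam, Traffic",  # hypothetical department
    "contact": "responsible.official@example.org",
    "description": "Ranks permit applications for manual review.",
    "training_datasets": ["permit_applications_2018_2019"],
    "human_oversight": "Officials review every ranked case before a decision.",
    "bias_assessment": "Error rates compared across districts and age groups.",
    "citizen_feedback_url": "https://example.org/feedback",
}

# A registry is then just a collection of such entries that anyone can browse.
def find_by_department(registry, department_keyword):
    return [e["name"] for e in registry
            if department_keyword.lower() in e["department"].lower()]

print(find_by_department([registry_entry], "traffic"))  # ['Parking permit triage']
```

Publishing entries in a structured form like this is what makes the register queryable, so citizens and watchdogs can filter by department, dataset, or oversight model rather than reading prose pages one by one.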

In a previous development in the U.S., New York City created an automated decision systems task force in 2017 to document and assess city use of algorithms. At the time it was the first city in the U.S. to do so. However, following the release of a report last year, commissioners on the task force complained about a lack of transparency and inability to access information about algorithms used by city government agencies….

In a statement accompanying the announcement, Helsinki City Data project manager Pasi Rautio said the registry is also aimed at increasing public trust in the kinds of artificial intelligence “with the greatest possible openness.”…(More)”.

Private Sector Data for Humanitarian Response: Closing the Gaps


Jos Berens at Bloomberg New Economy Forum: “…Despite these and other examples, data sharing between the private sector and humanitarian agencies is still limited. Out of 281 contributing organizations on HDX, only a handful come from the private sector. 

So why don’t we see more use of private sector data in humanitarian response? One obvious set of challenges concerns privacy, data protection and ethics. Companies and their customers are often wary of data being used in ways not related to the original purpose of data collection. Such concerns are understandable, especially given the potential legal and reputational consequences of personal data breaches and leaks.

Figuring out how to use this type of sensitive data in an already volatile setting seems problematic, and it is — negotiations between public and private partners in the middle of a crisis often get hung up on a lack of mutual understanding. Data sharing partnerships negotiated during emergencies often fail to mature beyond the design phase. This dynamic creates a loop of inaction due to a lack of urgency in between crises, followed by slow and halfway efforts when action is needed most.

To ensure that private sector data is accessible in an emergency, humanitarian organizations and private sector companies need to work together to build partnerships before a crisis. They can do this by taking the following actions: 

  • Invest in relationships and build trust. Both humanitarian organizations and private sector companies should designate focal points who can quickly identify potentially useful data during a humanitarian emergency. A data stewards network that identifies and connects data responsibility leaders across organizations, as proposed by The GovLab at NYU, is a great example of what such relationships could look like. Efforts to build trust with the general public regarding private sector data use for humanitarian response should also be strengthened, primarily through transparency about the means and purpose of such collaborations. This is particularly important in the context of COVID-19, as noted in the UN Comprehensive Response to COVID-19 and the World Economic Forum’s ‘Great Reset’ initiative…(More)”.

Why Coming Up With Effective Interventions To Address COVID-19 Is So Hard


Article by Neil Lewis Jr.: “It has been hard to measure the effects of the novel coronavirus. Not only is COVID-19 far-reaching — it’s touched nearly every corner of the globe at this point — but its toll on society has also been devastating. It is responsible for the deaths of over 905,000 people around the world, and more than 190,000 people in the United States alone. The associated economic fallout has been crippling. In the U.S., more people lost their jobs in the first three months of the pandemic than in the first two years of the Great Recession. Yes, there are some signs the economy might be recovering, but the truth is, we’re just beginning to understand the pandemic’s full impact, and we don’t yet know what the virus has in store for us.

This is all complicated by the fact that we’re still figuring out how best to combat the pandemic. Without a vaccine readily available, it has been challenging to get people to engage in enough of the behaviors that can help slow the virus. Some policy makers have turned to social and behavioral scientists for guidance, which is encouraging because this doesn’t always happen. We’ve seen many universities ignore the warnings of behavioral scientists and reopen their campuses, only to have to quickly shut them back down.

But this has also meant that there are a lot of new studies to wade through. In the field of psychology alone, between Feb. 10 and Aug. 30, 541 papers about COVID-19 were uploaded to the field’s primary preprint server, PsyArXiv. With so much research coming out so quickly, it’s hard to know what to trust — and I say that as someone who makes a living researching what types of interventions motivate people to change their behaviors.

As I tell my students, if you want to use behavioral science research to address real-world problems, you have to look very closely at the details. Often, a simple question like, “What research should policy makers and practitioners use to help combat the pandemic?” is surprisingly difficult to answer.

For starters, there are often key differences between the lab (or the people and situations some social scientists typically study as part of our day-to-day research) and the real world (or the people and situations policy-makers and practitioners have in mind when crafting interventions).

Take, for example, the fact that social scientists tend to study people from richer countries that are generally highly educated, industrialized, democratic, and in the Western hemisphere. And some social scientific fields (e.g., psychology) focus overwhelmingly on whiter, wealthier, and more highly educated groups of people within those nations.

This is a major issue in the social sciences and something that researchers have been talking about for decades. But it’s important to mention now, too, as Black and brown people have been disproportionately affected by the coronavirus — they are dying at much higher rates than white people and working more of the lower-paying “essential” jobs that expose them to greater risks. Here you can start to see very real research limitations creep in: The people whose lives have been most adversely affected by the virus have largely been excluded from the studies that are supposed to help them. When samples and the methods used are not representative of the real world, it becomes very difficult to reach accurate and actionable conclusions….(More)”.

How Algorithms Can Fight Bias Instead of Entrench It


Essay by Tobias Baer: “…How can we build algorithms that correct for biased data and that live up to the promise of equitable decision-making?

When we consider changing an algorithm to eliminate bias, it is helpful to distinguish what we can change at three different levels (from least to most technical): the decision algorithm, formula inputs, and the formula itself.

In discussing the levels, I will use a fictional example, involving Martians and Zeta Reticulans. I do this because picking a real-life example would, in fact, be stereotyping—I would perpetuate the very biases I try to fight by reiterating a simplified version of the world, and every time I state that a particular group of people is disadvantaged, I also can negatively affect the self-perception of people who consider themselves members of these groups. I do apologize if I unintentionally insult any Martians reading this article!

On the simplest and least technical level, we would adjust only the overall decision algorithm. That algorithm takes one or more statistical formulas as inputs (typically predictions of unknown outcomes such as academic success, recidivism, or marital bliss) and applies rules to translate those predictions into decisions (e.g., by comparing predictions with externally chosen cutoff values, or by contextually picking one prediction over another). Such rules can be adjusted without touching the statistical formulas themselves.

An example of such an intervention is called boxing. Imagine you have a score of astrological ability. The astrological ability score is a key criterion for shortlisting candidates for the Interplanetary Economic Forecasting Institute. You would have no objective reason to believe that Martians are any less apt at prognosticating white noise than Zeta Reticulans; however, due to racial prejudice in our galaxy, Martian children tend to get asked a lot less for their opinion and therefore have a lot less practice in gabbing than Zeta Reticulans, and as a result only one percent of Martian applicants achieve the minimum score required to be hired for the Interplanetary Economic Forecasting Institute as compared to three percent of Zeta Reticulans.

Boxing would posit that for hiring decisions to be neutral of race, for each race two percent of applicants should be eligible, and boxing would achieve it by calibrating different cut-off scores (i.e., different implied probabilities of astrological success) for Martians and Zeta Reticulans.
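Boxing, as described, amounts to choosing a separate cutoff for each group so that the same share of each group clears the bar. A minimal sketch, using made-up score distributions in which Martians score lower only because of unequal practice, not unequal aptitude:

```python
import random

random.seed(42)

# Synthetic ability scores: Martians get less practice, so their scores run
# lower, even though true aptitude is assumed equal across the two groups.
scores = {
    "martian":        [random.gauss(50, 10) for _ in range(10_000)],
    "zeta_reticulan": [random.gauss(60, 10) for _ in range(10_000)],
}

def boxed_cutoffs(scores_by_group, eligible_share):
    """Pick a per-group cutoff so `eligible_share` of each group qualifies."""
    cutoffs = {}
    for group, vals in scores_by_group.items():
        ranked = sorted(vals)
        k = int(len(ranked) * (1 - eligible_share))  # index of the cutoff score
        cutoffs[group] = ranked[k]
    return cutoffs

cutoffs = boxed_cutoffs(scores, eligible_share=0.02)  # top 2% of each group
for group, cutoff in cutoffs.items():
    share = sum(s >= cutoff for s in scores[group]) / len(scores[group])
    print(f"{group}: cutoff {cutoff:.1f}, share eligible {share:.3f}")
```

The Martian cutoff comes out lower than the Zeta Reticulan cutoff, which is exactly the point: the decision rule absorbs the bias in the score distribution while leaving the underlying scoring formula untouched.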

Another example of a level-one adjustment would be to use multiple rank-ordering scores and to admit everyone who achieves a high score on any one of them. This approach is particularly well suited if you have different methods of assessment at your disposal, but each method implies a particular bias against one or more subsegments. An example of a crude version of this approach is admissions to medical school in Germany, where the routes include college grades, a qualitative assessment through an interview, and a waitlist….(More)”.
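The multiple-score route can be sketched the same way: compute several independent rank-orderings and admit anyone who clears the bar on at least one of them. The route names and percentile thresholds below are invented for illustration, loosely mirroring the German medical-school example.

```python
# Admit an applicant who ranks highly on ANY of several assessment routes.
# Route names and percentile thresholds are invented for illustration.
ROUTES = {
    "grades":    0.90,   # top decile on college grades
    "interview": 0.85,   # strong qualitative assessment
    "wait_time": 0.95,   # long enough on the waitlist
}

def admitted(applicant_percentiles):
    """True if the applicant clears the threshold on at least one route."""
    return any(applicant_percentiles.get(route, 0.0) >= cutoff
               for route, cutoff in ROUTES.items())

print(admitted({"grades": 0.95, "interview": 0.40}))                     # True
print(admitted({"grades": 0.70, "interview": 0.88}))                     # True
print(admitted({"grades": 0.70, "interview": 0.40, "wait_time": 0.50}))  # False
```

Because each route carries a different bias, the OR-combination gives an applicant disadvantaged by one assessment method a chance to qualify through another, without modifying any single scoring formula.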