Poor data on groundwater jeopardizes climate resilience


Rebecca Root at Devex: “A lack of data on groundwater is impeding water management and could jeopardize climate resilience efforts in some places, according to recent research by WaterAid and the HSBC Water Programme.

Groundwater is water held underground in the gaps between soil, sand, and rock. Over 2.5 billion people are thought to depend on groundwater — which is more resilient to drought than other water sources — for drinking.

The report looked at groundwater security and sustainability in Bangladesh, Ghana, India, Nepal, and Nigeria, where collectively more than 160 million people lack access to clean water close to home. It found that groundwater data tends to be limited — including on issues such as overextraction, pollution, and contamination — leaving little evidence for decision-makers to consider for its management.

“There’s a general lack of information and data … which makes it very hard to manage the resource sustainably,” said Vincent Casey, senior water, sanitation, and hygiene manager for waste at WaterAid…(More)”.

Data Sharing 2.0: New Data Sharing, New Value Creation


MIT CISR research: “…has found that interorganizational data sharing is a top concern of companies; leaders often find data sharing costly, slow, and risky. Interorganizational data sharing, however, is requisite for new value creation in the digital economy. Digital opportunities require data sharing 2.0: cross-company sharing of complementary data assets and capabilities, which fills data gaps and allows companies, often collaboratively, to develop innovative solutions. This briefing introduces three sets of practices—curated content, designated channels, and repeatable controls—that help companies accelerate data sharing 2.0….(More)”.

Common Pitfalls in the Interpretation of COVID-19 Data and Statistics


Paper by Andreas Backhaus: “…In the public debate, one can encounter at least three concepts that measure the deadliness of SARS-CoV-2: the case fatality rate (CFR), the infection fatality rate (IFR) and the mortality rate (MR). Unfortunately, these three concepts are sometimes used interchangeably, which creates confusion as they differ from each other by definition.

In its simplest form, the case fatality rate divides the total number of confirmed deaths from COVID-19 by the total number of confirmed infections with SARS-CoV-2, setting aside adjustments for future deaths among currently active cases. However, the number of confirmed cases is believed to severely underestimate the true number of infections, owing to the asymptomatic course of the infection in many individuals and to limited testing capacity. Hence, the CFR presumably represents an upper bound on the true lethality of SARS-CoV-2, as its denominator does not account for undetected infections.

The infection fatality rate seeks to represent the lethality more accurately by incorporating the number of undetected infections, or at least an estimate thereof, into its calculation. Consequently, the IFR divides the total number of confirmed deaths from COVID-19 by the total number of infections with SARS-CoV-2. Due to its larger denominator and identical numerator, the IFR is lower than the CFR. The IFR represents a crucial parameter in epidemiological simulation models, such as that presented by Ferguson et al. (2020), as it determines the number of expected fatalities given the simulated spread of the disease among the population.

The methodological challenge regarding the IFR is, of course, to find a credible estimate of the undetected cases of infection. An early estimate of the IFR was provided on the basis of data collected in the course of the SARS-CoV-2 outbreak on the Diamond Princess cruise ship in February 2020. Mizumoto et al. (2020) estimate that 17.9% (95% confidence interval: 15.5-20.2) of the cases were asymptomatic. Russell et al. (2020), after adjusting for age, estimate that the IFR among the Diamond Princess cases is 1.3% (95% confidence interval: 0.38-3.6) when considering all cases, but 6.4% (95% confidence interval: 2.6–13) when considering only cases of patients that are 70 years and older. The serological studies that are currently being conducted in several countries and localities serve to provide more estimates of the true number of infections with SARS-CoV-2 that have occurred over the past few months….(More)”.
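
The relationship between the two rates can be illustrated with a short calculation. The numbers below are hypothetical, chosen only to show that an identical numerator over a larger denominator yields a lower rate:

```python
# Illustrative sketch with hypothetical numbers (not real surveillance data):
# the CFR divides confirmed deaths by confirmed cases, while the IFR divides
# the same deaths by an estimate of all infections, detected or not.

def case_fatality_rate(deaths: int, confirmed_cases: int) -> float:
    """CFR: deaths per confirmed case."""
    return deaths / confirmed_cases

def infection_fatality_rate(deaths: int, estimated_infections: int) -> float:
    """IFR: deaths per estimated infection (detected + undetected)."""
    return deaths / estimated_infections

deaths = 100
confirmed = 2_000          # hypothetical confirmed case count
undetected_factor = 5      # hypothetical: 5x as many infections as confirmed cases

cfr = case_fatality_rate(deaths, confirmed)
ifr = infection_fatality_rate(deaths, confirmed * undetected_factor)

# The IFR is lower purely because its denominator is larger.
print(f"CFR = {cfr:.1%}, IFR = {ifr:.1%}")  # prints: CFR = 5.0%, IFR = 1.0%
```

The same mechanism explains why serological studies, by raising the estimate of total infections, tend to push published IFR estimates below the headline CFR.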

Digital Minilateralism: How governments cooperate on digital governance


Report from the Digital State Project: “…argues for the critical function of small, agile, digitally enabled and focused networks of leaders to foster strong international cooperation on digital governance issues.

This type of cooperative working, described as ‘digital minilateralism’, has a role to play in shaping how individual governments learn, adopt and govern the use of new and emerging technologies, and how they create common or aligned policies. It is also important as cross-border digital infrastructure and services become increasingly common.

The policy paper, co-authored by Dr. Tanya Filer, who leads the Digital State project, and Dr. Antonio Weiss, affiliated researcher, draws on the example of the Digital Nations, a network of 10 ‘leading digital’ countries, to advance understanding of how digital leaders and policymakers can best develop and use minilateral networks, and of the particular affordances that this approach offers.

Key findings: 

  • Already beginning to prove effective, digital minilateralism has a role to play in shaping how individual governments learn, adopt and govern the use of new and emerging technologies, and how they create common or aligned policy.
  • National governments should recognise and reinforce the strategic value of digital minilaterals without stamping out, through over-bureaucratisation, the qualities of trust, open conversation, and ad-hocness in which their value lies.
  • As digital minilateral networks grow and mature, they will need to find mechanisms through which to retain (or adapt) their core principles while scaling across more boundaries.
  • To demonstrate their value to the global community, digital minilaterals must feed into formal multilateral conversations and arrangements….(More)”.

Responsible group data for children


Issue Brief by Andrew Young: “Understanding how and why group data are collected and what can be done to protect children’s rights… While the data protection field largely focuses on individual data harms, that focus obscures and can exacerbate the risks data pose to groups of people, such as the residents of a particular village, rather than to individuals.

Though not well-represented in the current responsible data literature and policy domains writ large, the challenges group data poses are immense. Moreover, the unique and amplified group data risks facing children are even less scrutinized and understood.

To achieve Responsible Data for Children (RD4C) and ensure effective and legitimate governance of children’s data, government policymakers, data practitioners, and institutional decision makers need to ensure children’s group data are a core consideration in all relevant policies, procedures, and practices….(More)”. (See also Responsible Data for Children).

The Pandemic’s Digital Shadow


Freedom House: “The coronavirus pandemic is accelerating a dramatic decline in global internet freedom. For the 10th consecutive year, users have experienced an overall deterioration in their rights, and the phenomenon is contributing to a broader crisis for democracy worldwide.

In the COVID-19 era, connectivity is not a convenience, but a necessity. Virtually all human activities—commerce, education, health care, politics, socializing—seem to have moved online. But the digital world presents distinct challenges for human rights and democratic governance. State and nonstate actors in many countries are now exploiting opportunities created by the pandemic to shape online narratives, censor critical speech, and build new technological systems of social control.

Three notable trends punctuated an especially dismal year for internet freedom. First, political leaders used the pandemic as a pretext to limit access to information. Authorities often blocked independent news sites and arrested individuals on spurious charges of spreading false news. In many places, it was state officials and their zealous supporters who actually disseminated false and misleading information with the aim of drowning out accurate content, distracting the public from ineffective policy responses, and scapegoating certain ethnic and religious communities. Some states shut off connectivity for marginalized groups, extending and deepening existing digital divides. In short, governments around the world failed in their obligation to promote a vibrant and reliable online public sphere.

Second, authorities cited COVID-19 to justify expanded surveillance powers and the deployment of new technologies that were once seen as too intrusive. The public health crisis has created an opening for the digitization, collection, and analysis of people’s most intimate data without adequate protections against abuses. Governments and private entities are ramping up their use of artificial intelligence (AI), biometric surveillance, and big-data tools to make decisions that affect individuals’ economic, social, and political rights. Crucially, the processes involved have often lacked transparency, independent oversight, and avenues for redress. These practices raise the prospect of a dystopian future in which private companies, security agencies, and cybercriminals enjoy easy access not only to sensitive information about the places we visit and the items we purchase, but also to our medical histories, facial and voice patterns, and even our genetic codes.

The third trend has been the transformation of a slow-motion “splintering” of the internet into an all-out race toward “cyber sovereignty,” with each government imposing its own internet regulations in a manner that restricts the flow of information across national borders. For most of the period since the internet’s inception, business, civil society, and government stakeholders have participated in a consensus-driven process to harmonize technical protocols, security standards, and commercial regulation around the world. This approach allowed for the connection of billions of people to a global network of information and services, with immeasurable benefits for human development, including new ways to hold powerful actors to account….(More)”.

Cracking the code: Rulemaking for humans and machines


OECD Paper by James Mohun and Alex Roberts: “Rules as Code (RaC) is an exciting concept that rethinks one of the core functions of governments: rulemaking. It proposes that governments create an official version of rules (e.g. laws and regulations) in a machine-consumable form, which allows rules to be understood and actioned by computer systems in a consistent way. More than simply a technocratic solution, RaC represents a transformational shift in how governments create rules, and how third parties consume them. Across the world, public sector teams are exploring the concept and its potential as a response to an increasingly complex operating environment and growing pressures on incumbent rulemaking systems. Cracking the Code is intended to help those working both within and outside of government to understand the potential, limitations and implications of RaC, as well as how it could be applied in a public service context….(More)”.
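
To make the concept concrete, here is a minimal sketch of what a machine-consumable rule might look like. The benefit name, thresholds, and field names are invented for illustration; real RaC initiatives typically publish rules through dedicated platforms such as OpenFisca rather than ad-hoc scripts:

```python
# A minimal sketch of the Rules-as-Code idea: a (hypothetical) eligibility
# rule expressed as code, so that every consuming system applies it the
# same way. All names and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float

def eligible_for_benefit(a: Applicant) -> bool:
    """Hypothetical rule: applicants aged 65+ with income under 30,000 qualify."""
    return a.age >= 65 and a.annual_income < 30_000

# Because the rule is code rather than prose, third-party systems can
# consume and apply it consistently:
print(eligible_for_benefit(Applicant(age=70, annual_income=25_000)))  # prints: True
print(eligible_for_benefit(Applicant(age=40, annual_income=25_000)))  # prints: False
```

The point of RaC is that a single official, testable encoding like this would replace each consumer writing its own interpretation of the prose rule.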

Catastrophes of the 21st Century


Paper by Roger Pielke: “There are few ways to better display our ignorance than by speculating on the long-term future. At the same time, making wise decisions depends on both anticipating an uncertain future and recognising the limits of what we can know. This paper takes a broad look at global trends in place today, where they may be taking us, and the implications for thinking about catastrophes of the 21st century. I suggest three types of catastrophes lie ahead. The familiar – hazards that we have come to expect based on experience and knowledge, such as earthquakes and typhoons. The emergent – hazards that are the product of a complex, interconnected world, such as financial meltdowns, supply chain disruptions and epidemics. The extraordinary – hazards that may or may not be foreseen or foreseeable, but for which we are wholly unprepared, such as an asteroid impact, a massive solar storm, or even fantastic scenarios found only in fiction, such as the consequences of contact with alien life.

I will argue that our collective attention and expertise are, perhaps understandably, disproportionately focused on the familiar. The consequence, however, is a sort of intellectual myopia. We know more than we think about the familiar and less than we should about the emergent and the extraordinary. Yet our ability to deal with the hazards of the future likely depends much more on our ability to prepare for the emergent and the extraordinary. The paper concludes with recommendations for what a robust and resilient global society might look like in the face of known, unknown and unknowable risks of the 21st century….(More)”.

Regulatory Technology


Paper by the Productivity Commission (Australia): “Regulatory technology (‘regtech’) is the use of technology to better achieve regulatory objectives. Used well, it can support the improved targeting of regulation and reduce the costs of administration and compliance.

While regtech can improve regulatory outcomes and reduce costs, it is not a substitute for regulatory reform. Indeed, as regtech is intended to make the task of regulating easier, advances in technology heighten the onus on policy makers to ensure the need for, and design of, regulation are soundly‑based….

Leading‑edge regtech involves the use of data for predictive analytics and real time monitoring, enabling better regulatory outcomes and potentially fewer compliance burdens for businesses. But advanced regtech requires specialised resources and long development times.

Even in low‑tech applications, widespread implementation of regtech can take some years. It can require substantial investment by regulators and businesses in capacity and cultural change, while (as with technology solutions generally) quantifying the scale and timing of the benefits can be difficult.

There are four key areas where regtech solutions may be particularly beneficial:

  • where regulatory environments are particularly complex to navigate and monitor
  • where there is scope to improve risk‑based regulatory approaches, thereby targeting the compliance burden and regulator efforts
  • where technology can enable better monitoring, including by overcoming constraints related to physical presence
  • where technology can safely unlock more uses of data for regulatory compliance.
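
As a minimal illustration of the monitoring idea in the list above, the sketch below flags records that breach a regulatory threshold for follow-up. The field names and threshold are hypothetical; real deployments would work against live transaction feeds rather than an in-memory list:

```python
# A deliberately low-tech sketch of regtech-style monitoring: scan records
# and flag any that breach a reporting threshold. The threshold value and
# record fields are hypothetical, for illustration only.

THRESHOLD = 10_000  # hypothetical reporting threshold

transactions = [
    {"id": "t1", "amount": 4_500},
    {"id": "t2", "amount": 12_000},
    {"id": "t3", "amount": 9_999},
]

# Collect the IDs of transactions above the threshold for regulator review.
flagged = [t["id"] for t in transactions if t["amount"] > THRESHOLD]
print(flagged)  # prints: ['t2']
```

More advanced regtech, as the paper notes, layers predictive analytics and real-time data feeds on top of simple rule checks like this one.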

Creating and maintaining a regulatory environment that supports the realisation of regtech benefits would mean:

  • improving the consistency and structure of data and the interoperability of, and standards for, technology — these are precursors to wider regtech adoption
  • investing in the technical skills and capabilities of regulators to enable measured steps in regtech adoption
  • determining accountability for outcomes associated with regtech solutions, including with regard to privacy, data security, and responsibility for resolving disputed outcomes
  • reviewing regulation to remove technology‑specific requirements that could prevent the take‑up of beneficial regtech solutions
  • creating familiarity with the possibilities of regtech (for example, through liaison forums and trials), facilitating collaboration between regulators, regulated entities and regtech developers, and establishing safe environments to develop and test regtech solutions….(More)”.

An exploration of Augmented Collective Intelligence


Dark Matter Laboratories: “…As with all so-called wicked problems, the climate crisis occurs at the intersection of human and natural systems, where interdependent components interact at multiple scales causing uncertainty and emergent, erratic fluctuations. Interventions in such systems can trigger disproportionate impacts in other areas due to feedback effects. On top of this, collective action problems, such as identifying and implementing climate crisis adaptation or mitigation strategies, involve trade-offs and conflicting motivations between the different decision-makers. All of this presents challenges when identifying solutions, or even agreeing on a shared definition of the problem.

As is often the case in times of crisis, collective community-led actions have been a vital part of the response to the COVID-19 pandemic. Communities have demonstrated their capacity to mobilise efficiently in areas where the public sector has been too slow, unable, or unwilling to intervene. Yet the pandemic has also put into perspective the scale of response required to address the climate crisis. Despite a near-total shutdown of the global economy, annual CO2 emissions are expected to fall by only 5.6% this year, short of the 7.6% annual reduction required to limit the temperature rise to 1.5°C. Can AI help amplify and coordinate collective action at the scale necessary for an effective climate crisis response? In this post, we explore alternative futures that leverage the significant potential of citizen groups to act at a local level in order to achieve global impact.

Applying AI to climate problems

There are various research collaborations, open challenges, and corporate-led initiatives that already exist in the field of AI and climate crisis. Climate Change AI, for instance, has identified a range of opportunity domains for a selection of machine learning (ML) methods. These applications range from electrical systems and transportation to collective decisions and education. Google.org’s Impact Challenge supports initiatives applying AI for social good, while the AI for Good platform aims to identify practical applications of AI that can be scaled for global impact. These initiatives and many others, such as Project Drawdown, have informed our research into opportunity areas for AI to augment Collective Intelligence.

Throughout the project, we have been wary that attempts to apply AI to complex problems can suffer from technological solutionism, which loses sight of the underlying issues. To try to avoid this, with Civic AI, we have focused on understanding community challenges before identifying which parts of the problem are most suited to AI’s strengths, especially as this is just one of the many tools available. Below, we explore how AI could be used to complement and enhance community-led efforts as part of inclusive civic infrastructures.

We define civic assets as the essential shared infrastructure that benefits communities, such as an urban forest or a community library, and we explore their role in climate crisis mitigation and adaptation. What does a future look like in which these assets are semi-autonomous and highly participatory, fostering collaboration between people and machines?…(More)”.

See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern

ACI Framework — Download pdf