Computing the Climate: How We Know What We Know About Climate Change


Book by Steve M. Easterbrook: “How do we know that climate change is an emergency? How did the scientific community reach this conclusion all but unanimously, and what tools did they use to do it? This book tells the story of climate models, tracing their history from nineteenth-century calculations on the effects of greenhouse gases, to modern Earth system models that integrate the atmosphere, the oceans, and the land using the full resources of today’s most powerful supercomputers. Drawing on the author’s extensive visits to the world’s top climate research labs, this accessible, non-technical book shows how computer models help to build a more complete picture of Earth’s climate system. ‘Computing the Climate’ is ideal for anyone who has wondered where the projections of future climate change come from – and why we should believe them…(More)”.

Wastewater monitoring: ‘the James Webb Telescope for population health’


Article by Exemplars News: “When the COVID-19 pandemic triggered a lockdown across Bangladesh and her research on environmental exposure to heavy metals became impossible to continue, Dr. Rehnuma Haque began a search for some way she could contribute to the pandemic response.

“I knew I had to do something during COVID,” said Dr. Haque, a research scientist at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b). “I couldn’t just sit at home.”

Then she stumbled upon articles on early wastewater monitoring efforts for COVID in Australia, the Netherlands, Italy, and the United States. “When I read those papers, I was so excited,” said Dr. Haque. “I emailed my supervisor, Dr. Mahbubur Rahman, and said, ‘Can we do this?’”

Two months later, in June 2020, Dr. Haque and her colleagues had launched one of the most robust and earliest national wastewater surveillance programs for COVID in a low- or middle-income country (LMIC).

The initiative, which has now been expanded to monitor for cholera, salmonella, and rotavirus, and may soon be expanded further to cover norovirus and antibiotic resistance, demonstrates the power and potential of wastewater surveillance as a low-cost tool for obtaining meaningful, real-time health data at scale to identify emerging risks and guide public health responses.

“It is improving public health outcomes,” said Dr. Haque. “We can see everything going on in the community through wastewater surveillance. You can find everything you are looking for and then prepare a response.”

A single wastewater sample can yield representative data about an entire ward, town, or county and allow LMICs to monitor for emerging pathogens. Compared with clinical data, wastewater data is easier and cheaper to collect, can capture asymptomatic infections and detect infections before symptoms arise, raises fewer ethical concerns, is more inclusive and less prone to sampling biases, can generate a broader range of data, and is unrivaled at quickly generating population-level data…(More)” – See also: The #Data4Covid19 Review

The danger of building strong narratives on weak data


Article by John Burn-Murdoch: “Measuring gross domestic product is extremely complicated. Around the world, national statistics offices are struggling to get the sums right the first time around.

Some struggle more than others. When Ireland first reported its estimate for GDP growth in Q1 2015, it came in at 1.4 per cent. One year later, and with some distinctly unusual distortions due to its role as headquarters for many US big tech and pharma companies, this was revised upwards to an eye-watering 21.4 per cent.

On average, five years after an estimate of quarterly Irish GDP growth is first published, the latest revision of that figure is two full percentage points off the original value. The equivalent for the UK is almost 10 times smaller at 0.25 percentage points, making the ONS’s initial estimates among the most accurate in the developed world, narrowly ahead of the US at 0.26 and well ahead of the likes of Japan (0.46) and Norway (0.56).

But it’s not just the size of revisions that matters, it’s the direction. Out of 24 developed countries that consistently report quarterly GDP revisions to the OECD, the UK’s initial estimates are the most pessimistic. Britain’s quarterly growth figures typically end up 0.15 percentage points higher than first thought. The Germans go up by 0.07 on average, the French by 0.04, while the Americans, ever optimistic, typically end up revising their estimates down by 0.11 percentage points.

In other words, next time you hear a set of quarterly growth figures, it wouldn’t be unreasonable to mentally add 0.15 to the UK one and subtract 0.11 from the US.
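
For readers who want the two summary statistics made explicit, here is a minimal Python sketch with made-up figures (not real ONS or OECD data): the mean absolute revision captures the size of revisions, while the mean signed revision captures their direction.

```python
# Hypothetical first-estimate and latest-revision quarterly GDP growth
# figures, in percentage points. Illustrative numbers only.
initial = [0.2, 0.4, -0.1, 0.3]
revised = [0.5, 0.3, 0.2, 0.4]

revisions = [r - i for i, r in zip(initial, revised)]

# Size of revisions: mean absolute revision (how far off first estimates are).
mean_abs = sum(abs(x) for x in revisions) / len(revisions)

# Direction of revisions: mean signed revision (systematic bias).
# Positive means initial estimates were systematically pessimistic.
mean_signed = sum(revisions) / len(revisions)

print(f"mean absolute revision: {mean_abs:.2f} pp")   # 0.20
print(f"mean signed revision:   {mean_signed:+.2f} pp")  # +0.15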

This may all sound like nerdy detail, but it matters because people graft strong narratives on to this remarkably flimsy data. Britain was the only G7 economy yet to rebound past pre-Covid levels, until it wasn’t. Ireland is booming, apparently, except that its actual individual consumption per capita — a much better measure of living standards than GDP — has fallen steadily from just above the western European average in 2007 to 10 per cent below it last year.

And the phenomenon is not exclusive to economic data. Two years ago, progressives critical of the government’s handling of the pandemic took to calling the UK “Plague Island”, citing Britain’s reported Covid death rates, which were among the highest in the developed world. But with the benefit of hindsight, we know that Britain was simply better at counting its deaths than most countries…(More)”

Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models


Paper by Pengfei Li, Jianyi Yang, Mohammad A. Islam, Shaolei Ren: “The growing carbon footprint of artificial intelligence (AI) models, especially large ones such as GPT-3 and GPT-4, has been undergoing public scrutiny. Unfortunately, the equally important and enormous water footprint of AI models has remained under the radar. For example, training GPT-3 in Microsoft’s state-of-the-art U.S. data centers can directly consume 700,000 liters of clean freshwater (enough for producing 370 BMW cars or 320 Tesla electric vehicles), and water consumption would have tripled had training been done in Microsoft’s Asian data centers, but such information has been kept secret. This is extremely concerning, as freshwater scarcity has become one of the most pressing challenges shared by all of us amid rapidly growing populations, depleting water resources, and aging water infrastructure. To respond to global water challenges, AI models can, and also should, take social responsibility and lead by example by addressing their own water footprint. In this paper, we provide a principled methodology to estimate the fine-grained water footprint of AI models, and also discuss the unique spatial-temporal diversities of AI models’ runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI…(More)”.
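
The paper’s headline numbers come from combining a data center’s on-site cooling water with the off-site water embedded in the electricity it draws. Below is a minimal Python sketch of that style of accounting; the function name and all parameter values are illustrative assumptions, not the paper’s measured figures.

```python
# A sketch of operational water accounting in the spirit of the paper:
# on-site water evaporated for cooling plus off-site water consumed in
# generating the electricity. All values below are illustrative assumptions.

def training_water_liters(server_energy_kwh: float,
                          wue_onsite: float,    # liters of cooling water per kWh of server energy
                          pue: float,           # power usage effectiveness of the data center
                          ewif_offsite: float,  # liters per kWh embedded in grid electricity
                          ) -> float:
    """Estimate direct (on-site) plus indirect (off-site) water for a training run."""
    onsite = server_energy_kwh * wue_onsite            # cooling-tower evaporation
    offsite = server_energy_kwh * pue * ewif_offsite   # water consumed at power plants
    return onsite + offsite

# Hypothetical large training run: ~1.3 GWh of server energy.
print(f"{training_water_liters(1_300_000, wue_onsite=0.55, pue=1.2, ewif_offsite=1.8):,.0f} liters")
```

The split matters because, as the paper notes, the same workload consumes very different amounts of water depending on where and when it runs: both the cooling-water intensity and the grid’s water intensity vary by location and time of day.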

Valuing Data: Where Are We, and Where Do We Go Next?


Article by Tim Sargent and Laura Denniston: “The importance of data as a driver of technological advancement cannot be overstated, but how can it be measured? This paper looks at measuring the value of data in national accounts using three different categories of data-related assets: data itself, databases and data science. The focus then turns to three recent studies by statistical agencies in Canada, the Netherlands and the United States to examine how each country uses a cost-based analysis to value data-related assets. Although there are superior ways of valuing data (the income-based and market-based methods, as well as a hybrid approach), the authors find that these will be difficult to implement. The paper concludes with recommendations that include widening data-valuation efforts to the public sector, which is a major holder of data. The social value of data also needs to be calculated by considering both the positive and negative aspects of data-related investment and use. Appropriate data governance strategies are needed to ensure that data is being used for everyone’s benefit…(More)”.
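
As a rough illustration of the cost-based approach the three agencies share, the Python sketch below values a data asset at its accumulated, depreciated production cost. The straight-line depreciation, the five-year service life, and the spending figures are all illustrative assumptions, not any agency’s actual parameters.

```python
# Minimal sum-of-costs (cost-based) valuation sketch: value the data asset
# at the depreciated cost of producing it. Illustrative assumptions only.

def cost_based_value(annual_investment: list[float],
                     service_life_years: int) -> float:
    """Net stock of a data asset under straight-line depreciation."""
    value = 0.0
    # Walk from the most recent year's spending backwards in time.
    for years_ago, spend in enumerate(reversed(annual_investment)):
        remaining = max(0, service_life_years - years_ago)
        value += spend * remaining / service_life_years
    return value

# Five years of spending on data, databases, and data science ($ millions),
# most recent year last; assume a 5-year service life.
print(cost_based_value([10, 12, 11, 14, 15], service_life_years=5))  # 39.6
```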

Mapping the landscape of data intermediaries


Report by the European Commission’s Joint Research Centre: “…provides a landscape analysis of key emerging types of data intermediaries. It reviews and synthesizes current academic and policy literature with the goal of identifying shared elements and definitions. An overall objective is to contribute to establishing a common vocabulary among EU policy makers, experts, and practitioners. Six types are presented in detail: personal information management systems (PIMS), data cooperatives, data trusts, data unions, data marketplaces, and data sharing pools. For each one, the report provides information about how it works, its main features, key examples, and business model considerations. The report is grounded in multiple perspectives from sociological, legal, and economic disciplines. The analysis is informed by the notion of inclusive data governance, contextualised in the recent EU Data Governance Act, and problematised according to the economic literature on business models.

The findings highlight the fragmentation and heterogeneity of the field. Data intermediaries range from individualistic and business-oriented types to more collective and inclusive models that support greater engagement in data governance: while certain types aim at facilitating economic transactions between data holders and users, others mainly seek to produce collective benefits or public value. In its conclusions, the report derives a series of takeaways regarding the main obstacles faced by data intermediaries and identifies lines of empirical work in this field…(More)”.

AI could choke on its own exhaust as it fills the web


Article by Ina Fried and Scott Rosenberg: “The internet is beginning to fill up with more and more content generated by artificial intelligence rather than human beings, posing weird new dangers both to human society and to the AI programs themselves.

What’s happening: Experts estimate that AI-generated content could account for as much as 90% of information on the internet in a few years’ time, as ChatGPT, Dall-E and similar programs spill torrents of verbiage and images into online spaces.

  • That’s happening in a world that hasn’t yet figured out how to reliably label AI-generated output and differentiate it from human-created content.

The danger to human society is the now-familiar problem of information overload and degradation.

  • AI turbocharges the ability to create mountains of new content while it undermines the ability to check that material for reliability and recycles biases and errors in the data that was used to train it.
  • There’s also widespread fear that AI could undermine the jobs of people who create content today, from artists and performers to journalists, editors and publishers. The current strike by Hollywood actors and writers underlines this risk.

The danger to AI itself is newer and stranger. A raft of recent research papers have introduced a novel lexicon of potential AI disorders that are just coming into view as the technology is more widely deployed and used.

  • “Model collapse” is researchers’ name for what happens to generative AI models, like OpenAI’s GPT-3 and GPT-4, when they’re trained using data produced by other AIs rather than human beings.
  • Feed a model enough of this “synthetic” data, and the quality of the AI’s answers can rapidly deteriorate, as the systems lock in on the most probable word choices and discard the “tail” choices that keep their output interesting (a toy simulation of this dynamic follows this list).
  • “Model Autophagy Disorder,” or MAD, is the name a set of researchers at Rice and Stanford universities gave to the result of AI consuming its own products.
  • “Habsburg AI” is what another researcher earlier this year labeled the phenomenon, likening it to inbreeding: “A system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.”…(More)”.
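
The dynamic behind all three labels can be seen in a toy Python simulation: “train” each generation (a maximum-likelihood fit) only on a finite sample from the previous generation’s output, and rare tail words drift toward zero, and once gone, never come back. This is a hand-rolled sketch of the idea, not a reproduction of any of the papers’ experiments.

```python
# Toy model-collapse loop: each generation re-estimates the word
# distribution from a finite sample of the previous generation's output.
# Tail words can vanish; a zero-probability word is never sampled again.
import random
from collections import Counter

vocab = ["the", "cat", "sat", "on", "a", "quixotic", "zephyr"]
probs = [0.30, 0.20, 0.18, 0.15, 0.12, 0.03, 0.02]  # last two are the "tail"

random.seed(0)
for generation in range(10):
    print(f"gen {generation}: tail mass = {probs[-2] + probs[-1]:.3f}")
    sample = random.choices(vocab, weights=probs, k=100)  # previous model's output
    counts = Counter(sample)
    probs = [counts[w] / len(sample) for w in vocab]      # next model's estimate
```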

Toward Bridging the Data Divide


Blog by Randeep Sudan, Craig Hammer, and Yaroslav Eferin: “Developing countries face a data conundrum. Despite more data being available worldwide than ever before, low- and middle-income countries often lack adequate access to valuable data and struggle to fully use the data they have.

This seemingly paradoxical situation represents a data divide. The terms “digital divide” and “data divide” are often used interchangeably but differ. The digital divide is the gap between those with access to digital technologies and those without access. On the other hand, the data divide is the gap between those who have access to high-quality data and those who do not. The data divide can negatively skew development across countries and therefore is a serious issue that needs to be addressed…

The effects of the data divide are alarming, with low- and middle-income countries getting left behind. McKinsey estimates that 75% of the value that could be created through Generative AI (such as ChatGPT) would be in four areas of economic activity: customer operations, marketing and sales, software engineering, and research and development. They further estimate that Generative AI could add between $2.6 trillion and $4.4 trillion in value in these four areas.

PwC estimates that approximately 70% of all economic value generated by AI will likely accrue to just two countries: the USA and China. These two countries account for nearly two-thirds of the world’s hyperscale data centers, high rates of 5G adoption, the highest number of AI researchers, and the most funding for AI startups. This situation raises serious concerns about growing global disparities in access to the benefits of data collection and processing, and to the related generation of insights and opportunities. These disparities will only increase over time without deliberate efforts to counteract this imbalance…(More)”

The Coming Wave


Book by Mustafa Suleyman and Michael Bhaskar: “Soon you will live surrounded by AIs. They will organise your life, operate your business, and run core government services. You will live in a world of DNA printers and quantum computers, engineered pathogens and autonomous weapons, robot assistants and abundant energy.

None of us are prepared.

As co-founder of the pioneering AI company DeepMind, part of Google, Mustafa Suleyman has been at the centre of this revolution. The coming decade, he argues, will be defined by this wave of powerful, fast-proliferating new technologies.

In The Coming Wave, Suleyman shows how these forces will create immense prosperity but also threaten the nation-state, the foundation of global order. As our fragile governments sleepwalk into disaster, we face an existential dilemma: unprecedented harms on one side and the threat of overbearing surveillance on the other…(More)”.

Regulation of Artificial Intelligence Around the World


Report by the Law Library of Congress: “…provides a list of jurisdictions in the world where legislation that specifically refers to artificial intelligence (AI) or to systems utilizing AI has been adopted or proposed. Researchers of the Law Library surveyed all jurisdictions in their research portfolios to find such legislation, and those encountered have been compiled in the annexed list with citations and brief descriptions of the relevant legislation. Only adopted or proposed instruments that have legal effect are reported for national and subnational jurisdictions and the European Union (EU); guidance or policy documents that have no legal effect are not included for these jurisdictions. Major international organizations have also been surveyed, and documents adopted or proposed by these organizations that specifically refer to AI are reported in the list…(More)”.