People Have a Right to Climate Data


Article by Justin S. Mankin: “As a climate scientist documenting the multi-trillion-dollar price tag of the climate disasters shocking economies and destroying lives, I sometimes field requests from strategic consultants, financial investment analysts and reinsurers looking for climate data, analysis and computer code.

Often, they want to chat about my findings or have me draw out the implications for their businesses, like the time a risk analyst from BlackRock, the world’s largest asset manager, asked me to help with research on what the current El Niño, a cyclical climate pattern, means for financial markets.

These requests make sense: People and companies want to adapt to the climate risks they face from global warming. But these inquiries are also part of the wider commodification of climate science. Venture capitalists are injecting hundreds of millions of dollars into climate intelligence as they build out a rapidly growing business of climate analytics — the data, risk models, tailored analyses and insights people and institutions need to understand and respond to climate risks.

I point companies to our freely available data and code at the Dartmouth Climate Modeling and Impacts Group, which I run, but turn down additional requests for customized assessments. I regard climate information as a public good and fear contributing to a world in which information about the unfolding risks of droughts, floods, wildfires, extreme heat and rising seas is hidden behind paywalls. People and companies who can afford private risk assessments will rent, buy and establish homes and businesses in safer places than the billions of others who can’t, compounding disadvantage and leaving the most vulnerable among us exposed.

Despite this, global consultants, climate and agricultural technology start-ups, insurance companies and major financial firms are all racing to meet the ballooning demand for information about climate dangers and how to prepare for them. While a lot of this information is public, it is often voluminous, technical and not particularly useful for people trying to evaluate their personal exposure. Private risk assessments fill that gap — but at a premium. The climate risk analytics market is expected to grow to more than $4 billion globally by 2027.

I don’t mean to suggest that the private sector should not be involved in furnishing climate information. That’s not realistic. But I worry that an overreliance on the private sector to provide climate adaptation information will hollow out publicly provided climate risk science, and that means we all will pay: the well-off with money, the poor with lives…(More)”.

Representative Bodies in the Age of AI


Report by POPVOX: “The report tracks current developments in the U.S. Congress and internationally, while assessing the prospects for future innovations. The report also serves as a primer for those in Congress on AI technologies and methods in an effort to promote responsible use and adoption. POPVOX endorses a considered, step-wise strategy for AI experimentation, underscoring the importance of capacity building, data stewardship, ethical frameworks, and insights gleaned from global precedents of AI in parliamentary functions. This ensures AI solutions are crafted with human discernment and supervision at their core.

Legislatures worldwide are progressively embracing AI tools such as machine learning, natural language processing, and computer vision to refine the precision, efficiency, and, to a small extent, the participatory aspects of their operations. The advent of generative AI platforms, such as ChatGPT, which excel in interpreting and organizing textual data, marks a transformative shift for the legislative process, inherently a task of converting rules into language.
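
To make the text-processing capability concrete, here is a minimal sketch of the kind of task described above: condensing bill text into a plain-language summary with the open-source Hugging Face transformers library. The model choice and the sample bill text are illustrative assumptions on our part; the report does not prescribe specific tools.

```python
# A minimal sketch of NLP-assisted legislative work: summarizing bill text.
# The model and sample text below are hypothetical choices for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

bill_text = (
    "A BILL to amend title 44, United States Code, to require federal "
    "agencies to publish machine-readable summaries of proposed rules, "
    "to establish public comment dashboards, and for other purposes."
)

# Condense the bill into a short plain-language summary.
summary = summarizer(bill_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```

A sketch like this also illustrates why the report stresses human supervision: a generated summary is a drafting aid, not an authoritative statement of what a bill does.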

While nations such as Brazil, India, Italy, and Estonia lead with applications ranging from the transcription and translation of parliamentary proceedings to enhanced bill drafting and sophisticated legislative record searches, the U.S. Congress is prudently venturing into the realm of generative AI. The House and Senate have initiated AI working groups and secured licenses for platforms like ChatGPT. They have also issued guidance on responsible use…(More)”.

Experts in Government


Book by Donald F. Kettl: “From Caligula and the time of ancient Rome to the present, governments have relied on experts to manage public programs. But with that expertise has come power, and that power has long proven difficult to hold accountable. The relationship between experts in the bureaucracy and the policy goals of elected officials remains a point of often bitter tension. President Donald Trump labeled these experts as a ‘deep state’ seeking to resist the policies he believed he was elected to pursue—and he developed a policy scheme to make it far easier to fire experts he deemed insufficiently loyal. The age-old battles between expertise and accountability have come to a sharp point, and resolving these tensions requires a fresh look at the rule of law to shape the role of experts in governance…(More)”.

Facts over fiction: Why we must protect evidence-based knowledge if we value democracy


Article by Ben Feringa and Paul Nurse: “Central to human progress are three interconnected pillars. The first is the pursuit of knowledge, a major component of which is the expansion of the frontiers of learning and understanding – something often achieved through science, driven by the innate curiosity of scientists.

The second pillar of progress is the need for stable democracies where people and ideas can mix freely. It is this free exchange of diverse perspectives that fuels the democratic process, ensuring policies are shaped by a multitude of voices and evidence, leading to informed decision-making that benefits all of society.

Such freedom of speech and expression also serves as the bedrock for scientific inquiry, allowing researchers to challenge prevailing notions without fear, fostering discovery, applications and innovation.

The third pillar is a fact-based worldview. While political parties might disagree on policy, for democracy to work well all of them should support and protect a perspective that is grounded in reliable facts, which are needed to generate reliable policies that can drive human progress….(More)”.

Testing the Assumptions of the Data Revolution


Report by SDSN TReNDS: “Ten years have passed since the release of A World that Counts and the formal adoption of the Sustainable Development Goals (SDGs). This seems an appropriate time for national governments and the global data community to reflect on where progress has been made so far.

This report supports this objective in three ways: it evaluates the assumptions that underpin A World that Counts’ core hypothesis that the data revolution would lead to better outcomes across the 17 SDGs; it summarizes where and how we have made progress; and it identifies knowledge gaps related to each assumption. These knowledge gaps will serve as the foundation for the next phase of the SDSN TReNDS research program, guiding our exploration of emerging data-driven paradigms and their implications for the SDGs. By analyzing these assumptions, we can consider how SDSN TReNDS and other development actors might adapt their activities to a new set of circumstances in the final six years of the SDG commitments.

Given that the 2030 Agenda established a 15-year timeframe for SDG attainment, it is to be expected that some of A World that Counts’ key assumptions would fall short or require recalibration along the way. Unforeseen events such as the COVID-19 pandemic would inevitably shift global attention and priorities away from the targets set out in the SDG framework, at least temporarily…(More)”.

Tackling Today’s Data Dichotomy: Unveiling the Paradox of Abundant Supply and Restricted Access in the Quest for Social Equity


Article by Stefaan Verhulst: “…One of the ironies of this moment, however, is that an era of unprecedented supply is simultaneously an era of constricted access to data. Much of the data we generate is privately “owned,” hidden away in private or public sector silos, or otherwise inaccessible to those who are most likely to benefit from it or generate valuable insights. These restrictions on access are grafted onto existing socioeconomic inequalities, driven by broader patterns of exclusion and marginalization, and also exacerbating them. Critically, restricted or unequal access to data does not only harm individuals: it causes untold public harm by limiting the potential of data to address social ills. It also limits attempts to improve the output of AI in terms of both bias and trustworthiness.

In this paper, we outline two potential approaches that could help address—or at least mitigate—the harms: social licensing and a greater role for data stewards. While not comprehensive solutions, we believe that these represent two of the most promising avenues to introduce greater efficiencies into how data is used (and reused), and thus lead to more targeted, responsive, and responsible policymaking…(page 22-25)”.

What does it mean to trust a technology?


Article by Jack Stilgoe: “A survey published in October 2023 revealed what seemed to be a paradox. Over the past decade, self-driving vehicles have improved immeasurably, but public trust in the technology is low and falling. Only 37% of Americans said they would be comfortable riding in a self-driving vehicle, down from 39% in 2022 and 41% in 2021. Those who have used the technology express more enthusiasm, but the rest have seemingly had their confidence shaken by the failure of the technology to live up to its hype.

Purveyors and regulators of any new technology are likely to worry about public trust. In the short term, they worry that people won’t want to make use of new innovations. But they also worry that a public backlash might jeopardize not just a single company but a whole area of technological innovation. Excitement about artificial intelligence (AI) has been accompanied by a concern about the need to “build trust” in the technology. Trust—letting one’s guard down despite incomplete information—is vital, but innovators must not take it for granted. Nor can it be circumvented through clever engineering. When cryptocurrency enthusiasts call their technology “trustless” because they think it solves age-old problems of banking (an unavoidably imperfect social institution), we should at least view them with skepticism.

For those concerned about public trust and new technologies, social science has some important lessons. The first is that people trust people, not things. When we board an airplane or agree to get vaccinated, we are placing our trust not in these objects but in the institutions that govern them. We trust that professionals are well-trained; we trust that regulators have assessed the risks; we trust that, if something goes wrong, someone will be held accountable, harms will be compensated, and mistakes will be rectified. Societies can no longer rely on the face-to-face interactions that once allowed individuals to do business. So it is more important than ever that faceless institutions are designed and continuously monitored to realize the benefits of new technologies while mitigating the risks….(More)”.

The new star wars over satellites


Article by Peggy Hollinger: “There is a battle brewing in space. In one corner you have the billionaires building giant satellite broadband constellations in low earth orbit (LEO) — Elon Musk with SpaceX’s Starlink and Jeff Bezos with Project Kuiper. 

In the other corner stand the traditional fixed satellite operators such as ViaSat and SES — but also a number of nations increasingly uncomfortable with the way in which the new space economy is evolving. In other words, with the dominance of US mega constellations in a strategic region of space.

The first shots were fired in late November at the World Radiocommunications Conference in Dubai. Every four years, global regulators and industry meet to review international regulations on the use of radio spectrum. 

For those who have only a vague idea of what spectrum is, it is the name for the radio airwaves that carry data wirelessly to enable a vast range of services — from television broadcasting to WiFi, navigation to mobile communications.

Most people are inclined to think that the airwaves have infinite capacity to connect us. But, like water, spectrum is a finite resource and much of it has already been allocated to specific uses. So operators have to transmit signals on shared bands of spectrum — on the promise that their transmissions will not interfere with others. 

Now SpaceX, Kuiper and others operating in LEO are pushing to loosen rules designed to prevent their signals from interfering with those of traditional operators in higher orbits. These rules impose caps on the power used to transmit signals; the caps facilitate spectrum sharing but also constrain the amount of data operators can send. LEO operators say the rules, designed 25 years ago, are outdated. They argue that new technology would allow higher power levels — and greater capacity for customers — without degrading networks of the traditional fixed satellite systems operating in geostationary orbit, at altitudes of 36,000km.
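
To see why power caps bear directly on capacity, consider the Shannon–Hartley theorem, which bounds a channel's error-free data rate by its bandwidth and signal-to-noise ratio. The sketch below is purely illustrative: the bandwidth, noise and power figures are hypothetical assumptions, not values from the ITU rules under debate.

```python
import math

# Shannon-Hartley capacity: C = B * log2(1 + S/N).
# All numbers below are hypothetical, chosen only to show the relationship
# between transmit power and achievable data rate.
def capacity_bps(bandwidth_hz: float, signal_w: float, noise_w: float) -> float:
    """Maximum error-free data rate for a given bandwidth and signal-to-noise ratio."""
    return bandwidth_hz * math.log2(1 + signal_w / noise_w)

bandwidth = 36e6   # a 36 MHz transponder-style channel (hypothetical)
noise = 1e-12      # received noise power in watts (hypothetical)

for power in (1e-10, 4e-10):  # a capped power level vs. four times higher
    rate = capacity_bps(bandwidth, power, noise) / 1e6
    print(f"P={power:.0e} W -> {rate:.0f} Mbit/s")  # ~240 vs ~311 Mbit/s
```

Note the logarithmic relationship: quadrupling transmit power raises capacity, but by far less than fourfold, which is why the argument turns on whether the extra interference is worth the extra throughput.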

It is perhaps not a surprise that a proposal to make LEO constellations more competitive drew protests from geo operators. Some, such as US-based Hughes Network Systems, have admitted they are already losing customers to Starlink.

What was surprising, however, was the strong opposition from countries such as Brazil, Indonesia, Japan and others…(More)”.

Where Did the Open Access Movement Go Wrong?


An Interview with Richard Poynder by Richard Anderson: “…Open access was intended to solve three problems that have long blighted scholarly communication – the problems of accessibility, affordability, and equity. More than 20 years after the Budapest Open Access Initiative (BOAI), we can see that the movement has signally failed to solve the latter two problems. And with the geopolitical situation deteriorating, solving the accessibility problem now also looks to be at risk. The OA dream of “universal open access” remains a dream and seems likely to remain one.

What has been the essence of the OA movement’s failure?

The fundamental problem was that OA advocates did not take ownership of their own movement. They failed, for instance, to establish a central organization (an OA foundation, if you like) to organize and better manage the movement; and they failed to publish a single, canonical definition of open access. This is in contrast to the open source movement, and is an omission I drew attention to in 2006.

This failure to take ownership saw responsibility for OA pass to organizations whose interests are not necessarily in sync with the objectives of the movement.

It did not help that the BOAI definition failed to specify that to be classified as open access, scholarly works needed to be made freely available immediately on publication and that they should remain freely available in perpetuity. Nor did it give sufficient thought to how OA would be funded (and OA advocates still fail to do that).

This allowed publishers to co-opt OA for their own purposes, most notably by introducing embargoes and developing the pay-to-publish gold OA model, with its now infamous article processing charge (APC).

Pay-to-publish OA is now the dominant form of open access and looks set to increase the cost of scholarly publishing and so worsen the affordability problem. Amongst other things, this has disenfranchised unfunded researchers and those based in the global south (notwithstanding APC waiver promises).

What also did not help is that OA advocates passed responsibility for open access over to universities and funders. This was contradictory, because OA was conceived as something that researchers would opt into. The assumption was that once the benefits of open access were explained to them, researchers would voluntarily embrace it – primarily by self-archiving their research in institutional or preprint repositories. But while many researchers were willing to sign petitions in support of open access, few (outside disciplines like physics) proved willing to practice it voluntarily.

In response to this lack of engagement, OA advocates began to petition universities, funders, and governments to introduce OA policies recommending that researchers make their papers open access. When these policies also failed to have the desired effect, OA advocates demanded their colleagues be forced to make their work OA by means of mandates requiring them to do so.

Most universities and funders (certainly in the global north) responded positively to these calls, in the belief that open access would increase the pace of scientific development and allow them to present themselves as forward-thinking, future-embracing organizations. Essentially, they saw it as a way of improving productivity and ROI while enhancing their public image.

But in light of researchers’ continued reluctance to make their works open access, universities and funders began to introduce increasingly bureaucratic rules, sanctions, and reporting tools to ensure compliance, and to manage the more complex billing arrangements that OA has introduced.

So, what had been conceived as a bottom-up movement founded on principles of voluntarism morphed into a top-down system of command and control, and open access evolved into an oppressive bureaucratic process that has failed to address either the affordability or equity problems. And as the process, and the rules around that process, have become ever more complex and oppressive, researchers have tended to become alienated from open access.

As a side benefit for universities and funders, OA has allowed them to better micromanage their faculty and fundees, and to monitor their publishing activities in ways not previously possible. This has served to further proletarianize researchers, and today they are becoming the academic equivalent of workers on an assembly line. Philip Mirowski has predicted that open access will lead to the deskilling of academic labor. The arrival of generative AI might seem to make that outcome all the more likely…

Can these failures be remedied by means of an OA reset? With this aim in mind (and aware of the failures of the movement), OA advocates are now devoting much of their energy to trying to persuade universities, funders, and philanthropists to invest in a network of alternative nonprofit open infrastructures. They envisage these being publicly owned and focused on facilitating a flowering of new diamond OA journals, preprint servers, and Publish, Review, Curate (PRC) initiatives. In the process, they expect commercial publishers will be marginalized and eventually dislodged.

But it is highly unlikely that the large sums of money that would be needed to create these alternative infrastructures will be forthcoming, certainly not at sufficient levels or on anything other than a temporary basis.

While it is true that more papers and preprints are being published open access each year, I am not convinced this is taking us down the road to universal open access, or that there is a global commitment to open access.

Consequently, I do not believe that a meaningful reset is possible: open access has reached an impasse and there is no obvious way forward that could see the objectives of the OA movement fulfilled.

Partly for this reason, we are seeing attempts to rebrand, reinterpret, and/or reimagine open access and its objectives…(More)”.

Rebalancing AI


Article by Daron Acemoglu and Simon Johnson: “Optimistic forecasts regarding the growth implications of AI abound. AI adoption could boost productivity growth by 1.5 percentage points per year over a 10-year period and raise global GDP by 7 percent ($7 trillion in additional output), according to Goldman Sachs. Industry insiders offer even more excited estimates, including a supposed 10 percent chance of an “explosive growth” scenario, with global output rising more than 30 percent a year.
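
As a quick back-of-the-envelope check on what these headline rates compound to, here is a small sketch; the inputs simply restate the figures cited above, and the arithmetic is illustrative rather than part of either forecast.

```python
# Compounding the growth figures cited above (inputs restate the article's numbers).
productivity_boost = 0.015   # +1.5 percentage points per year (Goldman Sachs)
years = 10
level_gain = (1 + productivity_boost) ** years - 1
print(f"Cumulative productivity level gain: {level_gain:.1%}")  # ~16.1%

explosive_rate = 0.30        # "explosive growth" scenario: >30% output growth per year
explosive_multiple = (1 + explosive_rate) ** years
print(f"Output multiple after {years} years at 30%/yr: {explosive_multiple:.1f}x")  # ~13.8x
```

The gulf between a roughly 16 percent cumulative gain and a nearly 14-fold multiple gives a sense of just how far apart these forecasts are.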

All this techno-optimism draws on the “productivity bandwagon”: a deep-rooted belief that technological change—including automation—drives higher productivity, which raises net wages and generates shared prosperity.

Such optimism is at odds with the historical record and seems particularly inappropriate for the current path of “just let AI happen,” which focuses primarily on automation (replacing people). We must recognize that there is no singular, inevitable path of development for new technology. And, assuming that the goal is to sustainably improve economic outcomes for more people, what policies would put AI development on the right path, with greater focus on enhancing what all workers can do?…(More)”