Artificial Intelligence: A Threat to Climate Change, Energy Usage and Disinformation


Press Release: “Today, partners in the Climate Action Against Disinformation coalition released a report that maps the risks that artificial intelligence poses to the climate crisis.

Topline points:

  • AI systems require an enormous amount of energy and water, and consumption is expanding quickly. Estimates suggest consumption will double within 5-10 years.
  • Generative AI has the potential to turbocharge climate disinformation, including climate change-related deepfakes, ahead of a historic election year in which climate policy will be central to the debate.
  • The current AI policy landscape reveals a concerning lack of regulation at the federal level, with only minor progress at the state level, leaving the field to rely on voluntary, opaque and unenforceable pledges to pause development or to ensure the safety of its products…(More)”.
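
A quick back-of-the-envelope on that doubling claim: the implied annual growth rate follows from the standard doubling-time relation. This is an illustrative calculation, not a figure from the report.

```python
# Implied compound annual growth rate if consumption doubles in n years:
#   rate = 2 ** (1 / n) - 1
for years in (5, 10):
    rate = 2 ** (1 / years) - 1
    print(f"doubling in {years:>2} years -> ~{rate:.1%} annual growth")
# doubling in  5 years -> ~14.9% annual growth
# doubling in 10 years -> ~7.2% annual growth
```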

The Dark World of Citation Cartels


Article by Domingo Docampo: “In the complex landscape of modern academe, the maxim “publish or perish” has been gradually evolving into a different mantra: “Get cited or your career gets blighted.” Citations are the new academic currency, and careers now firmly depend on this form of scholarly recognition. In fact, citation has become so important that it has driven a novel form of trickery: stealth networks designed to manipulate citations. Researchers, driven by the imperative to secure academic impact, resort to forming citation rings: collaborative circles engineered to artificially boost the visibility of their work. In doing so, they compromise the integrity of academic discourse and undermine the foundation of scholarly pursuit. The story of the modern “citation cartel” is not just a result of publication pressure. The rise of the mega-journal also plays a role, as do predatory journals and institutional efforts to thrive in global academic rankings.

Over the past decade, the landscape of academic research has been significantly altered by the sheer number of scholars engaging in scientific endeavors. The number of scholars contributing to indexed publications in mathematics has doubled, for instance. In response to the heightened demand for space in scientific publications, a new breed of publishing entrepreneur has seized the opportunity, and the result is the rise of mega-journals that publish thousands of articles annually. Mathematics, an open-access journal produced by the Multidisciplinary Digital Publishing Institute, published 4,763 articles in 2023, making up 9.3 percent of all publications in the field, according to the Web of Science. It has an impact factor of 2.4 and an article-influence measure of just 0.37, but, crucially, it is indexed with Clarivate’s Web of Science, Elsevier’s Scopus, and other indexers, which means its citations count toward a variety of professional metrics. (By contrast, the Annals of Mathematics, published by Princeton University, contained just 22 articles last year, and has an impact factor of 4.9 and an article-influence measure of 8.3.)…(More)”.
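
For readers keeping score of the metrics cited above, the two-year impact factor is a simple ratio of recent citations to recent citable items. A minimal sketch follows, with invented citation counts chosen so the result matches the 2.4 reported for Mathematics; they are not actual Web of Science figures.

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical inputs for illustration, not actual Web of Science counts:
print(impact_factor(21_600, 9_000))  # -> 2.4
```

The article-influence measure, roughly speaking, weights citations by the influence of the citing journal, which is why the two metrics can diverge so sharply for a mega-journal.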

The Judicial Data Collaborative


About: “We enable collaborations between researchers, technical experts, practitioners and organisations to create a shared vocabulary, standards and protocols for open judicial data sets, as well as shared infrastructure and resources to host and explain available judicial data.

The objective is to drive and sustain advocacy on the quality and limitations of Indian judicial data and engage the judicial data community to enable cross-learning among various projects…

Accessibility and understanding of judicial data are essential to making courts and tribunals more transparent, accountable and easy to navigate for litigants. In recent years, eCourts services and various court and tribunal websites have made a large volume of data about cases available. This has expanded the window into judicial functioning and enabled more empirical research on the role of courts in the protection of citizens’ rights. Such research can also help busy courts understand patterns of litigation and practice, and can foster engagement with stakeholders across disciplines to improve the functioning of courts.

Some pioneering initiatives in the judicial data landscape include research such as DAKSH’s database, the annual India Justice Reports, and studies of court functioning during the pandemic and of the quality of eCourts data; open datasets, including Development Data Lab’s Judicial Data Portal of District and Taluka court cases (2010–2018), and platforms that collect them, such as Justice Hub; and interactive databases such as the Vidhi JALDI Constitution Bench Pendency Project…(More)”.
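
To make the idea of “standards and protocols for open judicial data sets” concrete, here is a hypothetical minimal case record. The field names and values are illustrative assumptions, not the collaborative’s actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CaseRecord:
    """Hypothetical minimal record for an open judicial data set."""
    cnr_number: str                # unique case identifier (as used by eCourts)
    court: str                     # e.g., a district or taluka court
    case_type: str                 # e.g., "civil suit", "criminal appeal"
    filing_date: date
    decision_date: Optional[date]  # None while the case is pending
    status: str                    # e.g., "pending", "disposed"

record = CaseRecord(
    cnr_number="DLDL010012342024",   # invented identifier
    court="District Court (illustrative)",
    case_type="civil suit",
    filing_date=date(2024, 1, 15),
    decision_date=None,
    status="pending",
)
print(record.status)
```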

Once upon a bureaucrat: Exploring the role of stories in government


Article by Thea Snow: “When you think of a profession associated with stories, what comes to mind? Journalist, perhaps? Or author? Maybe, at a stretch, you might think of a filmmaker. But I would hazard a guess that “public servant” is unlikely to be one of the first professions that comes to mind. However, recent research suggests that we should be thinking more deeply about the connections between stories and government.

Since 2021, the Centre for Public Impact, in partnership with Dusseldorp Forum and Hands Up Mallee, has been exploring the role of storytelling in the context of place-based systems change work. Our first report, Storytelling for Systems Change: Insights from the Field, focused on the way communities use stories to support place-based change. Our second report, Storytelling for Systems Change: Listening to Understand, focused more on how stories are perceived and used by those in government who are funding and supporting community-led systems change initiatives.

To shape these reports, we have spent the past few years speaking to community members, collective impact backbone teams, storytelling experts, academics, public servants, data analysts, and more. Here’s some of what we’ve heard…(More)”.

Understanding and Measuring Hype Around Emergent Technologies


Article by Swaptik Chowdhury and Timothy Marler: “Inaccurate or excessive hype surrounding emerging technologies can have several negative effects, including poor decisionmaking by both private companies and the U.S. government. The United States needs a comprehensive approach to understanding and assessing public discourse–driven hype surrounding emerging technologies, but current methods for measuring technology hype are insufficient for developing policies to manage it. The authors of this paper describe an approach to analyzing technology hype…(More)”.
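
The paper proposes its own measurement approach; purely as a naive illustration of what a discourse-based hype signal could look like (my assumption, not the authors’ method), one might track how quickly media mentions of a technology grow:

```python
def naive_hype_index(mentions_by_month: dict[str, int]) -> float:
    """Naive hype proxy: average month-over-month growth in media
    mentions of a technology. An illustrative stand-in, not the
    measurement approach proposed by the paper's authors."""
    months = sorted(mentions_by_month)
    counts = [mentions_by_month[m] for m in months]
    growths = [(b - a) / a for a, b in zip(counts, counts[1:]) if a > 0]
    return sum(growths) / len(growths) if growths else 0.0

# Hypothetical monthly mention counts for an emerging technology:
mentions = {"2023-01": 120, "2023-02": 310, "2023-03": 705}
print(f"{naive_hype_index(mentions):.0%} average monthly growth")
```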

Evidence for policy-makers: A matter of timing and certainty?


Article by Wouter Lammers et al: “This article investigates how the certainty and timing of evidence introduction affect the uptake of evidence by policy-makers in collective deliberations. Little is known about how experts or researchers should time the introduction of uncertain evidence for policy-makers. With a computational model based on the Hegselmann–Krause opinion dynamics model, we simulate how policy-makers update their opinions in light of new evidence. We illustrate the use of our model with two examples in which timing and certainty matter for policy-making: intelligence analysts scouting potential terrorist activity and food safety inspections of chicken meat. Our computations indicate that evidence should come early to convince policy-makers, regardless of how certain it is. Even if the evidence is quite certain, it will not convince all policy-makers. Beyond its substantive contribution, the article also showcases the methodological innovation that agent-based models can bring to a better understanding of the science–policy nexus. The model can be endlessly adapted to generate hypotheses and simulate interactions that cannot be empirically tested…(More)”.
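
For readers unfamiliar with the underlying model, below is a minimal sketch of a Hegselmann–Krause simulation extended with an evidence term. How timing (`t_intro`) and certainty enter the update is a simplified assumption of mine, not the authors’ exact specification.

```python
import numpy as np

def hk_with_evidence(n_agents=50, eps=0.2, steps=60,
                     evidence=0.9, t_intro=10, certainty=0.5, seed=0):
    """Minimal Hegselmann-Krause bounded-confidence sketch with evidence.

    Agents hold opinions in [0, 1]. Each step, every agent moves to the
    mean opinion of all agents within its confidence bound `eps`. From
    step `t_intro` onward, agents whose opinion lies within `eps` of the
    evidence also average it in, weighted by `certainty`."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_agents)
    for t in range(steps):
        # Bounded confidence: each agent averages opinions within eps.
        close = np.abs(x[:, None] - x[None, :]) <= eps
        social = (close * x[None, :]).sum(axis=1) / close.sum(axis=1)
        if t >= t_intro:
            listens = np.abs(x - evidence) <= eps
            x = np.where(listens,
                         (1 - certainty) * social + certainty * evidence,
                         social)
        else:
            x = social
    return x

for t_intro in (5, 40):
    final = hk_with_evidence(t_intro=t_intro)
    share = float((np.abs(final - 0.9) < 0.05).mean())
    print(f"evidence introduced at step {t_intro}: "
          f"{share:.0%} of agents end up near it")
```

Earlier introduction tends to reach agents before opinion clusters freeze, consistent with the authors’ finding that evidence should come early; and because the evidence term is itself subject to bounded confidence, even highly certain evidence never reaches everyone.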

A World Divided Over Artificial Intelligence


Article by Aziz Huq: “…Through multinational communiqués and bilateral talks, an international framework for regulating AI does seem to be coalescing. Take a close look at U.S. President Joe Biden’s October 2023 executive order on AI; the EU’s AI Act, which passed the European Parliament in December 2023 and will likely be finalized later this year; or China’s slate of recent regulations on the topic, and a surprising degree of convergence appears. These regimes broadly share the goal of preventing AI’s misuse without restraining innovation in the process. Optimists have floated proposals for closer international management of AI, such as the ideas presented in Foreign Affairs by the geopolitical analyst Ian Bremmer and the entrepreneur Mustafa Suleyman, and the plan offered by Suleyman and Eric Schmidt, the former CEO of Google, in the Financial Times, in which they called for the creation of an international panel akin to the UN’s Intergovernmental Panel on Climate Change to “inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming.”

But these ambitious plans to forge a new global governance regime for AI may collide with an unfortunate obstacle: cold reality. The great powers, namely China, the United States, and the EU, may insist publicly that they want to cooperate on regulating AI, but their actions point toward a future of fragmentation and competition. Divergent legal regimes are emerging that will frustrate any cooperation on access to semiconductors, the setting of technical standards, and the regulation of data and algorithms. This path leads not to a coherent, contiguous global space for uniform AI-related rules but to a divided landscape of warring regulatory blocs—a world in which the lofty idea that AI can be harnessed for the common good is dashed on the rocks of geopolitical tensions…(More)”.

The Limits of Data


Essay by C. Thi Nguyen: “…Right now, the language of policymaking is data. (I’m talking about “data” here as a concept, not as particular measurements.) Government agencies, corporations, and other policymakers all want to make decisions based on clear data about positive outcomes. They want to succeed on the metrics—to succeed in clear, objective, and publicly comprehensible terms. But metrics and data are incomplete by their basic nature. Every data collection method is constrained and every dataset is filtered.

Some very important things don’t make their way into the data. It’s easier to justify health care decisions in terms of measurable outcomes: increased average longevity or increased numbers of lives saved in emergency room visits, for example. But there are so many important factors that are far harder to measure: happiness, community, tradition, beauty, comfort, and all the oddities that go into “quality of life.”

Consider, for example, a policy proposal that doctors should urge patients to sharply lower their saturated fat intake. This should lead to better health outcomes, at least for those that are easier to measure: heart attack numbers and average longevity. But the focus on easy-to-measure outcomes often diminishes the salience of other downstream consequences: the loss of culinary traditions, disconnection from a culinary heritage, and a reduction in daily culinary joy. It’s easy to dismiss such things as “intangibles.” But actually, what’s more tangible than a good cheese, or a cheerful fondue party with friends?…(More)”.

Automakers Are Sharing Consumers’ Driving Behavior With Insurance Companies


Article by Kashmir Hill: “Kenn Dahl says he has always been a careful driver. The owner of a software company near Seattle, he drives a leased Chevrolet Bolt. He’s never been responsible for an accident.

So Mr. Dahl, 65, was surprised in 2022 when the cost of his car insurance jumped by 21 percent. Quotes from other insurance companies were also high. One insurance agent told him his LexisNexis report was a factor.

LexisNexis is a New York-based global data broker with a “Risk Solutions” division that caters to the auto insurance industry and has traditionally kept tabs on car accidents and tickets. Upon Mr. Dahl’s request, LexisNexis sent him a 258-page “consumer disclosure report,” which it must provide per the Fair Credit Reporting Act.

What it contained stunned him: more than 130 pages detailing each time he or his wife had driven the Bolt over the previous six months. It included the dates of 640 trips, their start and end times, the distance driven and an accounting of any speeding, hard braking or sharp accelerations. The only thing it didn’t have was where they had driven the car.

On a Thursday morning in June, for example, the car had been driven 7.33 miles in 18 minutes; there had been two rapid accelerations and two incidents of hard braking.

According to the report, the trip details had been provided by General Motors — the manufacturer of the Chevy Bolt. LexisNexis analyzed that driving data to create a risk score “for insurers to use as one factor of many to create more personalized insurance coverage,” according to a LexisNexis spokesman, Dean Carney. Eight insurance companies had requested information about Mr. Dahl from LexisNexis over the previous month.

“It felt like a betrayal,” Mr. Dahl said. “They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”…(More)”.
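
The trip records described in the report suggest a simple event-count structure. Below is a hypothetical sketch of how such telemetry could feed a risk score; the field names and the scoring rule are invented for illustration, since LexisNexis does not disclose its actual model.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    """One trip as itemized in the disclosure report: distance,
    duration, and counts of risky driving events."""
    miles: float
    minutes: float
    hard_brakes: int
    rapid_accels: int
    speeding_events: int

def risk_score(trips: list[Trip]) -> float:
    """Hypothetical score: risky events per 100 miles driven. Real
    insurer scoring models are proprietary and far more involved."""
    total_miles = sum(t.miles for t in trips)
    events = sum(t.hard_brakes + t.rapid_accels + t.speeding_events
                 for t in trips)
    return 100.0 * events / total_miles if total_miles else 0.0

# The June trip described in the article: 7.33 miles in 18 minutes,
# two rapid accelerations and two hard-braking incidents.
trips = [Trip(7.33, 18, hard_brakes=2, rapid_accels=2, speeding_events=0)]
print(round(risk_score(trips), 1), "events per 100 miles")
```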

A Plan to Develop Open Science’s Green Shoots into a Thriving Garden


Article by Greg Tananbaum, Chelle Gentemann, Kamran Naim, and Christopher Steven Marcum: “…As it’s moved from an abstract set of principles about access to research and data into the realm of real-world activities, the open science movement has mirrored some of the characteristics of the open source movement: distributed, independent, with loosely coordinated actions happening in different places at different levels. Globally, many things are happening, often disconnected, but still interrelated: open science has sown a constellation of thriving green shoots, not quite yet a garden, but all growing rapidly on arable soil.

It is now time to consider how much faster and farther the open science movement could go with more coordination. What efficiencies might be realized if disparate efforts could better harmonize across geographies, disciplines, and sectors? How would an intentional, systems-level approach to aligning incentives, infrastructure, training, and other key components of a rationally functioning research ecosystem advance the wider goals of the movement? Streamlining research processes, reducing duplication of efforts, and accelerating scientific discoveries could ensure that the fruits of open science processes and products are more accessible and equitably distributed…(More)”.