Governing mediation in the data ecosystem: lessons from media governance for overcoming data asymmetries


Chapter by Stefaan Verhulst in Handbook of Media and Communication Governance edited by Manuel Puppis, Robin Mansell, and Hilde Van den Bulck: “The internet and the accompanying datafication were heralded to usher in a golden era of disintermediation. Instead, the modern data ecology witnessed a process of remediation, or ‘hyper-mediation’, resulting in governance challenges, many of which underlie broader socioeconomic difficulties. Particularly, the rise of data asymmetries and silos creates new forms of scarcity and dominance with deleterious political, economic and cultural consequences. Responding to these challenges requires a new data governance framework, focused on unlocking data and developing a more data pluralistic ecosystem. We argue for regulation and policy focused on promoting data collaboratives, an emerging form of cross-sectoral partnership; and on the establishment of data stewards, individuals/groups tasked with managing and responsibly sharing organizations’ data assets. Some regulatory steps are discussed, along with the various ways in which these two emerging stakeholders can help alleviate data scarcities and their associated problems…(More)”

Civic Monitoring for Environmental Law Enforcement


Book by Anna Berti Suman: “This book presents a thought-provoking inquiry demonstrating how civic environmental monitoring can support law enforcement. It provides an in-depth analysis of applicable legal frameworks and conventions such as the Aarhus Convention, with an enlightening discussion on the civic right to contribute environmental information.

Civic Monitoring for Environmental Law Enforcement discusses multi- and interdisciplinary research into how civil society uses monitoring techniques to gather evidence of environmental issues. The book argues that civic monitoring is a constructive approach for finding evidence of environmental wrongdoings and for leveraging this evidence in different institutional fora, including judicial proceedings and official reporting for environmental protection agencies. It also reveals the challenges and implications associated with a greater reliance on civic monitoring practices by institutions and society at large.

Adopting original methodological approaches to drive inspiration for further research, this book is an invaluable resource for students and scholars of environmental governance and regulation, environmental law, politics and policy, and science and technology studies. It is also beneficial to civil society actors, civic initiatives, legal practitioners, and policymakers working in institutions engaged in the application of environmental law…(More)”

Using AI to Map Urban Change


Brief by Tianyuan Huang, Zejia Wu, Jiajun Wu, Jackelyn Hwang, Ram Rajagopal: “Cities are constantly evolving, and better understanding those changes facilitates better urban planning and infrastructure assessments and leads to more sustainable social and environmental interventions. Researchers currently use data such as satellite imagery to study changing urban environments and what those changes mean for public policy and urban design. But flaws in the current approaches, such as inadequately granular data, limit their scalability and their potential to inform public policy across social, political, economic, and environmental issues.

Street-level images offer an alternative source of insights. These images are frequently updated and high-resolution. They also directly capture what’s happening on a street level in a neighborhood or across a city. Analyzing street-level images has already proven useful to researchers studying socioeconomic attributes and neighborhood gentrification, both of which are essential pieces of information in urban design, sustainability efforts, and public policy decision-making for cities. Yet, much like other data sources, street-level images present challenges: accessibility limits, shadow and lighting issues, and difficulties scaling up analysis.

To address these challenges, our paper “CityPulse: Fine-Grained Assessment of Urban Change with Street View Time Series” introduces a multicity dataset of labeled street-view images and proposes a novel artificial intelligence (AI) model to detect urban changes such as gentrification. We demonstrate the change-detection model’s effectiveness by testing it on images from Seattle, Washington, and show that it can provide important insights into urban changes over time and at scale. Our data-driven approach has the potential to allow researchers and public policy analysts to automate and scale up their analysis of neighborhood and citywide socioeconomic change…(More)”.

This is AI’s brain on AI


Article by Alison Snyder: “Data to train AI models increasingly comes from other AI models in the form of synthetic data, which can fill in chatbots’ knowledge gaps but also destabilize them.

The big picture: As AI models expand in size, their need for data becomes insatiable — but high-quality human-made data is costly, and growing restrictions on the text, images and other kinds of data freely available on the web are driving the technology’s developers toward machine-produced alternatives.

State of play: AI-generated data has been used for years to supplement data in some fields, including medical imaging and computer vision, that use proprietary or private data.

  • But chatbots are trained on public data collected from across the internet that is increasingly being restricted — while at the same time, the web is expected to be flooded with AI-generated content.

Those constraints and the decreasing cost of generating synthetic data are spurring companies to use AI-generated data to help train their models.

  • Meta, Google, Anthropic and others are using synthetic data — alongside human-generated data — to help train the AI models that power their chatbots.
  • Google DeepMind’s new AlphaGeometry 2 system that can solve math Olympiad problems is trained from scratch on synthetic data…(More)”

A.I. May Save Us, or May Construct Viruses to Kill Us


Article by Nicholas Kristof: “Here’s a bargain of the most horrifying kind: For less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.

That’s the conclusion of Jason Matheny, the president of the RAND Corporation, a think tank that studies security matters and other issues.

“It wouldn’t cost more to create a pathogen that’s capable of killing hundreds of millions of people versus a pathogen that’s only capable of killing hundreds of thousands of people,” Matheny told me.

In contrast, he noted, it could cost billions of dollars to produce a new vaccine or antiviral in response…

In the early 2000s, some of us worried about smallpox being reintroduced as a bioweapon if the virus were stolen from the labs in Atlanta and in Russia’s Novosibirsk region that have retained the virus since the disease was eradicated. But with synthetic biology, now it wouldn’t have to be stolen.

Some years ago, a research team created a cousin of the smallpox virus, horse pox, in six months for $100,000, and with A.I. it could be easier and cheaper to refine the virus.

One reason biological weapons haven’t been much used is that they can boomerang. If Russia released a virus in Ukraine, it could spread to Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make bioweapons much more useful. Alternatively, it might be possible to develop a virus that would kill or incapacitate a particular person, such as a troublesome president or ambassador, if one had obtained that person’s DNA at a dinner or reception.

Assessments of ethnic-targeting research by China are classified, but they may be why the U.S. Defense Department has said that the most important long-term threat of biowarfare comes from China.

A.I. has a more hopeful side as well, of course. It holds the promise of improving education, reducing auto accidents, curing cancers and developing miraculous new pharmaceuticals.

One of the best-known benefits is in protein folding, which can lead to revolutionary advances in medical care. Scientists used to spend years or decades figuring out the shapes of individual proteins, and then a Google initiative called AlphaFold was introduced that could predict the shapes within minutes. “It’s Google Maps for biology,” Kent Walker, president of global affairs at Google, told me.

Scientists have since used updated versions of AlphaFold to work on pharmaceuticals including a vaccine against malaria, one of the greatest killers of humans throughout history.

So it’s unclear whether A.I. will save us or kill us first…(More)”.

Supporting Scientific Citizens


Article by Lisa Margonelli: “What do nuclear fusion power plants, artificial intelligence, hydrogen infrastructure, and drinking water recycled from human waste have in common? Aside from being featured in this edition of Issues, they all require intense public engagement to choose among technological tradeoffs, safety profiles, and economic configurations. Reaching these understandings requires researchers, engineers, and decisionmakers who are adept at working with the public. It also requires citizens who want to engage with such questions and can articulate what they want from science and technology.

This issue offers a glimpse into what these future collaborations might look like. To train engineers with the “deep appreciation of the social, cultural, and ethical priorities and implications of the technological solutions engineers are tasked with designing and deploying,” University of Michigan nuclear engineer Aditi Verma and coauthors Katie Snyder and Shanna Daly asked their first-year engineering students to codesign nuclear power plants in collaboration with local community members. Although traditional nuclear engineering classes avoid “getting messy,” Verma and colleagues wanted students to engage honestly with the uncertainties of the profession. In the process of working with communities, the students’ vocabulary changed; they spoke of trust, respect, and “love” for community—even when considering deep geological waste repositories…(More)”.

Is peer review failing its peer review?


Article by First Principles: “Ivan Oransky doesn’t sugar-coat his answer when asked about the state of academic peer review: “Things are pretty bad.”

As a distinguished journalist in residence at New York University and co-founder of Retraction Watch – a site that chronicles the growing number of papers being retracted from academic journals – Oransky is better positioned than just about anyone to make such a blunt assessment. 

He elaborates further, citing a range of factors contributing to the current state of affairs. These include the publish-or-perish mentality, chatbot ghostwriting, predatory journals, plagiarism, an overload of papers, a shortage of reviewers, and weak incentives to attract and retain reviewers.

“Things are pretty bad and they have been bad for some time because the incentives are completely misaligned,” Oransky told FirstPrinciples in a call from his NYU office.

Things are so bad that a new world record was set in 2023: more than 10,000 research papers were retracted from academic journals. In a troubling development, 19 journals closed after being inundated by a barrage of fake research from so-called “paper mills” that churn out the scientific equivalent of clickbait, and one scientist holds the current record of 213 retractions to his name. 

“The numbers don’t lie: Scientific publishing has a problem, and it’s getting worse,” Oransky and Retraction Watch co-founder Adam Marcus wrote in a recent opinion piece for The Washington Post. “Vigilance against fraudulent or defective research has always been necessary, but in recent years the sheer amount of suspect material has threatened to overwhelm publishers.”…(More)”.

Illuminating ‘the ugly side of science’: fresh incentives for reporting negative results


Article by Rachel Brazil: “Editor-in-chief Sarahanne Field describes herself and her team at the Journal of Trial & Error as wanting to highlight the “ugly side of science — the parts of the process that have gone wrong”.

She clarifies that the editorial board of the journal, which launched in 2020, isn’t interested in papers in which “you did a shitty study and you found nothing. We’re interested in stuff that was done methodologically soundly, but still yielded a result that was unexpected.” These types of result — which do not prove a hypothesis or could yield unexplained outcomes — often simply go unpublished, explains Field, who is also an open-science researcher at the University of Groningen in the Netherlands. Along with Stefan Gaillard, one of the journal’s founders, she hopes to change that.

Calls for researchers to publish failed studies are not new. The ‘file-drawer problem’ — the stacks of unpublished, negative results that most researchers accumulate — was first described in 1979 by psychologist Robert Rosenthal. He argued that this leads to publication bias in the scientific record: the gap of missing unsuccessful results leads to overemphasis on the positive results that do get published…(More)”.

Feeding the Machine: The Hidden Human Labor Powering A.I.


Book by Mark Graham, Callum Cant, and James Muldoon: “Silicon Valley has sold us the illusion that artificial intelligence is a frictionless technology that will bring wealth and prosperity to humanity. But hidden beneath this smooth surface lies the grim reality of a precarious global workforce of millions laboring under often appalling conditions to make A.I. possible. This book presents an urgent, riveting investigation of the intricate network that maintains this exploitative system, revealing the untold truth of A.I.

Based on hundreds of interviews and thousands of hours of fieldwork over more than a decade, Feeding the Machine describes the lives of the workers deliberately concealed from view, and the power structures that determine their future. It gives voice to the people whom A.I. exploits, from accomplished writers and artists to the armies of data annotators, content moderators and warehouse workers, revealing how their dangerous, low-paid labor is connected to longer histories of gendered, racialized, and colonial exploitation.

A.I. is an extraction machine that feeds off humanity’s collective effort and intelligence, churning through ever-larger datasets to power its algorithms. This book is a call to arms that details what we need to do to fight for a more just digital future…(More)”.

Rethinking Dual-Use Technology


Article by Artur Kluz and Stefaan Verhulst: “A new concept of “triple use” — where technology serves commercial, defense, and peacebuilding purposes — may offer a breakthrough solution for founders, investors and society to explore….

As a result of the resurgence of geopolitical tensions, the debate about the applications of dual-use technology is intensifying. The core issue founders, tech entrepreneurs, venture capitalists (VCs), and limited partner investors (LPs) are examining is whether commercial technologies should increasingly be re-used for military purposes. Traditionally, the majority of investors (including limited partners) have prohibited dual-use tech in their agreements. However, the rapidly growing dual-use market, with its substantial addressable size and growth potential, is compelling all stakeholders to reconsider this stance. The pressure for innovation, capital returns and return on investment (ROI) is driving the need for a solution.

These discussions are fraught with moral complexity, but they also present an opportunity to rethink the dual-use paradigm and foster investment in technologies aimed at supporting peace. A new concept of “triple use” — where technology serves commercial, defense, and peacebuilding purposes — may offer an innovative and more positive avenue for founders, investors and society to explore. This additional re-use, which remains in an incipient state, is increasingly being referred to as PeaceTech. By integrating terms dedicated to PeaceTech in new and existing investment and LP agreements, tech companies, founders and venture capital investors can also be required to apply their technology for peacebuilding purposes. This approach can expand the applications of emerging technologies to include conflict prevention, reconstruction, and humanitarian uses.

However, current efforts to use technologies for peacebuilding are impeded by various obstacles, including a lack of awareness within the tech sector and among investors, limited commercial interest, disparities in technical capacity, privacy concerns, international relations and political complexities. Below, we examine some of these challenges, while also exploring avenues for overcoming them — including approaching technologies for peace as a “triple use” application. We especially try to identify examples of how tech companies, tech entrepreneurs, accelerators, and tech investors, including VCs and LPs, can commercially benefit from and support “triple use” technologies. Ultimately, we argue, the vast and largely untapped potential of “triple use” technologies calls for a new wave of tech ecosystem transformation and public and private investment, as well as the development of a new field of research…(More)”.