Power and Governance in the Age of AI


Reflections by several experts: “The best way to think about ChatGPT is as the functional equivalent of expensive private education and tutoring. Yes, there is a free version, but there is also a paid subscription that gets you access to the latest breakthroughs and a more powerful version of the model. More money gets you more power and privileged access. As a result, in my courses at Middlebury College this spring, I was obliged to include the following statement in my syllabus:

“Policy on the use of ChatGPT: You may all use the free version however you like and are encouraged to do so. For purposes of equity, use of the subscription version is forbidden and will be considered a violation of the Honor Code. Your professor has both versions and knows the difference. To ensure you are learning as much as possible from the course readings, careful citation will be mandatory in both your informal and formal writing.”

The United States fails to live up to its founding values when it supports a luxury brand-driven approach to educating its future leaders that is accessible to the privileged and a few select lottery winners. One such “winning ticket” student in my class this spring argued that the quality-education-for-all issue was of such importance for the future of freedom that he would trade his individual good fortune at winning an education at Middlebury College for the elimination of ALL elite education in the United States so that quality education could be a right rather than a privilege.

A democracy cannot function if the entire game seems to be rigged and bought by elites. This is true for the United States and for democracies in the making or under challenge around the world. Consequently, in partnership with other liberal democracies, the U.S. government must do whatever it can to render both public and private governance more transparent and accountable. We should not expect authoritarian states to help us uphold liberal democratic values, nor should we expect corporations to do so voluntarily…(More)”.

Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging


Article by Peter Swire and Samm Sacks: “A new executive order issued today contains multiple provisions, most notably limiting bulk sales of personal data to “countries of concern.” The order has admirable national security goals, but it may well prove ineffective and could even be counterproductive. There are serious questions about both the substance and the messaging of the order.

The new order combines two attractive targets for policy action. First, in this era of bipartisan concern about China, the new order would regulate transactions specifically with “countries of concern,” notably China, but also others such as Iran and North Korea. A key rationale for the order is to prevent China from amassing sensitive information about Americans, for use in tracking and potentially manipulating military personnel, government officials, or anyone else of interest to the Chinese regime. 

Second, the order targets bulk sales by data brokers, to countries of concern, of sensitive personal information such as genomic, biometric, and precise geolocation data. The large and growing data broker industry has come under well-deserved bipartisan scrutiny for privacy risks. Congress has held hearings and considered bills to regulate such brokers. California has created a data broker registry and last fall passed the Delete Act to enable individuals to require deletion of their personal data. In January, the Federal Trade Commission issued an order prohibiting data broker Outlogic from sharing or selling sensitive geolocation data, finding that the company had acted without customer consent, in an unfair and deceptive manner. In light of these bipartisan concerns, a new order targeting both China and data brokers has a nearly irresistible political logic.

Accurate assessment of the new order, however, requires an understanding of this order as part of a much bigger departure from the traditional U.S. support for free and open flows of data across borders. Recently, in part for national security reasons, the U.S. has withdrawn its traditional support in the World Trade Organization (WTO) for free and open data flows, and the Department of Commerce has announced a proposed rule, in the name of national security, that would regulate U.S.-based cloud providers when selling to foreign countries, including for purposes of training artificial intelligence (AI) models. We are concerned that these initiatives may not sufficiently account for the national security advantages of the long-standing U.S. position and may have negative effects on the U.S. economy.

Despite the attractiveness of the regulatory targets—data brokers and countries of concern—U.S. policymakers should be cautious as they implement this order and the other current policy changes. As discussed below, there are some possible privacy advances as data brokers have to become more careful in their sales of data, but a better path would be to ensure broader privacy and cybersecurity safeguards to better protect data and critical infrastructure systems from sophisticated cyberattacks from China and elsewhere…(More)”.

Once upon a bureaucrat: Exploring the role of stories in government


Article by Thea Snow: “When you think of a profession associated with stories, what comes to mind? Journalist, perhaps? Or author? Maybe, at a stretch, you might think about a filmmaker. But I would hazard a guess that “public servant” is unlikely to be among the first professions that come to mind. However, recent research suggests that we should be thinking more deeply about the connections between stories and government.

Since 2021, the Centre for Public Impact, in partnership with Dusseldorp Forum and Hands Up Mallee, has been exploring the role of storytelling in the context of place-based systems change work. Our first report, Storytelling for Systems Change: Insights from the Field, focused on the way communities use stories to support place-based change. Our second report, Storytelling for Systems Change: Listening to Understand, focused more on how stories are perceived and used by those in government who are funding and supporting community-led systems change initiatives.

To shape these reports, we have spent the past few years speaking to community members, collective impact backbone teams, storytelling experts, academics, public servants, data analysts, and more. Here’s some of what we’ve heard…(More)”.

Mark the good stuff: Content provenance and the fight against disinformation


BBC Blog: “BBC News’s Verify team is a dedicated group of 60 journalists who fact-check, verify video, counter disinformation, analyse data and – crucially – explain complex stories in the pursuit of truth. On Monday, March 4th, Verify published its first article using a new open media provenance technology called C2PA. The C2PA standard records digitally signed information about the provenance of imagery, video and audio – information (or signals) that shows where a piece of media has come from and how it has been edited. Like an audit trail or a history, these signals are called ‘content credentials’.

Content credentials can be used to help audiences distinguish between authentic, trustworthy media and content that has been faked. The digital signature attached to the provenance information ensures that when the media is “validated”, the person or computer reading the image can be sure that it came from the BBC (or any other source with its own X.509 certificate).
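The mechanics of that validation step can be illustrated with a toy sketch. To be clear, this is not the C2PA wire format: real content credentials use X.509 certificates and standardised binary containers, whereas the snippet below uses a plain HMAC and a made-up manifest purely to show how a signed provenance record makes later tampering detectable.

```python
import hashlib
import hmac
import json

# Toy illustration of a signed provenance manifest (NOT real C2PA):
# the publisher signs a record of the media's origin and edit history,
# so any later change to that record invalidates the signature.

SECRET_KEY = b"publisher-signing-key"  # stand-in for a real private key

def sign_manifest(manifest: dict) -> str:
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_manifest(manifest), signature)

media_bytes = b"...jpeg bytes..."
manifest = {
    "source": "BBC News",
    "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    "edits": ["cropped", "colour-corrected"],
}
signature = sign_manifest(manifest)

assert verify_manifest(manifest, signature)      # untouched record validates
manifest["edits"].append("face swapped")         # tampering with the trail...
assert not verify_manifest(manifest, signature)  # ...breaks the signature
```

In the real standard, the asymmetric certificate chain additionally lets anyone verify the signature without holding the publisher's secret, which is what makes the credentials useful on third-party platforms.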

This is important for two reasons. First, it gives publishers like the BBC the ability to share transparently with our audiences what we do every day to deliver great journalism. It also allows us to mark content that is shared across third party platforms (like Facebook) so audiences can trust that when they see a piece of BBC content it does in fact come from the BBC.

For the past three years, BBC R&D has been an active partner in the development of the C2PA standard. It has been developed in collaboration with major media and technology partners, including Microsoft, the New York Times and Adobe. Membership in C2PA is growing to include organisations from all over the world, from established hardware manufacturers like Canon to technology leaders like OpenAI, fellow media organisations like NHK, and even the Publicis Group from the advertising industry. Google has now joined the C2PA steering committee and social media companies are leaning in too: Meta has recently announced they are actively assessing implementing C2PA across their platforms…(More)”.

The AI data scraping challenge: How can we proceed responsibly?


Article by Lee Tiedrich: “Society faces an urgent and complex artificial intelligence (AI) data scraping challenge. Left unsolved, it could threaten responsible AI innovation. Data scraping refers to using web crawlers or other means to obtain data from third-party websites or social media properties. Today’s large language models (LLMs) depend on vast amounts of scraped data for training and potentially other purposes. Scraped data can include facts, creative content, computer code, personal information, brands, and just about anything else. At least some LLM operators directly scrape data from third-party sites. Common Crawl, LAION, and other sites make scraped data readily accessible. Meanwhile, Bright Data and others offer scraped data for a fee.
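One small, long-established piece of the responsible-scraping toolkit is honouring a site's robots.txt rules before crawling. The sketch below uses Python's standard library; the rules and URLs are illustrative (supplied inline so the example runs offline), whereas a real crawler would fetch the site's own robots.txt.

```python
from urllib.robotparser import RobotFileParser

# Minimal sketch of one responsible-scraping safeguard: checking a site's
# robots.txt before fetching a page. Rules are inlined here for illustration;
# in practice you would call parser.set_url(".../robots.txt") and parser.read().
rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The wildcard entry allows ordinary pages but blocks the /private/ tree.
print(parser.can_fetch("research-bot", "https://example.com/articles/1"))  # True
print(parser.can_fetch("research-bot", "https://example.com/private/x"))   # False
```

robots.txt is a voluntary convention, not a legal or technical barrier, which is part of why the governance questions the article raises remain open.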

In addition to fueling commercial LLMs, scraped data can provide researchers with much-needed data to advance social good.  For instance, Environmental Journal explains how scraped data enhances sustainability analysis.  Nature reports that scraped data improves research about opioid-related deaths.  Training data in different languages can help make AI more accessible for users in Africa and other underserved regions.  Access to training data can even advance the OECD AI Principles by improving safety and reducing bias and other harms, particularly when such data is suitable for the AI system’s intended purpose…(More)”.

Societal challenges and big qualitative data require a new era of methodological pragmatism


Blog by Alex Gillespie, Vlad Glăveanu, and Constance de Saint-Laurent: “The ‘classic’ methods we use today in psychology and the social sciences might seem relatively fixed, but they are the product of collective responses to concerns within a historical context. The 20th century methods of questionnaires and interviews made sense in a world where researchers did not have access to what people did or said, and even if they did, could not analyse it at scale. Questionnaires and interviews were suited to 20th century concerns (shaped by colonialism, capitalism, and the ideological battles of the Cold War) for understanding, classifying, and mapping opinions and beliefs.

However, what social scientists are faced with today is different due to the culmination of two historical trends. The first has to do with the nature of the problems we face. Inequalities, the climate emergency and current wars are compounded by a general rise in nationalism, populism, and especially post-truth discourses and ideologies. Nationalism and populism are not new, but the scale and sophistication of misinformation threatens to undermine collective responses to collective problems.

The second trend refers to technology and its accelerated development, especially the unprecedented accumulation of naturally occurring data (digital footprints) combined with increasingly powerful methods for data analysis (traditional and generative AI). It is often said that we live in the age of ‘big data’, but what is less often said is that this is in fact the age of ‘big qualitative data’. The biggest datasets are unstructured qualitative data (each minute adds 2.5 million Google text searches, 500 thousand photos on Snapchat, 500 hours of YouTube videos) and the most significant AI advances leverage this qualitative data and make it tractable for social research.

These two trends have been fuelling the rise in mixed methods research…(More)”. (See also their new book ‘Pragmatism and Methodology’, open access.)

Evaluating LLMs Through a Federated, Scenario-Writing Approach


Article by Bogdana “Bobi” Rakova: “What do screenwriters, AI builders, researchers, and survivors of gender-based violence have in common? I’d argue they all imagine new, safe, compassionate, and empowering approaches to building understanding.

In partnership with Kwanele South Africa, I lead an interdisciplinary team, exploring this commonality in the context of evaluating large language models (LLMs) — more specifically, chatbots that provide legal and social assistance in a critical context. The outcomes of our engagement are a series of evaluation objectives and scenarios that contribute to an evaluation protocol with the core tenet that when we design for the most vulnerable, we create better futures for everyone. In what follows I describe our process. I hope this methodological approach and our early findings will inspire other evaluation efforts to meaningfully center the margins in building more positive futures that work for everyone…(More)”

Why Do Universities Ignore Good Ideas?


Article by Jeffrey Funk: “Here is a recent assessment of 2023 Nobel Prize winner Katalin Karikó:

“Eight current and former colleagues of Karikó told The Daily Pennsylvanian that — over the course of three decades — the university repeatedly shunned Karikó and her research, despite its groundbreaking potential.”

Another article claims that this occurred because she could not get the financial support to continue her research.

Why couldn’t she get financial support? “You’re more likely to get grants if you’re a tenured faculty member, but you’re more likely to get promoted to tenure if you get grants,” said Eric Feigl-Ding, an epidemiologist at the New England Complex Systems Institute and a former faculty member and researcher at Harvard Medical School. “There is a vicious cycle,” he says.

Interesting. So, the idea doesn’t matter. What matters to funding agencies is that you have previously obtained funding or are a tenured professor. Really? Are funding agencies this narrow-minded?

Mr. Feigl-Ding also said, “Universities also tend to look at how much a researcher publishes, or how widely covered by the media their work is, as opposed to how innovative the research is.” But why couldn’t Karikó get published?

Science magazine tells the story of her main paper with Drew Weissman in 2005. After being rejected by Nature within 24 hours: “It was similarly rejected by Science and by Cell, and the word incremental kept cropping up in the editorial staff comments.”

Incremental? More than two million papers are published each year, and this research, for which Karikó and Weissman won a Nobel Prize, was deemed incremental? If it had been rejected over its methods, or because the results seemed impossible to believe, most people could understand the decision. But incremental?

Obviously, most of the two million papers published each year really are incremental. Yet one of the few papers that we can all agree was not incremental was rejected for being exactly that.

Furthermore, this is happening in a system of science in which even Nature admits that “disruptive science has declined,” few science-based technologies are being successfully commercialized, and Nature concedes that it doesn’t understand why…(More)”.

Public sector capacity matters, but what is it?


Blog by Rainer Kattel, Mariana Mazzucato, Rosie Collington, Fernando Fernandez-Monge, Iacopo Gronchi, Ruth Puttick: “As governments turn increasingly to public sector innovations, challenges, missions and transformative policy initiatives, the need to understand and develop public sector capacities is ever more important. In IIPP’s project with Bloomberg Philanthropies to develop a Public Sector Capabilities Index, we propose to define public sector capacities through three inter-connected layers: state capacities, organisational capabilities, and dynamic capabilities of public organisations.

The idea that governments should be able to design and deliver effective policies has existed for as long as we have had governments. A quick search in Google’s Ngram viewer shows that the use of the phrase “state capacity” in published books has grown exponentially since the late 1980s. It is, however, not a coincidence that focus on state and public sector capacities more broadly emerges in the shadow of new public management and neoliberal governance and policy reforms. Rather than understanding governance as a collaborative effort between all sectors, these reforms gave normative preference to business practices. Increasing focus on public sector capacity as a concept should thus be understood as an attempt to rebalance our understanding of how change happens in societies — through cross-sectoral co-creation — and as an effort to build the muscles in public organisations to work together to tackle socio-economic challenges.

We propose to define public sector capacities through three inter-connected layers: state capacities, organisational capabilities, and dynamic capabilities of public organisations…(More)”.

Civic Trust: What’s In A Concept?


Article by Stefaan Verhulst, Andrew J. Zahuranec, Oscar Romero and Kim Ochilo: “We will only be able to improve civic trust once we know how to measure it…

[Image: a visualization of the ways to measure civic trust]

Recently, there’s been a noticeable decline in trust toward institutions across different sectors of society. This is a serious issue, as evidenced by surveys including the Edelman Trust Barometer, Gallup, and Pew Research.

Diminishing trust presents substantial obstacles. It threatens to weaken the foundation of a pluralistic democracy, adversely affects public health, and hinders the collaboration needed to tackle worldwide challenges such as climate change. Trust forms the cornerstone of democratic social contracts and is crucial for maintaining the civic agreements essential for the prosperity and cohesion of communities, cities, and countries alike.

Yet to increase civic trust, we need to know what we mean by it and how to measure it, which turns out to be a challenging exercise. Toward that end, The GovLab at New York University and the New York Civic Engagement Commission joined forces to catalog and identify methodologies to quantify and understand the nuances of civic trust.

“Building trust across New York is essential if we want to deepen civic engagement,” said Sarah Sayeed, Chair and Executive Director of the Civic Engagement Commission. “Trust is the cornerstone of a healthy community and robust democracy.”

This blog delves into various strategies for developing metrics to measure civic trust, informed by our own desk research, which categorizes civic trust metrics into descriptive, diagnostic, and evaluative measures…(More)”.