After USTR’s Move, Global Governance of Digital Trade Is Fraught with Unknowns


Article by Patrick Leblond: “On October 25, the United States announced at the World Trade Organization (WTO) that it was dropping its support for provisions meant to promote the free flow of data across borders. Also abandoned, as part of the ongoing WTO negotiations on international e-commerce (the so-called Joint Statement Initiative process), were efforts to protect source code in applications and algorithms.

According to the Office of the US Trade Representative (USTR): “In order to provide enough policy space for those debates to unfold, the United States has removed its support for proposals that might prejudice or hinder those domestic policy considerations.” In other words, the domestic regulation of data, privacy, artificial intelligence, online content and the like seems to have taken precedence over unhindered international digital trade, which the United States previously strongly defended in trade agreements such as the Trans-Pacific Partnership (TPP) and the Canada-United States-Mexico Agreement (CUSMA)…

One pathway for the future sees the digital governance noodle bowl getting bigger and messier. In this scenario, international digital trade suffers. Agreements continue proliferating but remain ineffective at fostering cross-border digital trade: either they remain hortatory with attempts at cooperation on non-strategic issues, or no one pays attention to the binding provisions because business can’t keep up and governments want to retain their “policy space.” After all, why has there not yet been any dispute launched based on binding provisions in a digital trade agreement (either on its own or as part of a larger trade deal) when there has been increasing digital fragmentation?

The other pathway leads to the creation of a new international standards-setting and governance body (call it an International Digital Standards Board), like the one that exists for banking and finance. Countries that are members of such an international organization and effectively apply the commonly agreed standards become part of a single digital area where they can conduct cross-border digital trade without impediments. This is the only way to realize the G7’s “data free flow with trust” vision, originally proposed by Japan…(More)”.

New York City Takes Aim at AI


Article by Samuel Greengard: “As concerns over artificial intelligence (AI) grow and angst about its potential impact increases, political leaders and government agencies are taking notice. In November, U.S. president Joe Biden issued an executive order designed to build guardrails around the technology. Meanwhile, the European Union (EU) is currently developing a legal framework around responsible AI.

Yet, what is often overlooked about artificial intelligence is that it’s more likely to impact people on a local level. AI touches housing, transportation, healthcare, policing and numerous other areas relating to business and daily life. It increasingly affects citizens, government employees, and businesses in both obvious and unintended ways.

One city attempting to position itself at the vanguard of AI is New York. In October 2023, New York City announced a blueprint for developing, managing, and using the technology responsibly. The New York City Artificial Intelligence Action Plan—the first of its kind in the U.S.—is designed to help officials and the public navigate the AI space.

“It’s a fairly comprehensive plan that addresses both the use of AI within city government and the responsible use of the technology,” says Clifford S. Stein, Wai T. Chang Professor of Industrial Engineering and Operations Research and Interim Director of the Data Science Institute at Columbia University.

Adds Stefaan Verhulst, co-founder and chief research and development officer at The GovLab and Senior Fellow at the Center for Democracy and Technology (CDT), “AI localism focuses on the idea that cities are where most of the action is in regard to AI.”…(More)”.

Boston experimented with using generative AI for governing. It went surprisingly well


Article by Santiago Garces and Stephen Goldsmith: “…we see the possible advances of generative AI as having the most potential. For example, Boston asked OpenAI to “suggest interesting analyses” after we uploaded 311 data. In response, it suggested two things: time series analysis by case time, and a comparative analysis by neighborhood. This meant that city officials spent less time navigating the mechanics of computing an analysis and had more time to dive into the patterns of discrepancy in service. The tools make graphs, maps, and other visualizations with a simple prompt. With lower barriers to analyzing data, our city officials can formulate more hypotheses and challenge assumptions, resulting in better decisions.
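As a minimal sketch (not the city’s actual workflow), the two analyses the article mentions, case volume over time and a comparison across neighborhoods, could look roughly like the following in Python with pandas. The file name and the “open_dt” and “neighborhood” column names are assumptions for illustration.

```python
# Minimal sketch of the two analyses described above (not Boston's actual
# workflow). The file name and column names are illustrative assumptions.
import pandas as pd

cases = pd.read_csv("boston_311_sample.csv", parse_dates=["open_dt"])  # hypothetical 311 export

# 1. Time-series analysis by case time: how many cases are opened each month.
monthly_volume = (
    cases.set_index("open_dt")
         .resample("MS")      # month-start buckets
         .size()
         .rename("cases_opened")
)

# 2. Comparative analysis by neighborhood: case counts per neighborhood.
by_neighborhood = (
    cases.groupby("neighborhood")
         .size()
         .sort_values(ascending=False)
         .rename("cases_opened")
)

print(monthly_volume.head())
print(by_neighborhood.head())

# Either result can be plotted in one line (requires matplotlib), mirroring the
# "graphs, maps, and other visualizations with a simple prompt" the authors describe:
monthly_volume.plot(title="311 cases opened per month")
```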

Not all city officials have the engineering and web development experience needed to run these tests and code. But this experiment shows that other city employees, without any STEM background, could, with just a bit of training, utilize these generative AI tools to supplement their work.

To make this possible, more authority would need to be granted to frontline workers who too often have their hands tied with red tape. Therefore, we encourage government leaders to allow workers more discretion to solve problems, identify risks, and check data. This is not inconsistent with accountability; rather, supervisors can utilize these same generative AI tools to identify patterns or outliers—say, where race is inappropriately playing a part in decision-making, or where program effectiveness drops off (and why). These new tools will more quickly provide an indication as to which interventions are making a difference, or precisely where a historic barrier is continuing to harm an already marginalized community.

Civic groups will be able to hold government accountable in new ways, too. This is where the linguistic power of large language models really shines: Public employees and community leaders alike can request that tools create visual process maps, build checklists based on a description of a project, or monitor progress compliance. Imagine if people who have a deep understanding of a city—its operations, neighborhoods, history, and hopes for the future—can work toward shared goals, equipped with the most powerful tools of the digital age. Gatekeepers of formerly mysterious processes will lose their stranglehold, and expediters versed in state and local ordinances, codes, and standards will no longer be necessary to maneuver around things like zoning or permitting processes.
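As one illustration of that linguistic power, the sketch below asks a large language model to turn a short project description into a checklist. The prompt, project text, and model name are invented for this example, and the call uses the OpenAI Python SDK's chat-completions interface; it is not a description of any particular city's tooling.

```python
# Illustrative sketch: ask an LLM to build a checklist from a project
# description. The prompt, project text, and model name are assumptions;
# this does not depict any specific city's actual tooling.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

project_description = (
    "Repave Main Street between 1st and 5th Avenue, coordinate utility work, "
    "notify residents, and keep bus routes running during construction."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[
        {"role": "system",
         "content": "Turn project descriptions into a numbered checklist of "
                    "concrete steps, each with a suggested owner and a rough timeline."},
        {"role": "user", "content": project_description},
    ],
)

print(response.choices[0].message.content)
```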

Numerous challenges would remain. Public workforces would still need better data analysis skills in order to verify whether a tool is following the right steps and producing correct information. City and state officials would need technology partners in the private sector to develop and refine the necessary tools, and these relationships raise challenging questions about privacy, security, and algorithmic bias…(More)”

Science and the State 


Introduction to Special Issue by Alondra Nelson et al: “…Current events have thrown these debates into high relief. Pressing issues from the pandemic to anthropogenic climate change, and the new and old inequalities they exacerbate, have intensified calls to critique but also imagine otherwise the relationship between scientific and state authority. Many of the subjects and communities whose well-being these authorities claim to promote have resisted, doubted, and mistrusted technoscientific experts and government officials. How might our understanding of the relationship change if the perspectives and needs of those most at risk from state and/or scientific violence or neglect were to be centered? Likewise, the pandemic and climate change have reminded scientists and state officials that relations among states matter at home and in the world systems that support supply chains, fuel technology, and undergird capitalism and migration. How does our understanding of the relationship between science and the state change if we eschew the nationalist framing of the classic Mertonian formulation and instead account for states in different parts of the world, as well as trans-state relationships?

This special issue began as a yearlong seminar on Science and the State convened by Alondra Nelson and Charis Thompson at the Institute for Advanced Study in Princeton, New Jersey. During the 2020–21 academic year, seventeen scholars from four continents met on a biweekly basis to read, discuss, and interrogate historical and contemporary scholarship on the origins, transformations, and sociopolitical consequences of different configurations of science, technology, and governance. Our group consisted of scholars from different disciplines, including sociology, anthropology, philosophy, economics, history, political science, and geography. Examining technoscientific expertise and political authority while experiencing the conditions of the pandemic exerted a heightened sense of the stakes concerned and forced us to rethink easy critiques of scientific knowledge and state power. Our affective and lived experiences of the pandemic posed questions about what good science and good statecraft could be. How do we move beyond a presumption of isomorphism between “good” states and “good” science to understand and study the uneven experiences and sometimes exploitative practices of different configurations of science and the state?…(More)”.

A Blueprint for Designing Better Digital Government Services


Article by Joe Lee: “Public perceptions about government and government service delivery are at an all-time low across the United States. Government legacy systems—too often built on outdated programming languages—are struggling to hold up under the weight of increased demand, and IT modernization efforts are floundering at all levels of government. This is taking place against the backdrop of a rapidly digitizing world that places a premium on speedy, seamless, simple, and secure customer service.

Government’s “customers” typically confront a whiplash experience between accessing services from the private sector and government. If a customer doesn’t like the quality of service they get from a particular business, they can usually turn to any number of competitors; that same customer has no viable alternative to a service provided by government, regardless of the quality of that service.

When Governor Josh Shapiro took office earlier this year in Pennsylvania, the start of a new administration presented an opportunity to reexamine how the Commonwealth of Pennsylvania delivered services for residents and visitors. As veteran government technologist Jennifer Pahlka points out, government tends to be fixated on ensuring compliance with policies and procedures, frequently at the expense of the people it serves. In other words, while government services may fulfill statutory and policy requirements, the speed, seamlessness, and simplicity with which that service is ultimately delivered to the end customer is oftentimes an afterthought.

There’s a chorus of voices in the growing public interest technology movement working to shift this stubborn paradigm to proactively and persistently center people at the heart of each interaction between government and the customer. In fact, Pennsylvania is part of a growing coalition of states transforming their digital services across the country. For Pennsylvania and so many states, the road to creating truly accessible digital services involves excavating a mountain of legacy systems and policies, changing cultural and organizational paradigms, and building a movement that puts people at the center of the problem…(More)”.

Overcoming the Challenges of Using Automated Technologies for Public Health Evidence Synthesis


Article by Lucy Hocking et al: “Many organisations struggle to keep pace with public health evidence due to the volume of published literature and length of time it takes to conduct literature reviews. New technologies that help automate parts of the evidence synthesis process can help conduct reviews more quickly and efficiently to better provide up-to-date evidence for public health decision making. To date, automated approaches have seldom been used in public health due to significant barriers to their adoption. In this Perspective, we reflect on the findings of a study exploring experiences of adopting automated technologies to conduct evidence reviews within the public health sector. The study, funded by the European Centre for Disease Prevention and Control, consisted of a literature review and qualitative data collection from public health organisations and researchers in the field. We specifically focus on outlining the challenges associated with the adoption of automated approaches and potential solutions and actions that can be taken to mitigate these. We explore these in relation to actions that can be taken by tool developers (e.g. improving tool performance and transparency), public health organisations (e.g. developing staff skills, encouraging collaboration) and funding bodies/the wider research system (e.g. researchers, funding bodies, academic publishers and scholarly journals)…(More)”

What causes such maddening bottlenecks in government? ‘Kludgeocracy.’


Article by Jennifer Pahlka: “Former president Donald Trump wants to “obliterate the deep state.” As a Democrat who values government, I am chilled by the prospect. But I sometimes partly agree with him.

Certainly, Trump and I are poles apart on the nature of the problem. His “deep state” evokes a shadowy cabal that doesn’t exist. What is true, however, is that red tape and misaligned gears frequently stymie progress on even the most straightforward challenges. Ten years ago, Steven M. Teles, a political science professor at Johns Hopkins University, coined the term “kludgeocracy” to describe the problem. Since then, it has only gotten worse.

Whatever you call it, the sprawling federal bureaucracy takes care of everything from the nuclear arsenal to the social safety net to making sure our planes don’t crash. Public servants do critical work; they should be honored, not disparaged.

Yet most of them are frustrated. I’ve spoken with staffers in a dozen federal agencies this year while rolling out my book about government culture and effectiveness. I heard over and over about rigid, maximalist interpretations of rules, regulations, policies and procedures that take precedence over mission. Too often acting responsibly in government has come to mean not acting at all.

Kludgeocracy Example No. 1: Within government, designers are working to make online forms and applications easier to use. To succeed, they need to do user research, most of which is supposed to be exempt from the data-collection requirements of the Paperwork Reduction Act. Yet compliance officers insist that designers send their research plans for approval by the White House Office of Information and Regulatory Affairs (OIRA) under the act. Countless hours can go into the preparation and internal approvals of a “package” for OIRA, which then might post the plans to the Federal Register for the fun-house-mirror purpose of collecting public input on a plan to collect public input. This can result in months of delay. Meanwhile, no input happens, and no paperwork gets reduced.

Kludgeocracy Example No. 2: For critical economic and national security reasons, Congress passed a law mandating the establishment of a center for scientific research. Despite clear legislative intent, work was bogged down for months when one agency applied a statute to prohibit a certain structure for the center and another applied a different statute to require that structure. The lawyers ultimately found a solution, but it was more complex and cumbersome than anyone had hoped for. All the while, the clock was ticking.

What causes such maddening bottlenecks? The problem is mainly one of culture and incentives. It could be solved if leaders in each branch — in good faith — took the costs seriously…(More)”.

Toward Equitable Innovation in Health and Medicine: A Framework 


Report by The National Academies: “Advances in biomedical science, data science, engineering, and technology are leading to high-pace innovation with potential to transform health and medicine. These innovations simultaneously raise important ethical and social issues, including how to fairly distribute their benefits and risks. The National Academies of Sciences, Engineering, and Medicine, in collaboration with the National Academy of Medicine, established the Committee on Creating a Framework for Emerging Science, Technology, and Innovation in Health and Medicine to provide leadership and engage broad communities in developing a framework for aligning the development and use of transformative technologies with ethical and equitable principles. The committee’s resulting report describes a governance framework for decisions throughout the innovation life cycle to advance equitable innovation and support an ecosystem that is more responsive to the needs of a broader range of individuals and is better able to recognize and address inequities as they arise…(More)”.

The battle over right to repair is a fight over your car’s data


Article by Ofer Tur-Sinai: “Cars are no longer just a means of transportation. They have become rolling hubs of data communication. Modern vehicles regularly transmit information wirelessly to their manufacturers.

However, as cars grow “smarter,” the right to repair them is under siege.

As legal scholars, we find that the question of whether you and your local mechanic can tap into your car’s data to diagnose and repair it spans issues of property rights, trade secrets, cybersecurity, data privacy and consumer rights. Policymakers are forced to navigate this complex legal landscape, ideally aiming for a balanced approach that upholds the right to repair while also ensuring the safety and privacy of consumers…

Until recently, repairing a car involved connecting to its standard on-board diagnostics port to retrieve diagnostic data. The ability for independent repair shops – not just those authorized by the manufacturer – to access this information was protected by a state law in Massachusetts, approved by voters on Nov. 6, 2012, and by a nationwide memorandum of understanding between major car manufacturers and the repair industry signed on Jan. 15, 2014.

However, with the rise of telematics systems, which combine computing with telecommunications, these dynamics are shifting. Unlike the standardized onboard diagnostics ports, telematics systems vary across car manufacturers. These systems are often protected by digital locks, and circumventing these locks could be considered a violation of copyright law. The telematics systems also encrypt the diagnostic data before transmitting it to the manufacturer.

This reduces the accessibility of telematics information, potentially locking out independent repair shops and jeopardizing consumer choice – a lack of choice that can lead to increased costs for consumers….

One issue left unresolved by the legislation is the ownership of vehicle data. A vehicle generates all sorts of data as it operates, including location, diagnostics, driving behavior, and even usage patterns of in-car systems – for example, which apps you use and for how long.

In recent years, the question of data ownership has gained prominence. In 2015, Congress legislated that the data stored in event data recorders belongs to the vehicle owner. This was a significant step in acknowledging the vehicle owner’s right over specific datasets. However, the broader issue of data ownership in today’s connected cars remains unresolved…(More)”.

Democratic Policy Development using Collective Dialogues and AI


Paper by Andrew Konya, Lisa Schirch, Colin Irwin, Aviv Ovadya: “We design and test an efficient democratic process for developing policies that reflect informed public will. The process combines AI-enabled collective dialogues that make deliberation democratically viable at scale with bridging-based ranking for automated consensus discovery. A GPT4-powered pipeline translates points of consensus into representative policy clauses from which an initial policy is assembled. The initial policy is iteratively refined with the input of experts and the public before a final vote and evaluation. We test the process three times with the US public, developing policy guidelines for AI assistants related to medical advice, vaccine information, and wars & conflicts. We show the process can be run in two weeks with 1500+ participants for around $10,000, and that it generates policy guidelines with strong public support across demographic divides. We measure 75-81% support for the policy guidelines overall, and no less than 70-75% support across demographic splits spanning age, gender, religion, race, education, and political party. Overall, this work demonstrates an end-to-end proof of concept for a process we believe can help AI labs develop common-ground policies, governing bodies break political gridlock, and diplomats accelerate peace deals…(More)”.
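To make the “bridging-based ranking” step concrete, here is a toy sketch of one common way to operationalize it: score each candidate statement by its minimum approval rate across demographic groups, so that only statements with support across divides rise to the top. The grouping, threshold, and data layout are assumptions for illustration, not the authors’ exact pipeline.

```python
# Toy sketch of bridging-based ranking for consensus discovery: rank each
# candidate statement by its *minimum* approval rate across demographic
# groups, so only statements supported across divides rise to the top.
# The groups, threshold, and data layout are illustrative assumptions,
# not the authors' exact method.
from collections import defaultdict

# (statement_id, group, approved) tuples, e.g. collected in a collective dialogue.
votes = [
    ("s1", "18-34", True), ("s1", "35-54", True), ("s1", "55+", True),
    ("s2", "18-34", True), ("s2", "35-54", False), ("s2", "55+", True),
]

def bridging_scores(votes):
    """Return each statement's minimum per-group approval rate."""
    counts = defaultdict(lambda: [0, 0])  # (statement, group) -> [approvals, total]
    for statement, group, approved in votes:
        counts[(statement, group)][0] += int(approved)
        counts[(statement, group)][1] += 1

    per_statement = defaultdict(dict)
    for (statement, group), (yes, total) in counts.items():
        per_statement[statement][group] = yes / total

    # Bridging score = worst-case support across groups.
    return {s: min(rates.values()) for s, rates in per_statement.items()}

scores = bridging_scores(votes)
consensus = [s for s, score in sorted(scores.items(), key=lambda kv: -kv[1]) if score >= 0.7]
print(scores)     # {'s1': 1.0, 's2': 0.0}
print(consensus)  # statements with broad cross-group support, e.g. ['s1']
```

Points of consensus surfaced this way could then be handed to a language model to draft representative policy clauses, which is the role the GPT4-powered pipeline plays in the process the paper describes.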