How to make data open? Stop overlooking librarians


Article by Jessica Farrell: “The ‘Year of Open Science’, as declared by the US Office of Science and Technology Policy (OSTP), is now wrapping up. This followed an August 2022 memo from OSTP acting director Alondra Nelson, which mandated that data and peer-reviewed publications from federally funded research should be made freely accessible by the end of 2025. Federal agencies are required to publish full plans for the switch by the end of 2024.

But the specifics of how data will be preserved and made publicly available are far from being nailed down. I worked in archives for ten years and now facilitate two digital-archiving communities, the Software Preservation Network and BitCurator Consortium, at Educopia in Atlanta, Georgia. The expertise of people such as myself is often overlooked. More open-science projects need to integrate digital archivists and librarians, to capitalize on the tools and approaches that we have already created to make knowledge accessible and open to the public.

Making data open and ‘FAIR’ — findable, accessible, interoperable and reusable — poses technical, legal, organizational and financial questions. How can organizations best coordinate to ensure universal access to disparate data? Who will do that work? How can we ensure that the data remain open long after grant funding runs dry?

Many archivists agree that technical questions are the most solvable, given enough funding to cover the labour involved. But they are nonetheless complex. Ideally, any open research should be testable for reproducibility, but re-running scripts or procedures might not be possible unless all of the required coding libraries and environments used to analyse the data have also been preserved. Besides the contents of spreadsheets and databases, scientific-research data can include 2D or 3D images, audio, video, websites and other digital media, all in a variety of formats. Some of these might be accessible only with proprietary or outdated software…(More)”.
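
One tractable piece of that preservation problem is capturing the computational environment itself. As a minimal sketch (our illustration, not something from the article), a script like the one below records the interpreter, platform, and installed package versions alongside the analysis outputs, so a future archivist at least knows what to reconstruct; production workflows more often rely on lock files or container images.

```python
# A minimal sketch (not from the article) of one common mitigation:
# record the exact software environment alongside analysis outputs,
# so a future archivist knows what to reconstruct. Real pipelines more
# often rely on lock files or container images.
import json
import platform
import sys
from importlib import metadata

def snapshot_environment(path: str = "environment-manifest.json") -> None:
    """Write the interpreter version and all installed package versions."""
    manifest = {
        "python": sys.version,
        "platform": platform.platform(),
        "packages": {
            dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
        },
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2, sort_keys=True)

if __name__ == "__main__":
    snapshot_environment()
```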

After USTR’s Move, Global Governance of Digital Trade Is Fraught with Unknowns


Article by Patrick Leblond: “On October 25, the United States announced at the World Trade Organization (WTO) that it was dropping its support for provisions meant to promote the free flow of data across borders. Also abandoned were efforts to continue negotiations on international e-commerce and to protect source code in applications and algorithms (the so-called Joint Statement Initiative process).

According to the Office of the US Trade Representative (USTR): “In order to provide enough policy space for those debates to unfold, the United States has removed its support for proposals that might prejudice or hinder those domestic policy considerations.” In other words, the domestic regulation of data, privacy, artificial intelligence, online content and the like seems to have taken precedence over unhindered international digital trade, which the United States previously strongly defended in trade agreements such as the Trans-Pacific Partnership (TPP) and the Canada-United States-Mexico Agreement (CUSMA)…

One pathway for the future sees the digital governance noodle bowl getting bigger and messier. In this scenario, international digital trade suffers. Agreements continue proliferating but remain ineffective at fostering cross-border digital trade: either they remain hortatory with attempts at cooperation on non-strategic issues, or no one pays attention to the binding provisions because business can’t keep up and governments want to retain their “policy space.” After all, why has there not yet been any dispute launched based on binding provisions in a digital trade agreement (either on its own or as part of a larger trade deal) when there has been increasing digital fragmentation?

The other pathway leads to the creation of a new international standards-setting and governance body (call it an International Digital Standards Board), like those that exist for banking and finance. Countries that are members of such an international organization and effectively apply the commonly agreed standards become part of a single digital area where they can conduct cross-border digital trade without impediments. This is the only way to realize the G7’s “data free flow with trust” vision, originally proposed by Japan…(More)”.

Shaping the Future: Indigenous Voices Reshaping Artificial Intelligence in Latin America


Blog by Enzo Maria Le Fevre Cervini: “In a groundbreaking move toward inclusivity and respect for diversity, a comprehensive report, “Inteligencia artificial centrada en los pueblos indígenas: perspectivas desde América Latina y el Caribe” (artificial intelligence centered on Indigenous peoples: perspectives from Latin America and the Caribbean), authored by Cristina Martinez and Luz Elena Gonzalez, has been released by UNESCO, outlining the pivotal role of Indigenous perspectives in shaping the trajectory of Artificial Intelligence (AI) in Latin America. The report, a collaborative effort involving Indigenous communities, researchers, and various stakeholders, emphasizes the need for a fundamental shift in the development of AI technologies, ensuring they align with the values, needs, and priorities of Indigenous peoples.

The core theme of the report revolves around the idea that for AI to be truly respectful of human rights, it must incorporate the perspectives of Indigenous communities in Latin America, the Caribbean, and beyond. Recognizing the UNESCO Recommendation on the Ethics of Artificial Intelligence, the report highlights the urgency of developing a framework of shared responsibility among different actors, urging them to leverage their influence for the collective public interest.

While acknowledging the immense potential of AI in preserving Indigenous identities, conserving cultural heritage, and revitalizing languages, the report notes a critical gap. Many initiatives are often conceived externally, prompting a call to reevaluate these projects to ensure Indigenous leadership, development, and implementation…(More)”.

New York City Takes Aim at AI


Article by Samuel Greengard: “As concerns over artificial intelligence (AI) grow and angst about its potential impact increases, political leaders and government agencies are taking notice. In November, U.S. president Joe Biden issued an executive order designed to build guardrails around the technology. Meanwhile, the European Union (EU) is currently developing a legal framework around responsible AI.

Yet, what is often overlooked about artificial intelligence is that it’s more likely to impact people on a local level. AI touches housing, transportation, healthcare, policing and numerous other areas relating to business and daily life. It increasingly affects citizens, government employees, and businesses in both obvious and unintended ways.

One city attempting to position itself at the vanguard of AI is New York. In October 2023, New York City announced a blueprint for developing, managing, and using the technology responsibly. The New York City Artificial Intelligence Action Plan—the first of its kind in the U.S.—is designed to help officials and the public navigate the AI space.

“It’s a fairly comprehensive plan that addresses both the use of AI within city government and the responsible use of the technology,” says Clifford S. Stein, Wai T. Chang Professor of Industrial Engineering and Operations Research and Interim Director of the Data Science Institute at Columbia University.

Adds Stefaan Verhulst, co-founder and chief research and development officer at The GovLab and Senior Fellow at the Center for Democracy and Technology (CDT), “AI localism focuses on the idea that cities are where most of the action is in regard to AI.”…(More)”.

Boston experimented with using generative AI for governing. It went surprisingly well


Article by Santiago Garces and Stephen Goldsmith: “…we see the possible advances of generative AI as having the most potential. For example, Boston asked OpenAI to “suggest interesting analyses” after we uploaded 311 data. In response, it suggested two things: time series analysis by case time, and a comparative analysis by neighborhood. This meant that city officials spent less time navigating the mechanics of computing an analysis, and more time diving into the patterns of discrepancy in service. The tools make graphs, maps, and other visualizations with a simple prompt. With lower barriers to analyzing data, our city officials can formulate more hypotheses and challenge assumptions, resulting in better decisions.
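
To make the Boston example concrete, here is a hypothetical sketch of the two analyses the model suggested, written in pandas. The file name and the column names ("opened_dt", "neighborhood") are our assumptions; Boston's actual 311 export may be organized differently.

```python
# Hypothetical sketch of the two suggested analyses on 311 data.
# The CSV path and column names are assumptions for illustration.
import pandas as pd
import matplotlib.pyplot as plt

cases = pd.read_csv("boston_311.csv", parse_dates=["opened_dt"])

# 1. Time series analysis by case time: monthly case volume.
monthly = cases.set_index("opened_dt").resample("MS").size()
monthly.plot(title="311 cases per month")
plt.show()

# 2. Comparative analysis by neighborhood: total cases per neighborhood.
by_neighborhood = cases["neighborhood"].value_counts().sort_values()
by_neighborhood.plot(kind="barh", title="311 cases by neighborhood")
plt.show()
```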

Not all city officials have the engineering and web development experience needed to run these tests and code. But this experiment shows that other city employees, without any STEM background, could, with just a bit of training, utilize these generative AI tools to supplement their work.

To make this possible, more authority would need to be granted to frontline workers who too often have their hands tied with red tape. Therefore, we encourage government leaders to allow workers more discretion to solve problems, identify risks, and check data. This is not inconsistent with accountability; rather, supervisors can utilize these same generative AI tools to identify patterns or outliers—say, where race is inappropriately playing a part in decision-making, or where program effectiveness drops off (and why). These new tools will more quickly provide an indication as to which interventions are making a difference, or precisely where a historic barrier is continuing to harm an already marginalized community.

Civic groups will be able to hold government accountable in new ways, too. This is where the linguistic power of large language models really shines: Public employees and community leaders alike can request that tools create visual process maps, build checklists based on a description of a project, or monitor progress compliance. Imagine if people who have a deep understanding of a city—its operations, neighborhoods, history, and hopes for the future—can work toward shared goals, equipped with the most powerful tools of the digital age. Gatekeepers of formerly mysterious processes will lose their stranglehold, and expediters versed in state and local ordinances, codes, and standards will no longer be necessary to maneuver around things like zoning or permitting processes.
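
As an illustration of what "build checklists based on a description of a project" might look like in code (our sketch, not the authors'; the model name and project description are placeholders), a few lines against the OpenAI Python SDK suffice:

```python
# Illustrative sketch: asking a large language model to draft a checklist
# from a plain-language project description. The model name is a
# placeholder; any comparable chat model would work the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

project_description = (
    "Replace a sidewalk curb ramp at a signalized intersection, "
    "including permits, traffic control, and ADA compliance review."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You help city staff plan public-works projects."},
        {"role": "user", "content": f"Build a step-by-step checklist for: {project_description}"},
    ],
)

print(response.choices[0].message.content)
```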

Numerous challenges would remain. Public workforces would still need better data analysis skills in order to verify whether a tool is following the right steps and producing correct information. City and state officials would need technology partners in the private sector to develop and refine the necessary tools, and these relationships raise challenging questions about privacy, security, and algorithmic bias…(More)”

Science and the State 


Introduction to Special Issue by Alondra Nelson et al: “…Current events have thrown these debates into high relief. Pressing issues from the pandemic to anthropogenic climate change, and the new and old inequalities they exacerbate, have intensified calls to critique but also imagine otherwise the relationship between scientific and state authority. Many of the subjects and communities whose well-being these authorities claim to promote have resisted, doubted, and mistrusted technoscientific experts and government officials. How might our understanding of the relationship change if the perspectives and needs of those most at risk from state and/or scientific violence or neglect were to be centered? Likewise, the pandemic and climate change have reminded scientists and state officials that relations among states matter at home and in the world systems that support supply chains, fuel technology, and undergird capitalism and migration. How does our understanding of the relationship between science and the state change if we eschew the nationalist framing of the classic Mertonian formulation and instead account for states in different parts of the world, as well as trans-state relationships?

This special issue began as a yearlong seminar on Science and the State convened by Alondra Nelson and Charis Thompson at the Institute for Advanced Study in Princeton, New Jersey. During the 2020–21 academic year, seventeen scholars from four continents met on a biweekly basis to read, discuss, and interrogate historical and contemporary scholarship on the origins, transformations, and sociopolitical consequences of different configurations of science, technology, and governance. Our group consisted of scholars from different disciplines, including sociology, anthropology, philosophy, economics, history, political science, and geography. Examining technoscientific expertise and political authority while experiencing the conditions of the pandemic lent a heightened sense of the stakes involved and forced us to rethink easy critiques of scientific knowledge and state power. Our affective and lived experiences of the pandemic posed questions about what good science and good statecraft could be. How do we move beyond a presumption of isomorphism between “good” states and “good” science to understand and study the uneven experiences and sometimes exploitative practices of different configurations of science and the state?…(More)”.

A Blueprint for Designing Better Digital Government Services


Article by Joe Lee: “Public perceptions about government and government service delivery are at an all-time low across the United States. Government legacy systems, too often built on outdated programming languages, are struggling to hold up under the weight of increased demand, and IT modernization efforts are floundering at all levels of government. This is taking place against the backdrop of a rapidly digitizing world that places a premium on speedy, seamless, simple, and secure customer service.

Government’s “customers” typically confront a whiplash experience when moving between private-sector and government services. If a customer doesn’t like the quality of service they get from a particular business, they can usually turn to any number of competitors; that same customer has no viable alternative to a service provided by government, regardless of the quality of that service.

When Governor Josh Shapiro took office earlier this year in Pennsylvania, the start of a new administration presented an opportunity to reexamine how the Commonwealth of Pennsylvania delivered services for residents and visitors. As veteran government technologist Jennifer Pahlka points out, government tends to be fixated on ensuring compliance with policies and procedures, frequently at the expense of the people they serve. In other words, while government services may fulfill statutory and policy requirements, the speed, seamlessness, and simplicity with which that service is ultimately delivered to the end customer is oftentimes an afterthought.

There’s a chorus of voices in the growing public interest technology movement working to shift this stubborn paradigm to proactively and persistently center people at the heart of each interaction between government and the customer. In fact, Pennsylvania is part of a growing coalition of states transforming their digital services across the country. For Pennsylvania and so many states, the road to creating truly accessible digital services involves excavating a mountain of legacy systems and policies, changing cultural and organizational paradigms, and building a movement that puts people at the center of the problem…(More)”.

Overcoming the Challenges of Using Automated Technologies for Public Health Evidence Synthesis


Article by Lucy Hocking et al: “Many organisations struggle to keep pace with public health evidence due to the volume of published literature and length of time it takes to conduct literature reviews. New technologies that help automate parts of the evidence synthesis process can help conduct reviews more quickly and efficiently to better provide up-to-date evidence for public health decision making. To date, automated approaches have seldom been used in public health due to significant barriers to their adoption. In this Perspective, we reflect on the findings of a study exploring experiences of adopting automated technologies to conduct evidence reviews within the public health sector. The study, funded by the European Centre for Disease Prevention and Control, consisted of a literature review and qualitative data collection from public health organisations and researchers in the field. We specifically focus on outlining the challenges associated with the adoption of automated approaches and potential solutions and actions that can be taken to mitigate these. We explore these in relation to actions that can be taken by tool developers (e.g. improving tool performance and transparency), public health organisations (e.g. developing staff skills, encouraging collaboration) and funding bodies/the wider research system (e.g. researchers, funding bodies, academic publishers and scholarly journals)…(More)”
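
For readers unfamiliar with what automating "parts of the evidence synthesis process" involves, one common building block is a relevance classifier that ranks unscreened titles and abstracts so reviewers see the most promising records first. The sketch below is a hypothetical minimal example of that idea, not a tool from the study:

```python
# Minimal, hypothetical sketch of one automated step in evidence
# synthesis: ranking unscreened abstracts by predicted relevance,
# trained on a small set of already-screened examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Already-screened abstracts: 1 = include, 0 = exclude.
screened_texts = [
    "RCT of school-based handwashing programme on infection rates",
    "Cohort study of air quality and asthma admissions in children",
    "Editorial on hospital parking fees",
    "Letter to the editor on conference travel",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(screened_texts, labels)

# Rank new, unscreened abstracts so reviewers see likely includes first.
unscreened = [
    "Cluster trial of community vaccination outreach",
    "Obituary for a former department chair",
]
scores = model.predict_proba(unscreened)[:, 1]
for text, score in sorted(zip(unscreened, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```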

What causes such maddening bottlenecks in government? ‘Kludgeocracy.’


Article by Jennifer Pahlka: “Former president Donald Trump wants to “obliterate the deep state.” As a Democrat who values government, I am chilled by the prospect. But I sometimes partly agree with him.

Certainly, Trump and I are poles apart on the nature of the problem. His “deep state” evokes a shadowy cabal that doesn’t exist. What is true, however, is that red tape and misaligned gears frequently stymie progress on even the most straightforward challenges. Ten years ago, Steven M. Teles, a political science professor at Johns Hopkins University, coined the term “kludgeocracy” to describe the problem. Since then, it has only gotten worse.

Whatever you call it, the sprawling federal bureaucracy takes care of everything from the nuclear arsenal to the social safety net to making sure our planes don’t crash. Public servants do critical work; they should be honored, not disparaged.

Yet most of them are frustrated. I’ve spoken with staffers in a dozen federal agencies this year while rolling out my book about government culture and effectiveness. I heard over and over about rigid, maximalist interpretations of rules, regulations, policies and procedures that take precedence over mission. Too often, acting responsibly in government has come to mean not acting at all.

Kludgeocracy Example No. 1: Within government, designers are working to make online forms and applications easier to use. To succeed, they need to do user research, most of which is supposed to be exempt from the data-collection requirements of the Paperwork Reduction Act. Yet compliance officers insist that designers send their research plans for approval by the White House Office of Information and Regulatory Affairs (OIRA) under the act. Countless hours can go into the preparation and internal approvals of a “package” for OIRA, which then might post the plans to the Federal Register for the fun-house-mirror purpose of collecting public input on a plan to collect public input. This can result in months of delay. Meanwhile, no input happens, and no paperwork gets reduced.

Kludgeocracy Example No. 2: For critical economic and national security reasons, Congress passed a law mandating the establishment of a center for scientific research. Despite clear legislative intent, work was bogged down for months when one agency applied a statute to prohibit a certain structure for the center and another applied a different statute to require that structure. The lawyers ultimately found a solution, but it was more complex and cumbersome than anyone had hoped for. All the while, the clock was ticking.

What causes such maddening bottlenecks? The problem is mainly one of culture and incentives. It could be solved if leaders in each branch — in good faith — took the costs seriously…(More)”.

Toward Equitable Innovation in Health and Medicine: A Framework 


Report by The National Academies: “Advances in biomedical science, data science, engineering, and technology are leading to high-pace innovation with potential to transform health and medicine. These innovations simultaneously raise important ethical and social issues, including how to fairly distribute their benefits and risks. The National Academies of Sciences, Engineering, and Medicine, in collaboration with the National Academy of Medicine, established the Committee on Creating a Framework for Emerging Science, Technology, and Innovation in Health and Medicine to provide leadership and engage broad communities in developing a framework for aligning the development and use of transformative technologies with ethical and equitable principles. The committee’s resulting report describes a governance framework for decisions throughout the innovation life cycle to advance equitable innovation and support an ecosystem that is more responsive to the needs of a broader range of individuals and is better able to recognize and address inequities as they arise…(More)”.