A Blueprint for Designing Better Digital Government Services


Article by Joe Lee: “Public perceptions about government and government service delivery are at an all-time low across the United States. Government legacy systems—too often built on outdated programming languages—are struggling to hold up under the weight of increased demand, and IT modernization efforts are floundering at all levels of government. This is taking place against the backdrop of a rapidly digitizing world that places a premium on speedy, seamless, simple, and secure customer service.

Government’s “customers” typically confront a whiplash experience when moving between services from the private sector and services from government. If a customer doesn’t like the quality of service they get from a particular business, they can usually turn to any number of competitors; that same customer has no viable alternative to a service provided by government, regardless of the quality of that service.

When Governor Josh Shapiro took office earlier this year in Pennsylvania, the start of a new administration presented an opportunity to reexamine how the Commonwealth of Pennsylvania delivered services to residents and visitors. As veteran government technologist Jennifer Pahlka points out, government tends to be fixated on ensuring compliance with policies and procedures, frequently at the expense of the people it serves. In other words, while government services may fulfill statutory and policy requirements, the speed, seamlessness, and simplicity with which a service is ultimately delivered to the end customer is oftentimes an afterthought.

There’s a chorus of voices in the growing public interest technology movement working to shift this stubborn paradigm and to proactively and persistently center people in each interaction between government and the customer. In fact, Pennsylvania is part of a growing coalition of states across the country transforming their digital services. For Pennsylvania and many other states, the road to creating truly accessible digital services involves excavating a mountain of legacy systems and policies, changing cultural and organizational paradigms, and building a movement that puts people at the center of the problem…(More)”.

Overcoming the Challenges of Using Automated Technologies for Public Health Evidence Synthesis


Article by Lucy Hocking et al: “Many organisations struggle to keep pace with public health evidence due to the volume of published literature and the length of time it takes to conduct literature reviews. New technologies that automate parts of the evidence synthesis process can help organisations conduct reviews more quickly and efficiently, better providing up-to-date evidence for public health decision making. To date, automated approaches have seldom been used in public health due to significant barriers to their adoption. In this Perspective, we reflect on the findings of a study exploring experiences of adopting automated technologies to conduct evidence reviews within the public health sector. The study, funded by the European Centre for Disease Prevention and Control, consisted of a literature review and qualitative data collection from public health organisations and researchers in the field. We specifically focus on outlining the challenges associated with the adoption of automated approaches, as well as potential solutions and actions that can be taken to mitigate them. We explore these in relation to actions that can be taken by tool developers (e.g. improving tool performance and transparency), public health organisations (e.g. developing staff skills, encouraging collaboration) and funding bodies/the wider research system (e.g. researchers, funding bodies, academic publishers and scholarly journals)…(More)”.

What causes such maddening bottlenecks in government? ‘Kludgeocracy.’


Article by Jennifer Pahlka: “Former president Donald Trump wants to “obliterate the deep state.” As a Democrat who values government, I am chilled by the prospect. But I sometimes partly agree with him.

Certainly, Trump and I are poles apart on the nature of the problem. His “deep state” evokes a shadowy cabal that doesn’t exist. What is true, however, is that red tape and misaligned gears frequently stymie progress on even the most straightforward challenges. Ten years ago, Steven M. Teles, a political science professor at Johns Hopkins University, coined the term “kludgeocracy” to describe the problem. Since then, it has only gotten worse.

Whatever you call it, the sprawling federal bureaucracy takes care of everything from the nuclear arsenal to the social safety net to making sure our planes don’t crash. Public servants do critical work; they should be honored, not disparaged.

Yet most of them are frustrated. I’ve spoken with staffers in a dozen federal agencies this year while rolling out my book about government culture and effectiveness. I heard over and over about rigid, maximalist interpretations of rules, regulations, policies and procedures that take precedence over mission. Too often acting responsibly in government has come to mean not acting at all.

Kludgeocracy Example No. 1: Within government, designers are working to make online forms and applications easier to use. To succeed, they need to do user research, most of which is supposed to be exempt from the data-collection requirements of the Paperwork Reduction Act. Yet compliance officers insist that designers send their research plans for approval by the White House Office of Information and Regulatory Affairs (OIRA) under the act. Countless hours can go into the preparation and internal approvals of a “package” for OIRA, which then might post the plans to the Federal Register for the fun-house-mirror purpose of collecting public input on a plan to collect public input. This can result in months of delay. Meanwhile, no input happens, and no paperwork gets reduced.

Kludgeocracy Example No. 2: For critical economic and national security reasons, Congress passed a law mandating the establishment of a center for scientific research. Despite clear legislative intent, work was bogged down for months when one agency applied a statute to prohibit a certain structure for the center and another applied a different statute to require that structure. The lawyers ultimately found a solution, but it was more complex and cumbersome than anyone had hoped for. All the while, the clock was ticking.

What causes such maddening bottlenecks? The problem is mainly one of culture and incentives. It could be solved if leaders in each branch — in good faith — took the costs seriously…(More)”.

Toward Equitable Innovation in Health and Medicine: A Framework 


Report by The National Academies: “Advances in biomedical science, data science, engineering, and technology are leading to high-pace innovation with the potential to transform health and medicine. These innovations simultaneously raise important ethical and social issues, including how to fairly distribute their benefits and risks. The National Academies of Sciences, Engineering, and Medicine, in collaboration with the National Academy of Medicine, established the Committee on Creating a Framework for Emerging Science, Technology, and Innovation in Health and Medicine to provide leadership and engage broad communities in developing a framework for aligning the development and use of transformative technologies with ethical and equitable principles. The committee’s resulting report describes a governance framework for decisions throughout the innovation life cycle to advance equitable innovation and support an ecosystem that is more responsive to the needs of a broader range of individuals and is better able to recognize and address inequities as they arise…(More)”.

The battle over right to repair is a fight over your car’s data


Article by Ofer Tur-Sinai: “Cars are no longer just a means of transportation. They have become rolling hubs of data communication. Modern vehicles regularly transmit information wirelessly to their manufacturers.

However, as cars grow “smarter,” the right to repair them is under siege.

As legal scholars, we find that the question of whether you and your local mechanic can tap into your car’s data to diagnose and repair it spans issues of property rights, trade secrets, cybersecurity, data privacy and consumer rights. Policymakers must navigate this complex legal landscape, ideally aiming for a balanced approach that upholds the right to repair while also ensuring the safety and privacy of consumers…

Until recently, repairing a car involved connecting to its standard on-board diagnostics port to retrieve diagnostic data. The ability for independent repair shops – not just those authorized by the manufacturer – to access this information was protected by a state law in Massachusetts, approved by voters on Nov. 6, 2012, and by a nationwide memorandum of understanding between major car manufacturers and the repair industry signed on Jan. 15, 2014.

However, with the rise of telematics systems, which combine computing with telecommunications, these dynamics are shifting. Unlike the standardized on-board diagnostics port, telematics systems vary across car manufacturers. These systems are often protected by digital locks, and circumventing these locks could be considered a violation of copyright law. The telematics systems also encrypt the diagnostic data before transmitting it to the manufacturer.

This reduces the accessibility of telematics information, potentially locking out independent repair shops and jeopardizing consumer choice – a lack of choice that can lead to increased costs for consumers…

One issue left unresolved by the legislation is the ownership of vehicle data. A vehicle generates all sorts of data as it operates, including location, diagnostics, driving behavior, and even usage patterns of in-car systems – for example, which apps you use and for how long.

In recent years, the question of data ownership has gained prominence. In 2015, Congress legislated that the data stored in event data recorders belongs to the vehicle owner. This was a significant step in acknowledging the vehicle owner’s right over specific datasets. However, the broader issue of data ownership in today’s connected cars remains unresolved…(More)”.

Democratic Policy Development using Collective Dialogues and AI


Paper by Andrew Konya, Lisa Schirch, Colin Irwin, Aviv Ovadya: “We design and test an efficient democratic process for developing policies that reflect informed public will. The process combines AI-enabled collective dialogues that make deliberation democratically viable at scale with bridging-based ranking for automated consensus discovery. A GPT4-powered pipeline translates points of consensus into representative policy clauses from which an initial policy is assembled. The initial policy is iteratively refined with the input of experts and the public before a final vote and evaluation. We test the process three times with the US public, developing policy guidelines for AI assistants related to medical advice, vaccine information, and wars & conflicts. We show the process can be run in two weeks with 1500+ participants for around $10,000, and that it generates policy guidelines with strong public support across demographic divides. We measure 75-81% support for the policy guidelines overall, and no less than 70-75% support across demographic splits spanning age, gender, religion, race, education, and political party. Overall, this work demonstrates an end-to-end proof of concept for a process we believe can help AI labs develop common-ground policies, governing bodies break political gridlock, and diplomats accelerate peace deals…(More)”.
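
To make the “bridging-based ranking” step more concrete, here is a minimal sketch of one common way such ranking can be done: score each candidate statement by its lowest approval rate across participant groups, so that only statements endorsed across divides surface as consensus. The vote format, group labels, and min-over-groups rule below are illustrative assumptions, not the authors’ actual pipeline.

```python
# Hedged sketch of bridging-based ranking: rank statements by their *worst*
# approval rate across demographic or opinion groups, so only statements
# endorsed by every group rise to the top. Assumed vote format:
# (statement_id, group_label, approved) tuples.
from collections import defaultdict

def bridging_rank(votes):
    """votes: iterable of (statement_id, group_label, approved: bool)."""
    approvals = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # stmt -> group -> [yes, total]
    for stmt, group, approved in votes:
        counts = approvals[stmt][group]
        counts[0] += int(approved)
        counts[1] += 1

    scores = {}
    for stmt, groups in approvals.items():
        rates = [yes / total for yes, total in groups.values() if total > 0]
        scores[stmt] = min(rates) if rates else 0.0  # bridging score = worst-group approval

    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    demo_votes = [
        ("s1", "group_a", True), ("s1", "group_b", True),
        ("s2", "group_a", True), ("s2", "group_b", False),
    ]
    print(bridging_rank(demo_votes))  # s1 bridges both groups; s2 does not
```

In a full pipeline like the one the paper describes, the top-ranked points of consensus would then be handed to the language model to be translated into candidate policy clauses.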

Assessing and Suing an Algorithm


Report by Elina Treyger, Jirka Taylor, Daniel Kim, and Maynard A. Holliday: “Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?

Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to become successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report shares the results of a survey experiment designed to contribute to researchers’ understanding of how U.S. public perceptions are evolving in these respects in one high-stakes setting: decisions related to employment and unemployment…(More)”.

Can Large Language Models Capture Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias


Paper by S. Lee et al.: “Large language models (LLMs) have demonstrated their potential in social science research by emulating human perceptions and behaviors, a concept referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs by utilizing two nationally representative climate change surveys. The LLMs were conditioned on demographics and/or psychological covariates to simulate survey responses. The findings indicate that LLMs can effectively capture presidential voting behaviors but encounter challenges in accurately representing global warming perspectives when relevant covariates are not included. GPT-4 exhibits improved performance when conditioned on both demographics and covariates. However, disparities emerge in LLM estimations of the views of certain groups, with LLMs tending to underestimate worry about global warming among Black Americans. While highlighting the potential of LLMs to aid social science research, these results underscore the importance of meticulous conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation. Further investigation into prompt engineering and algorithm auditing is essential to harness the power of LLMs while addressing their inherent limitations…(More)”.
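
As a rough illustration of the conditioning step described above, the sketch below builds a persona prompt from a respondent’s demographics and optional psychological covariates and asks an LLM to answer a survey item as that persona. The field names, prompt wording, answer scale, and the abstract `llm` callable are illustrative assumptions rather than the authors’ actual setup.

```python
# Hedged sketch of demographic conditioning ("silicon sampling"): build a persona
# prompt from a respondent's demographics and optional covariates, then ask an
# LLM to answer the survey item as that persona. Field names, wording, and the
# answer scale are assumed for illustration, not taken from the paper.
def build_persona_prompt(demographics, covariates, question):
    profile = "; ".join(f"{k}: {v}" for k, v in demographics.items())
    if covariates:
        profile += "; " + "; ".join(f"{k}: {v}" for k, v in covariates.items())
    return (
        f"You are answering a survey as a person with this profile: {profile}.\n"
        f"Question: {question}\n"
        "Answer with exactly one option: Very worried / Somewhat worried / "
        "Not very worried / Not at all worried."
    )

def simulate_response(llm, demographics, covariates, question):
    # `llm` is any callable mapping a prompt string to the model's text reply
    # (e.g., a thin wrapper around a chat-completion client); kept abstract here.
    return llm(build_persona_prompt(demographics, covariates, question))

if __name__ == "__main__":
    fake_llm = lambda prompt: "Somewhat worried"  # stand-in model for the example
    print(simulate_response(
        fake_llm,
        {"age": 52, "gender": "female", "party": "Independent", "region": "Midwest"},
        {"trust_in_science": "moderate"},
        "How worried are you about global warming?",
    ))
```

Simulated answers produced this way can then be aggregated and compared against the real survey marginals to measure algorithmic fidelity and the group-level biases the paper reports.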

Can Indigenous knowledge and Western science work together? New center bets yes


Article by Jeffrey Mervis: “For millennia, the Passamaquoddy people used their intimate understanding of the coastal waters along the Gulf of Maine to sustainably harvest the ocean’s bounty. Anthropologist Darren Ranco of the University of Maine hoped to blend their knowledge of tides, water temperatures, salinity, and more with a Western approach in a project to study the impact of coastal pollution on fish, shellfish, and beaches.

But the Passamaquoddy were never really given a seat at the table, says Ranco, a member of the Penobscot Nation, which along with the Passamaquoddy are part of the Wabanaki Confederacy of tribes in Maine and eastern Canada. The Passamaquoddy thought water quality and environmental protection should be top priority; the state emphasized forecasting models and monitoring. “There was a disconnect over who were the decision-makers, what knowledge would be used in making decisions, and what participation should look like,” Ranco says about the 3-year project, begun in 2015 and funded by the National Science Foundation (NSF).

Last month, NSF aimed to bridge such disconnects, with a 5-year, $30 million grant designed to weave together traditional ecological knowledge (TEK) and Western science. Based at the University of Massachusetts (UMass) Amherst, the Center for Braiding Indigenous Knowledges and Science (CBIKS) aims to fundamentally change the way scholars from both traditions select and carry out joint research projects and manage data…(More)”.

A Feasibility Study of Differentially Private Summary Statistics and Regression Analyses with Evaluations on Administrative and Survey Data


Report by Andrés F. Barrientos, Aaron R. Williams, Joshua Snoke, Claire McKay Bowen: “Federal administrative data, such as tax data, are invaluable for research, but because of privacy concerns, access to these data is typically limited to select agencies and a few individuals. An alternative to sharing microlevel data is to allow individuals to query statistics without directly accessing the confidential data. This paper studies the feasibility of using differentially private (DP) methods to make certain queries while preserving privacy. We also include new methodological adaptations to existing DP regression methods for using new data types and returning standard error estimates. We define feasibility in terms of the impact of DP methods on analyses used to make public policy decisions and the accuracy of queries according to several utility metrics. We evaluate the methods using Internal Revenue Service data and public-use Current Population Survey data and identify how specific data features might challenge some of these methods. Our findings show that DP methods are feasible for simple, univariate statistics but struggle to produce accurate regression estimates and confidence intervals. To the best of our knowledge, this is the first comprehensive statistical study of DP regression methodology on real, complex datasets, and the findings have significant implications for the direction of a growing research field and public policy…(More)”.
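
For readers unfamiliar with the mechanics, the sketch below shows the simplest flavor of query the report evaluates: a differentially private mean released via the Laplace mechanism. The clipping bounds, epsilon, and mechanism choice here are illustrative assumptions; the report’s actual methods, including DP regression with standard error estimates, are considerably more involved.

```python
# Hedged illustration of a basic DP query: a mean released with the Laplace
# mechanism. Values are clipped to a known range so the query has bounded
# sensitivity; noise scaled to sensitivity/epsilon is then added to the result.
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Release a differentially private mean of `values` clipped to [lower, upper]."""
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(values, dtype=float), lower, upper)
    n = len(x)
    sensitivity = (upper - lower) / n          # L1 sensitivity of the clipped mean
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return x.mean() + noise

# Example: a private mean of incomes, clipped to an assumed plausible range.
incomes = [32_000, 54_000, 71_000, 120_000, 48_000]
print(dp_mean(incomes, lower=0, upper=200_000, epsilon=1.0))
```

Even in this toy form, the trade-off the report documents is visible: smaller epsilon or wider clipping bounds mean more noise, which is tolerable for simple univariate statistics but compounds quickly in regression settings.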