Unmet Desire


Essay: "I vividly remember March 2020, the month the United States shut down as COVID-19 spread uncontrollably and upended daily life. At the time, I worked at Cornell University in upstate New York. As we adjusted to a new normal, my Cornell colleague Elizabeth Day and I suspected that local leaders were facing unprecedented policy challenges that were not making the major headlines.

We decided to reach out to county policymakers throughout upstate New York, inviting them to share challenges they were facing. We offered to discuss research that might prove helpful. Responses soon poured in.

One county executive was trying to figure out how to provide childcare for first responders. Childcare centers were ordered closed, but first responders could not stay home to watch their kids. The executive needed systematic research on other options. A second local policymaker watched as her county’s offices shuttered and work moved online; she needed research on how other local leaders had used mobile vans to provide necessary services to rural residents without internet. Another county official sought to design a high-quality survey to elicit frank responses from municipal leaders about COVID-related challenges. In this case, she needed to discuss the fundamentals of survey design and implementation with an expert.

These responses led us to engage in an informal collaboration with each of these policymakers. By informal collaboration, I mean a collaborative exchange in which people with diverse forms of knowledge, expertise, and lived experience share what they know with the goal of developing an expanded understanding of a problem—yet still remain autonomous decisionmakers. In these cases, we as researchers brought knowledge about policy analysis and survey fundamentals, and the policymakers brought detailed knowledge about their present needs, local context, and historical challenges. All this diverse information was crucial to chart a way forward that was informed by evidence.

Yet it turns out our interactions were highly unusual. During our conversations, all the policymakers revealed that researchers from colleges and universities in their immediate area had never reached out in this way, and that they had no regular communication with local researchers.

This disconnect is a problem. Local policymakers are responsible for almost $2 trillion of spending annually, and they oversee many areas in which technical knowledge is essential, such as promoting economic development, building and maintaining roads, educating children, policing, fighting fires, determining acceptable land use, and providing public transportation…(More)”.

The Accountable Bureaucrat


Paper by Anya Bernstein and Cristina Rodriguez: “Common wisdom has it that, without close supervision by an elected official, administrative agencies are left unaccountable to the people they regulate. For both proponents and detractors of the administrative state, agency accountability thus hangs on the concentrated power of the President. This Article presents a different vision. Drawing on in-depth interviews with officials from numerous agencies, we argue that everyday administrative practices themselves support accountability—an accountability of a kind that elections alone cannot achieve. The electoral story focuses on the aspect of accountability that kicks in as a sanction after decisions have already been made. We propose instead that the ongoing justification of policy positions to multiple audiences empowered to evaluate and challenge them forms the heart of accountability in a republican democracy. The continual process of reason-giving, testing, and adaptation instantiates the values that make accountability normatively attractive: deliberation, inclusivity, and responsiveness.

Our interviews reveal three primary features of the administrative state that support such accountability. First, political appointees and career civil servants, often presented as conflictual, actually enact complementary decisionmaking modalities. Appointees do not impose direct presidential control but imbue agencies with a diffuse, differentiated sense of abstract political values. Civil servants use expertise and experience to set the parameters within which decisions can be made. The process of moving these differing but interdependent approaches toward a decision promotes deliberation. Second, agencies work through a networked spiderweb of decisionmaking that involves continual justification and negotiation among numerous groups. This claim stands in stark contrast to the strict hierarchy often attributed to government bureaucracy: we show how the principal-agent model, frequently used to analyze agencies, obscures more than it reveals. The dispersion of decisionmaking power, we claim, promotes pluralistic inclusivity and provides more support for ongoing accountability than a concentration in presidential hands would. Finally, many two-way avenues connect agencies to the people and situations they regulate. Those required by law, like notice-and-comment rulemaking, supplement numerous other interaction formats that agencies create. These multiple avenues support agency responsiveness to the views of affected publics and the realities of the regulated world….(More)”.

Opening Up to Open Science


Essay by Chelle Gentemann, Christopher Erdmann and Caitlin Kroeger: “The modern Hippocratic Oath outlines ethical standards that physicians worldwide swear to uphold. “I will respect the hard-won scientific gains of those physicians in whose steps I walk,” one of its tenets reads, “and gladly share such knowledge as is mine with those who are to follow.”

But what form, exactly, should knowledge-sharing take? In the practice of modern science, knowledge in most scientific disciplines is generally shared through peer-reviewed publications at the end of a project. Although publication is both expected and incentivized—it plays a key role in career advancement, for example—many scientists do not take the extra step of sharing data, detailed methods, or code, making it more difficult for others to replicate, verify, and build on their results. Even beyond that, professional science today is full of personal and institutional incentives to hold information closely to retain a competitive advantage.

This way of sharing science has some benefits: peer review, for example, helps to ensure (even if it never guarantees) scientific integrity and prevent inadvertent misuse of data or code. But the status quo also comes with clear costs: it creates barriers (in the form of publication paywalls), slows the pace of innovation, and limits the impact of research. Fast science is increasingly necessary, and with good reason. Technology has not only improved the speed at which science is carried out, but many of the problems scientists study, from climate change to COVID-19, demand urgency. Whether modeling the behavior of wildfires or developing a vaccine, the need for scientists to work together and share knowledge has never been greater. In this environment, the rapid dissemination of knowledge is critical; closed, siloed knowledge slows progress to a degree society cannot afford. Imagine the consequences today if, as in the 2003 SARS disease outbreak, the task of sequencing genomes still took months and tools for labs to share the results openly online didn’t exist. Today’s challenges require scientists to adapt and better recognize, facilitate, and reward collaboration.

Open science is a path toward a collaborative culture that, enabled by a range of technologies, empowers the open sharing of data, information, and knowledge within the scientific community and the wider public to accelerate scientific research and understanding. Yet despite its benefits, open science has not been widely embraced…(More)”

Beyond ‘X Number Served’


Essay by Mona Mourshed: “Metrics matter, but they should always be plural. Focus on the speedometer, ignore the gas gauge, and you’re sure to stop short of your destination. But while the plague of metric monomania can occasionally be an issue in business, it’s an even bigger problem within the social sector. After all, market discipline forces business leaders to weigh tradeoffs between costs and sales, or between product quality and service level speed. Multiple metrics help executives get the balance right, even as they scale.

By contrast, nonprofits too often receive (well-intended) guidance from stakeholders like funders and board members to disproportionately zero in on a single goal: serving the maximum number of beneficiaries. That’s a perfectly understandable impulse, of course. But it confuses scale with just one impact dimension, reach. “We have to recognize that a higher number does not necessarily indicate transformation,” says Lisha McCormick, CEO of Last Mile Health, which supports countries in building strong community health systems. “Higher reach alone does not equate to impact.”

This is a problem because excessively defining and valuing programs by the number of people they serve can give rise to unintended consequences. Nonprofit leaders can find themselves discussing how to serve more people through “lighter touch” models or debating ambiguous metrics like “reached” or “touched” to expand participant numbers (while fighting uneasiness about the potential adverse implications for program quality)…(More)”.

Guns, Privacy, and Crime


Paper by Alessandro Acquisti & Catherine Tucker: "Open government holds the promise of a more efficient but also more accountable and transparent government. It is not clear, however, how transparent information about citizens and their interactions with government affects the welfare of those citizens, and if so, in what direction. We investigate this question using a natural experiment: the online publication of the names and addresses of handgun carry permit holders, and its effect on criminals’ propensity to commit burglaries. In December 2008, a Memphis, TN newspaper published a searchable online database of names, zip codes, and ages of Tennessee handgun carry permit holders. We use detailed crime and handgun carry permit data for the city of Memphis to estimate the impact of publicity about the database on burglaries. We find that burglaries increased in zip codes with fewer gun permits, and decreased in those with more gun permits, after the database was publicized….(More)"
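
To make the comparison described in the abstract concrete, below is a minimal difference-in-differences sketch in Python. The zip-code figures are invented for illustration, and the simple high-versus-low permit split is one reading of the design, not the authors' actual data or estimation code.

```python
# A minimal, invented-numbers sketch of the difference-in-differences
# comparison suggested by the abstract: how did burglary counts change,
# after the database was publicized, in zip codes with many permit
# holders versus zip codes with few?
import numpy as np

# Columns: permits per capita, burglaries before publication, burglaries after.
zips = np.array([
    [0.02, 40, 48],   # low-permit zip codes: burglaries rise
    [0.03, 35, 41],
    [0.12, 30, 27],   # high-permit zip codes: burglaries fall
    [0.15, 25, 21],
])

high_permit = zips[:, 0] >= np.median(zips[:, 0])
change = zips[:, 2] - zips[:, 1]

did = change[high_permit].mean() - change[~high_permit].mean()
print(f"Difference-in-differences estimate: {did:+.1f} burglaries")
# A negative estimate matches the reported pattern: relative to low-permit
# areas, burglaries fell in high-permit areas after publication.
```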

AI & Society


Special Issue of Daedalus edited by James Manyika: “AI is transforming our relationships with technology and with others, our senses of self, as well as our approaches to health care, banking, democracy, and the courts. But while AI in its many forms has become ubiquitous and its benefits to society and the individual have grown, its impacts are varied. Concerns about its unintended effects and misuses have become paramount in conversations about the successful integration of AI in society. This volume explores the many facets of artificial intelligence: its technology, its potential futures, its effects on labor and the economy, its relationship with inequalities, its role in law and governance, its challenges to national security, and what it says about us as humans…(More)” See also https://aiethicscourse.org/

Governance of the Inconceivable


Essay by Lisa Margonelli: “How do scientists and policymakers work together to design governance for technologies that come with evolving and unknown risks? In the Winter 1985 Issues, seven experts reflected on the possibility of a large nuclear conflict triggering a “nuclear winter.” These experts agreed that the consequences would be horrifying: even beyond radiation effects, for example, burning cities could put enough smoke in the atmosphere to block sunlight, lowering ground temperatures and threatening people, crops, and other living things. In the same issue, former astronaut and then senator John Glenn wrote about the prospects for several nuclear nonproliferation agreements he was involved in negotiating. This broad discussion of nuclear weapons governance in Issues—involving legislators Glenn and then senator Al Gore as well as scientists, Department of Defense officials, and weapons designers—reflected the discourse of the time. In the culture at large, fears of nuclear annihilation became ubiquitous, and today you can easily find danceable playlists containing “38 Essential ’80s Songs About Nuclear Anxiety.”

But with the end of the Cold War, the breakup of the Soviet Union, and the rapid growth of a globalized economy and culture, these conversations receded from public consciousness. Issues has not run an article on nuclear weapons since 2010, when an essay argued that exaggerated fear of nuclear weapons had led to poor policy decisions. “Albert Einstein memorably proclaimed that nuclear weapons ‘have changed everything except our way of thinking,’” wrote political scientist John Mueller. “But the weapons actually seem to have changed little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money.”

All these old conversations suddenly became relevant again as our editorial team worked on this issue. On February 27, when Vladimir Putin ordered Russia’s nuclear weapons put on “high alert” after invading Ukraine, United Nations Secretary-General Antonio Guterres declared that “the mere idea of a nuclear conflict is simply inconceivable.” But, in the space of a day, what had long seemed inconceivable was suddenly being very actively conceived….(More)”.

How Smart Tech Tried to Solve the Mental Health Crisis and Only Made It Worse


Article by Emma Bedor Hiland: “Crisis Text Line was supposed to be the exception. Skyrocketing rates of depression, anxiety, and mental distress over the last decade demanded new, innovative solutions. The non-profit organization was founded in 2013 with the mission of providing free mental health text messaging services and crisis intervention tools. It seemed like the right moment to use technology to make the world a better place. Over the following years, the accolades and praise the platform received reflected its success. But its sterling reputation was tarnished overnight at the beginning of 2022, when Politico published an investigation into the way Crisis Text Line had handled and shared user data. The problem with the organization, however, goes well beyond its alleged mishandling of user information.

Despite Crisis Text Line’s assurance that its platform was anonymous, Politico’s January report showed that the company’s private messaging sessions were not actually anonymous. Data about users, including what they shared with Crisis Text Line’s volunteers, had been provided and sold to an entirely different company called Loris.ai, a tech startup that specializes in artificial intelligence software for human resources and customer service. The report brought to light a troubling relationship between the two organizations. Both had previously been headed by the same CEO, Nancy Lublin. In 2019, however, Lublin had stepped down from Loris, and in 2020 Crisis Text Line’s board ousted her following allegations that she had engaged in workplace racism.

But the troubles that enveloped Crisis Text Line can’t be blamed on one bad apple. Crisis Text Line’s board of directors had approved the relationship between the entities. In the technology and big data sectors, commodification of user data is fundamental to a platform or toolset’s economic survival, and by sharing data with Loris.ai, Crisis Text Line was able to provide needed services. The harsh reality revealed by the Politico report was that even mental healthcare is not immune from commodification, despite the risks of aggregating and sharing information about experiences and topics which continue to be stigmatized.

In the case of the Crisis Text Line-Loris.ai partnership, Loris used the nonprofit’s data to improve its own, for-profit development of machine learning algorithms sold to corporations and governments. Although Crisis Text Line maintains that all of the data shared with Loris was anonymized, the transactional nature of the relationship between the two was still fundamentally an economic one. As the Loris.ai website states, “Crisis Text Line is a Loris shareholder. Our success offers material benefit to CTL, helping this non-profit organization continue its important work. We believe this model is a blueprint for ways for-profit companies can infuse social good into their culture and operations, and for nonprofits to prosper.”…(More)”.

A.I. Is Mastering Language. Should We Trust What It Says?


Steven Johnson at the New York Times: “You are sitting in a comfortable chair by the fire, on a cold winter’s night. Perhaps you have a mug of tea in hand, perhaps something stronger. You open a magazine to an article you’ve been meaning to read. The title suggested a story about a promising — but also potentially dangerous — new technology on the cusp of becoming mainstream, and after reading only a few sentences, you find yourself pulled into the story. A revolution is coming in machine intelligence, the author argues, and we need, as a society, to get better at anticipating its consequences. But then the strangest thing happens: You notice that the writer has, seemingly deliberately, omitted the very last word of the first .

The missing word jumps into your consciousness almost unbidden: “the very last word of the first paragraph.” There’s no sense of an internal search query in your mind; the word “paragraph” just pops out. It might seem like second nature, this filling-in-the-blank exercise, but doing it makes you think of the embedded layers of knowledge behind the thought. You need a command of the spelling and syntactic patterns of English; you need to understand not just the dictionary definitions of words but also the ways they relate to one another; you have to be familiar enough with the high standards of magazine publishing to assume that the missing word is not just a typo, and that editors are generally loath to omit key words in published pieces unless the author is trying to be clever — perhaps trying to use the missing word to make a point about your cleverness, how swiftly a human speaker of English can conjure just the right word.

Before you can pursue that idea further, you’re back into the article, where you find the author has taken you to a building complex in suburban Iowa. Inside one of the buildings lies a wonder of modern technology: 285,000 CPU cores yoked together into one giant supercomputer, powered by solar arrays and cooled by industrial fans. The machines never sleep: Every second of every day, they churn through innumerable calculations, using state-of-the-art techniques in machine intelligence that go by names like “stochastic gradient descent” and “convolutional neural networks.” The whole system is believed to be one of the most powerful supercomputers on the planet.

And what, you may ask, is this computational dynamo doing with all these prodigious resources? Mostly, it is playing a kind of game, over and over again, billions of times a second. And the game is called: Guess what the missing word is.…(More)”.
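
For readers who want to see the missing-word game in miniature, here is a deliberately tiny Python sketch: a frequency table that guesses the next word from counted word pairs. The corpus and function names are invented for illustration; the system described in the article is a neural network trained at vastly larger scale with techniques like stochastic gradient descent, not a lookup table.

```python
# A toy version of the "guess the missing word" game described above:
# a bigram table that predicts the next word from counted word pairs.
# Illustrative sketch only; the supercomputer in the article trains a
# neural network with stochastic gradient descent, not a frequency table.
from collections import Counter, defaultdict

corpus = (
    "you notice that the writer has omitted the very last word of the "
    "first paragraph the missing word jumps into your consciousness almost unbidden"
).split()

# Count how often each word follows each preceding word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def guess_missing_word(prev_word: str) -> str:
    """Return the word most frequently observed after prev_word."""
    counts = next_word_counts.get(prev_word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(guess_missing_word("first"))    # prints "paragraph"
print(guess_missing_word("missing"))  # prints "word"
```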

Police surveillance and facial recognition: Why data privacy is an imperative for communities of color


Paper by Nicol Turner Lee and Caitlin Chin: “Governments and private companies have a long history of collecting data from civilians, often justifying the resulting loss of privacy in the name of national security, economic stability, or other societal benefits. But it is important to note that these trade-offs do not affect all individuals equally. In fact, surveillance and data collection have disproportionately affected communities of color under both past and current circumstances and political regimes.

From the historical surveillance of civil rights leaders by the Federal Bureau of Investigation (FBI) to the current misuse of facial recognition technologies, surveillance patterns often reflect existing societal biases and build upon harmful and vicious cycles. Facial recognition and other surveillance technologies also enable more precise discrimination, especially as law enforcement agencies continue to make misinformed predictive decisions around arrest and detainment that disproportionately impact marginalized populations.

In this paper, we present the case for stronger federal privacy protections with proscriptive guardrails for the public and private sectors to mitigate the high risks that are associated with the development and procurement of surveillance technologies. We also discuss the role of federal agencies in addressing the purposes and uses of facial recognition and other monitoring tools under their jurisdiction, as well as increased training for state and local law enforcement agencies to prevent the unfair or inaccurate profiling of people of color. We conclude the paper with a series of proposals that lean either toward clear restrictions on the use of surveillance technologies in certain contexts, or greater accountability and oversight mechanisms, including audits, policy interventions, and more inclusive technical designs….(More)”