Reimagining the Request for Proposal


Article by Devon Davey, Heather Hiscox & Nicole Markwick: “In recent years, the social sector and the communities it serves have called for deep structural change to address our most serious social injustices. Yet one of the basic tools we use to fund change, the request for proposal (RFP), has remained largely unchanged. We believe that RFPs must become part of the larger call for systemic reform….

At first glance, the RFP process may seem neutral or fair. Yet RFPs are often designed by individuals in high-level positions without meaningful input from community members and frontline staff—those who are most familiar with social injustices and who often hold the least institutional power. What’s more, those who both issue and respond to RFPs often rely on their social capital to find and collaborate on RFP opportunities. Since social networks are highly homogeneous, RFP participation is limited to the professionals who have social connections to the issuer, resulting in a more limited pool of applicants.

This selection process is further compounded by the human propensity to hire people who look the same and who reflect similar ways of thinking. Social sector decision makers and power holders tend to be—among other identities—white. This lack of diversity, furthered by historical oppression, has ensured that white privilege and ways of working have come to dominate within the philanthropic and nonprofit sectors. This concentration of power and lack of diverse perspectives and experiences shaping RFPs results in projects failing to respond to the needs of communities and, in many cases, projects that directly perpetuate racism, colonialism, misogyny, ableism, sexism, and other forms of systemic and individual oppression.

The rigid structure of RFPs plays an important role in many of the negative outcomes of projects. Effective social change work is emergent, is iterative, and centers trust by nature. By contrast, RFPs frequently apply inflexible work scopes, limited timelines and budgets, and unproven solutions that are developed within the blinders of institutional power. Too often, funders force programs into implementation because they want to see results according to a specified plan. This rigidity can produce initiatives that are ineffective and removed from community needs. As consultant Joyce Lee-Ibarra says, “[RFPs] feel fundamentally transactional, when the work I want to do is relational.”…(More)”.

Imagining Governance for Emerging Technologies


Essay by Debra J.H. Mathews, Rachel Fabi and Anaeze C. Offodile: “…How should such technologies be regulated and governed? It is increasingly clear that past governance structures and strategies are not up to the task. What these technologies require is a new governance approach that accounts for their interdisciplinary impacts and potential for both good and ill at both the individual and societal level. 

To help lay the groundwork for a novel governance framework that will enable policymakers to better understand these technologies’ cross-sectoral footprint and anticipate and address the social, legal, ethical, and governance issues they raise, our team worked under the auspices of the National Academy of Medicine’s Committee on Emerging Science, Technology, and Innovation in Health and Medicine (CESTI) to develop an analytical approach to technology impacts and governance. The approach is grounded in detailed case studies—including the vignettes about Robyn and Liam—which have informed the development of a set of guiding principles (see sidebar).

Based on careful analysis of past governance, these case studies also contain a plausible vision of what might happen in the future. They illuminate ethical issues and help reveal governance tools and choices that could be crucial to delivering social benefits and reducing or avoiding harms. We believe that the approach taken by the committee will be widely applicable to considering the governance of emerging health technologies. Our methodology and process, as we describe here, may also be useful to a range of stakeholders involved in governance issues like these…(More)”.

Prediction machines, insurance, and protection: An alternative perspective on AI’s role in production


Paper by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb: “Recent advances in AI represent improvements in prediction. We examine how decisionmaking and risk management strategies change when prediction improves. The adoption of AI may cause substitution away from risk management activities used when rules are applied (rules require always taking the same action), instead allowing for decisionmaking (choosing actions based on the predicted state). We provide a formal model evaluating the impact of AI and how risk management, stakes, and interrelated tasks affect AI adoption. The broad conclusion is that AI adoption can be stymied by existing processes designed to address uncertainty. In particular, many processes are designed to enable coordinated decisionmaking among different actors in an organization. AI can make coordination even more challenging. However, when the cost of changing such processes falls, then the returns from AI adoption increase….(More)”.
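The excerpt stops short of the paper’s formal model, but its core trade-off — a fixed rule versus decisions contingent on an imperfect prediction — can be sketched numerically. Everything below (the payoff structure, state probability, and accuracy values) is invented for illustration and is not taken from the paper:

```python
# Toy sketch (not the paper's actual model): compare the expected payoff of
# always applying a fixed rule against acting on an imperfect AI prediction.
# Assumed payoffs: the "right" action for the realized state earns 1, the
# wrong action earns 0; the risky state occurs with probability q.

def rule_payoff(q: float) -> float:
    """A rule always takes the same action (e.g., always carry an umbrella).
    The best fixed action earns the payoff of the more likely state."""
    return max(q, 1 - q)

def decision_payoff(q: float, accuracy: float) -> float:
    """Decision-making: act on a prediction that identifies the true state
    with probability `accuracy`, so the right action is taken that often."""
    return accuracy

if __name__ == "__main__":
    q = 0.3  # probability of the risky state
    for acc in (0.6, 0.7, 0.8, 0.95):
        better = ("AI decision" if decision_payoff(q, acc) > rule_payoff(q)
                  else "fixed rule")
        print(f"accuracy={acc:.2f}: rule={rule_payoff(q):.2f}, "
              f"decision={acc:.2f} -> prefer {better}")
```

The sketch shows the paper’s broad conclusion in miniature: when prediction accuracy is low, the fixed rule (and the risk-management processes built around it) remains preferable, and AI adoption only pays once accuracy clears the rule’s baseline.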

Against Progress: Intellectual Property and Fundamental Values in the Internet Age


Book by Jessica Silbey: “When first written into the Constitution, intellectual property aimed to facilitate “progress of science and the useful arts” by granting rights to authors and inventors. Today, when rapid technological evolution accompanies growing wealth inequality and political and social divisiveness, the constitutional goal of “progress” may pertain to more basic, human values, redirecting IP’s emphasis to the commonweal instead of private interests. Against Progress considers contemporary debates about intellectual property law as concerning the relationship between the constitutional mandate of progress and fundamental values, such as equality, privacy, and distributive justice, that are increasingly challenged in today’s internet age. Following a legal analysis of various intellectual property court cases, Jessica Silbey examines the experiences of everyday creators and innovators navigating ownership, sharing, and sustainability within the internet eco-system and current IP laws. Crucially, the book encourages refiguring the substance of “progress” and the function of intellectual property in terms that demonstrate the urgency of art and science to social justice today…(More)”.

Forecasting hospital-level COVID-19 admissions using real-time mobility data


Paper by Brennan Klein et al: “For each of the COVID-19 pandemic waves, hospitals have had to plan for deploying surge capacity and resources to manage large but transient increases in COVID-19 admissions. While a lot of effort has gone into predicting regional trends in COVID-19 cases and hospitalizations, there are far fewer successful tools for creating accurate hospital-level forecasts. At the same time, anonymized phone-collected mobility data proved to correlate well with the number of cases for the first two waves of the pandemic (spring 2020, and fall-winter 2021). In this work, we show how mobility data could bolster hospital-specific COVID-19 admission forecasts for five hospitals in Massachusetts during the initial COVID-19 surge. The high predictive capability of the model was achieved by combining anonymized, aggregated mobile device data about users’ contact patterns, commuting volume, and mobility range with COVID hospitalizations and test-positivity data. We conclude that mobility-informed forecasting models can increase the lead-time of accurate predictions for individual hospitals, giving managers valuable time to strategize how best to allocate resources to manage forthcoming surges…(More)”.
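The paper’s actual forecasting model is not given in the excerpt; purely as a rough illustration of the general idea — regressing admissions on a lagged mobility signal to gain lead time — here is a minimal one-feature least-squares sketch with invented data:

```python
# Illustrative sketch (not the paper's model): forecast hospital admissions
# `lag` days ahead from a lagged mobility signal via closed-form ordinary
# least squares. All data values below are made up for demonstration.

def fit_line(xs, ys):
    """Closed-form OLS for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def forecast_admissions(mobility, admissions, lag=7):
    """Fit admissions[t] against mobility[t - lag], then predict the next
    `lag` days of admissions from the most recent mobility readings."""
    a, b = fit_line(mobility[:-lag], admissions[lag:])
    return [a * m + b for m in mobility[-lag:]]

if __name__ == "__main__":
    mobility = [1.0, 1.1, 1.3, 1.6, 2.0, 2.5, 3.1,
                3.8, 4.6, 5.5, 6.5, 7.6, 8.8, 10.1]
    admissions = [2, 2, 3, 3, 4, 5, 6, 8, 10, 12, 15, 18, 22, 26]
    print(forecast_admissions(mobility, admissions, lag=7))
```

The lag is what buys hospital managers lead time: today’s mobility reading informs an admissions estimate a week out, before the surge arrives.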

The linguistics search engine that overturned the federal mask mandate


Article by Nicole Wetsman: “The COVID-19 pandemic was still raging when a federal judge in Florida made the fateful decision to type “sanitation” into the search bar of the Corpus of Historical American English.

Many parts of the country had already dropped mask requirements, but a federal mask mandate on planes and other public transportation was still in place. A lawsuit challenging the mandate had come before Judge Kathryn Mizelle, a former clerk for Justice Clarence Thomas. The Biden administration said the mandate was valid, based on a law that authorizes the Centers for Disease Control and Prevention (CDC) to introduce rules around “sanitation” to prevent the spread of disease.

Mizelle took a textualist approach to the question — looking specifically at the meaning of the words in the law. But along with consulting dictionaries, she consulted a database of language, called a corpus, built by a Brigham Young University linguistics professor for other linguists. Pulling every example of the word “sanitation” from 1930 to 1944, she concluded that “sanitation” was used to describe actively making something clean — not as a way to keep something clean. So, she decided, masks aren’t actually “sanitation.”
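The judge ran her query through COHA’s web interface; purely to make the mechanics concrete, here is a toy sketch of that kind of concordance search — every attestation of a word within a date range — over an invented mini-corpus (the sentences below are made up, not COHA entries):

```python
import re

# Toy sketch of a historical-corpus concordance query: pull every attestation
# of a word between two years, inclusive. This is the raw material a
# textualist would then classify by sense (e.g., "making clean" vs.
# "keeping clean"). The corpus entries here are invented.

def concordance(corpus, word, start_year, end_year):
    """Return (year, sentence) pairs attesting `word` in the date range."""
    pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
    return [(year, text) for year, text in corpus
            if start_year <= year <= end_year and pattern.search(text)]

if __name__ == "__main__":
    corpus = [
        (1931, "The city invested in sanitation to clean its streets."),
        (1940, "Sanitation crews removed the refuse each morning."),
        (1951, "Modern sanitation kept the water supply clean."),
    ]
    for year, sentence in concordance(corpus, "sanitation", 1930, 1944):
        print(year, sentence)
```

Note what the query does and does not do: it retrieves attestations, but deciding which sense each one reflects — the step Mizelle’s ruling turned on — remains a human judgment call.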

The mask mandate was overturned, one of the final steps in the defanging of public health authorities, even as infectious disease ran rampant…

Using corpora to answer legal questions, a strategy often referred to as legal corpus linguistics, has grown increasingly popular in some legal circles within the past decade. It’s been used by judges on the Michigan Supreme Court and the Utah Supreme Court, and, this past March, was referenced by the US Supreme Court during oral arguments for the first time.

“It’s been growing rapidly since 2018,” says Kevin Tobia, a professor at Georgetown Law. “And it’s only going to continue to grow.”…(More)”.

Americans’ Views of Government: Decades of Distrust, Enduring Support for Its Role


Pew Research: “Americans remain deeply distrustful of and dissatisfied with their government. Just 20% say they trust the government in Washington to do the right thing just about always or most of the time – a sentiment that has changed very little since former President George W. Bush’s second term in office.

Chart shows low public trust in federal government has persisted for nearly two decades

The public’s criticisms of the federal government are many and varied. Some are familiar: Just 6% say the phrase “careful with taxpayer money” describes the federal government extremely or very well; another 21% say this describes the government somewhat well. A comparably small share (only 8%) describes the government as being responsive to the needs of ordinary Americans.

The federal government gets mixed ratings for its handling of specific issues. Evaluations are highly positive in some respects, including for responding to natural disasters (70% say the government does a good job of this) and keeping the country safe from terrorism (68%). However, only about a quarter of Americans say the government has done a good job managing the immigration system and helping people get out of poverty (24% each). And the share giving the government a positive rating for strengthening the economy has declined 17 percentage points since 2020, from 54% to 37%.

Yet Americans’ unhappiness with government has long coexisted with their continued support for government having a substantial role in many realms. And when asked how much the federal government does to address the concerns of various groups in the United States, there is a widespread belief that it does too little on issues affecting many of the groups asked about, including middle-income people (69%), those with lower incomes (66%) and retired people (65%)…(More)”.

Aligning Artificial Intelligence with Humans through Public Policy


Paper by John Nay and James Daily: “Given that Artificial Intelligence (AI) increasingly permeates our lives, it is critical that we systematically align AI objectives with the goals and values of humans. The human-AI alignment problem stems from the impracticality of explicitly specifying the rewards that AI models should receive for all the actions they could take in all relevant states of the world. One possible solution, then, is to leverage the capabilities of AI models to learn those rewards implicitly from a rich source of data describing human values in a wide range of contexts. The democratic policy-making process produces just such data by developing specific rules, flexible standards, interpretable guidelines, and generalizable precedents that synthesize citizens’ preferences over potential actions taken in many states of the world. Therefore, computationally encoding public policies to make them legible to AI systems should be an important part of a socio-technical approach to the broader human-AI alignment puzzle. Legal scholars are exploring AI, but most research has focused on how AI systems fit within existing law, rather than how AI may understand the law. This Essay outlines research on AI that learn structures in policy data that can be leveraged for downstream tasks. As a demonstration of the ability of AI to comprehend policy, we provide a case study of an AI system that predicts the relevance of proposed legislation to a given publicly traded company and its likely effect on that company. We believe this represents the “comprehension” phase of AI and policy, but leveraging policy as a key source of human values to align AI requires “understanding” policy. We outline what we believe will be required to move toward that, and two example research projects in that direction. Solving the alignment problem is crucial to ensuring that AI is beneficial both individually (to the person or group deploying the AI) and socially. 
As AI systems are given increasing responsibility in high-stakes contexts, integrating democratically-determined policy into those systems could align their behavior with human goals in a way that is responsive to a constantly evolving society…(More)”.
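The authors’ relevance-prediction system is not described in detail in the excerpt; as a hypothetical stand-in for the “comprehension” step, here is a bag-of-words cosine-similarity sketch that scores a bill’s text against a company description (all text, names, and scores below are invented, and the real system is a trained model rather than word overlap):

```python
import math
from collections import Counter

# Hypothetical sketch: score the relevance of a bill's text to a company
# profile using bag-of-words cosine similarity. A minimal stand-in to make
# the prediction task concrete, not the authors' actual method.

def bow(text):
    """Bag-of-words term counts, case-folded."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def relevance(bill_text, company_profile):
    return cosine(bow(bill_text), bow(company_profile))

if __name__ == "__main__":
    bill = "A bill to regulate emissions from fossil fuel power plants"
    energy_co = "operates coal and gas power plants generating electricity"
    retail_co = "sells apparel and footwear through retail stores"
    print(relevance(bill, energy_co), relevance(bill, retail_co))
```

Even this crude measure ranks the energy company as more exposed to the hypothetical bill than the retailer — the kind of output the Essay’s “comprehension” phase produces before the harder “understanding” work begins.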

In this small Va. town, citizens review police like Uber drivers


Article by Emily Davies: “Chris Ford stepped on the gas in his police cruiser and rolled down Gold Cup Drive to catch the SUV pushing 30 mph in a 15 mph zone. Eleven hours and 37 minutes into his shift, the corporal was ready for his first traffic stop of the day.

“Look at him being sneaky,” Ford said, his blue lights flashing on a quiet road in this small town where a busy day could mean animals escaped from a local slaughterhouse.

Ford parked, walked toward the SUV and greeted the man who had ignored the speed limit at exactly the wrong time.

“I was doing 15,” said the driver, a Black man in a mostly White neighborhood of a mostly White town.

The officer took his license and registration back to the cruiser.

“Every time I pull over someone of color, they’re standoffish with me. Like, ‘Here’s a White police officer, here we go again.’ ” Ford, 56, said. “So I just try to be nice.”

Ford knew the stop would be scrutinized — and not just by the reporter who was allowed to ride along on his shift.

After every significant encounter with residents, officers in Warrenton are required to hand out a QR code, which is on the back of their business card, asking for feedback on the interaction. Through a series of questions, citizens can use a star-based system to rate officers on their communication, listening skills and fairness. The responses are anonymous and can be completed any time after the interaction to encourage people to give honest assessments. The program, called Guardian Score, is supposed to give power to those stopped by police in a relationship that has historically felt one-sided — and to give police departments a tool to evaluate their force on more than arrests and tickets.

“If we started to measure how officers are treating community members, we realized we could actually infuse this into the overall evaluation process of individual officers,” said Burke Brownfeld, a founder of Guardian Score and a former police officer in Alexandria. “The definition of doing a good job could change. It would also include: How are your listening skills? How fairly are you treating people based on their perception?”…(More)”.
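Guardian Score’s internals aren’t described beyond the three star dimensions; a hypothetical sketch of how such ratings might be rolled up per officer for an evaluation report (field names and data are invented, not the company’s schema):

```python
from collections import defaultdict

# Hypothetical sketch: average anonymous star ratings per officer across the
# three dimensions named in the article (communication, listening, fairness).
# Response format and identifiers are invented for illustration.

DIMENSIONS = ("communication", "listening", "fairness")

def summarize(responses):
    """responses: list of dicts like
    {"officer": "A12", "communication": 5, "listening": 4, "fairness": 5}.
    Returns per-officer mean scores for each dimension."""
    scores = defaultdict(lambda: defaultdict(list))
    for r in responses:
        for dim in DIMENSIONS:
            scores[r["officer"]][dim].append(r[dim])
    return {officer: {dim: sum(v) / len(v) for dim, v in dims.items()}
            for officer, dims in scores.items()}

if __name__ == "__main__":
    sample = [
        {"officer": "A12", "communication": 5, "listening": 4, "fairness": 5},
        {"officer": "A12", "communication": 4, "listening": 4, "fairness": 4},
    ]
    print(summarize(sample))
```

Averages like these are what would sit alongside arrests and tickets in the broadened evaluation the founder describes.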

How harmful is social media?


Gideon Lewis-Kraus in The New Yorker: “In April, the social psychologist Jonathan Haidt published an essay in The Atlantic in which he sought to explain, as the piece’s title had it, “Why the Past 10 Years of American Life Have Been Uniquely Stupid.” Anyone familiar with Haidt’s work in the past half decade could have anticipated his answer: social media. Although Haidt concedes that political polarization and factional enmity long predate the rise of the platforms, and that there are plenty of other factors involved, he believes that the tools of virality—Facebook’s Like and Share buttons, Twitter’s Retweet function—have algorithmically and irrevocably corroded public life. He has determined that a great historical discontinuity can be dated with some precision to the period between 2010 and 2014, when these features became widely available on phones….

After Haidt’s piece was published, the Google Doc—“Social Media and Political Dysfunction: A Collaborative Review”—was made available to the public. Comments piled up, and a new section was added, at the end, to include a miscellany of Twitter threads and Substack essays that appeared in response to Haidt’s interpretation of the evidence. Some colleagues and kibbitzers agreed with Haidt. But others, though they might have shared his basic intuition that something in our experience of social media was amiss, drew upon the same data set to reach less definitive conclusions, or even mildly contradictory ones. Even after the initial flurry of responses to Haidt’s article disappeared into social-media memory, the document, insofar as it captured the state of the social-media debate, remained a lively artifact.

Near the end of the collaborative project’s introduction, the authors warn, “We caution readers not to simply add up the number of studies on each side and declare one side the winner.” The document runs to more than a hundred and fifty pages, and for each question there are affirmative and dissenting studies, as well as some that indicate mixed results. According to one paper, “Political expressions on social media and the online forum were found to (a) reinforce the expressers’ partisan thought process and (b) harden their pre-existing political preferences,” but, according to another, which used data collected during the 2016 election, “Over the course of the campaign, we found media use and attitudes remained relatively stable. Our results also showed that Facebook news use was related to modest over-time spiral of depolarization. Furthermore, we found that people who use Facebook for news were more likely to view both pro- and counter-attitudinal news in each wave. Our results indicated that counter-attitudinal exposure increased over time, which resulted in depolarization.” If results like these seem incompatible, a perplexed reader is given recourse to a study that says, “Our findings indicate that political polarization on social media cannot be conceptualized as a unified phenomenon, as there are significant cross-platform differences.”…(More)”.