Examining the Intersection of Behavioral Science and Advocacy


Introduction to Special Collection of the Behavioral Scientist by Cintia Hinojosa and Evan Nesterak: “Over the past year, everyone’s lives have been touched by issues that intersect science and advocacy—the pandemic, climate change, police violence, voting, protests, the list goes on. 

These issues compel us, as a society and individuals, toward understanding. We collect new data, design experiments, test our theories. They also inspire us to examine our personal beliefs and values, our roles and responsibilities as individuals within society. 

Perhaps no one feels these forces more than social and behavioral scientists. As members of fields dedicated to the study of social and behavioral phenomena, they are in the unique position of understanding these issues from a scientific perspective, while also navigating their inevitable personal impact. This dynamic brings up questions about the role of scientists in a changing world. To what extent should they engage in advocacy or activism on social and political issues? Should they be impartial investigators, active advocates, something in between? 

It also raises other questions: does taking a public stance on an issue affect scientific integrity? How should scientists interact with those setting policies? What happens when the lines between an evidence-based stance and a political position become blurred? What should scientists do when science itself becomes a partisan issue? 

To learn more about how social and behavioral scientists are navigating this terrain, we put out a call inviting them to share their ideas, observations, personal reflections, and the questions they’re grappling with. We gave them 100-250 words to share what was on their mind. Not easy for such a complex and consequential topic.

The responses, collected and curated below, revealed a number of themes, which we’ve organized into two parts….(More)”.

Mass, Computer-Generated, and Fraudulent Comments


Report by Steven J. Balla et al: “This report explores three forms of commenting in federal rulemaking that have been enabled by technological advances: mass, fraudulent, and computer-generated comments. Mass comments arise when an agency receives a much larger number of comments in a rulemaking than it typically would (e.g., thousands when the agency typically receives a few dozen). The report focuses on a particular type of mass comment response, which it terms a “mass comment campaign,” in which organizations orchestrate the submission of large numbers of identical or nearly identical comments. Fraudulent comments, which we refer to as “malattributed comments” as discussed below, are comments falsely attributed to persons by whom they were not, in fact, submitted. Computer-generated comments are generated not by humans, but rather by software algorithms. Although software is the product of human actions, algorithms obviate the need for humans to generate the content of comments and submit comments to agencies.

This report examines the legal, practical, and technical issues associated with processing and responding to mass, fraudulent, and computer-generated comments. There are cross-cutting issues that apply to each of these three types of comments. First, the nature of such comments may make it difficult for agencies to extract useful information. Second, there is a suite of risks related to harming public perceptions about the legitimacy of particular rules and the rulemaking process overall. Third, technology-enabled comments present agencies with resource challenges.

The report also considers issues that are unique to each type of comment. With respect to mass comments, it addresses the challenges associated with receiving large numbers of comments and, in particular, batches of comments that are identical or nearly identical. It looks at how agencies can use technologies to help process comments received and at how agencies can most effectively communicate with public commenters to ensure that they understand the purpose of the notice-and-comment process and the particular considerations unique to processing mass comment responses. Fraudulent, or malattributed, comments raise legal issues both in criminal and Administrative Procedure Act (APA) domains. They also have the potential to mislead an agency and pose harms to individuals. Computer-generated comments may raise legal issues in light of the APA’s stipulation that “interested persons” are granted the opportunity to comment on proposed rules. Practically, it can be difficult for agencies to distinguish computer-generated comments from traditional comments (i.e., those submitted by humans without the use of software algorithms).

While technology creates challenges, it also offers opportunities to help regulatory officials gather public input and draw greater insights from that input. The report summarizes several innovative forms of public participation that leverage technology to supplement the notice-and-comment rulemaking process.

The report closes with a set of recommendations for agencies to address the challenges and opportunities associated with new technologies that bear on the rulemaking process. These recommendations cover steps that agencies can take with respect to technology, coordination, and docket management….(More)”.

Sandwich Strategy


Article by the Accountability Research Center: “The “sandwich strategy” describes an interactive process in which reformers in government encourage citizen action from below, driving virtuous circles of mutual empowerment between pro-accountability actors in both state and society.

The sandwich strategy relies on mutually reinforcing interaction between pro-reform actors in both state and society, not just initiatives from one or the other arena. The hypothesis is that when reformers in government tangibly reduce the risks/costs of collective action, that process can bolster state-society pro-reform coalitions that collaborate for change. While this process makes intuitive sense, it can follow diverse pathways and encounter many roadblocks. The dynamics, strengths, and limitations of sandwich strategies have not been documented and analyzed systematically. The figure below shows a possible pathway of convergence and conflict between actors for and against change in both state and society….(More)”.

[Figure: sandwich strategy]

Be Skeptical of Thought Leaders


Book Review by Evan Selinger: “Corporations regularly advertise their commitment to “ethics.” They often profess to behave better than the law requires and sometimes may even claim to make the world a better place. Google, for example, trumpets its commitment to “responsibly” developing artificial intelligence and swears it follows lofty AI principles that include being “socially beneficial” and “accountable to people,” and that “avoid creating or reinforcing unfair bias.”

Google’s recent treatment of Timnit Gebru, the former co-leader of its ethical AI team, tells another story. After Gebru went through an antagonistic internal review process for a co-authored paper that explores social and environmental risks and expressed concern over justice issues within Google, the company didn’t congratulate her for a job well done. Instead, she and vocally supportive colleague Margaret Mitchell (the other co-leader) were “forced out.” Google’s behavior “perhaps irreversibly damaged” the company’s reputation. It was hard not to conclude that corporate values misalign with the public good.

Even as tech companies continue to display hypocrisy, there might still be good reasons to have high hopes for their behavior in the future. Suppose corporations can do better than ethics washing, virtue signaling, and making incremental improvements that don’t challenge aggressive plans for financial growth. If so, society desperately needs to know what it takes to bring about dramatic change. On paper, Susan Liautaud is the right person to turn to for help. She has impressive academic credentials (a PhD in Social Policy from the London School of Economics and a JD from Columbia University Law School), founded and manages an ethics consulting firm with an international reach, and teaches ethics courses at Stanford University.

In The Power of Ethics: How to Make Good Choices in a Complicated World, Liautaud pursues a laudable goal: democratize the essential practical steps for making responsible decisions in a confusing and complex world. While the book is pleasantly accessible, it has glaring faults. With so much high-quality critical journalistic coverage of technologies and tech companies, we should expect more from long-form analysis.

Although ethics is more widely associated with dour finger-waving than aspirational world-building, Liautaud mostly crafts an upbeat and hopeful narrative, albeit not so cheerful that she denies the obvious pervasiveness of shortsighted mistakes and blatant misconduct. The problem is that she insists ethical values and technological development pair nicely. Big Tech might be exerting increasing control over our lives, exhibiting an oversized influence on public welfare through incursions into politics, education, social communication, space travel, national defense, policing, and currency — but this doesn’t in the least quell her enthusiasm, which remains elevated enough throughout her book to affirm the power of the people. Hyperbolically, she declares, “No matter where you stand […] you have the opportunity to prevent the monopolization of ethics by rogue actors, corporate giants, and even well-intentioned scientists and innovators.”…(More)“.

Living in Data: A Citizen’s Guide to a Better Information Future


Book by Jer Thorp: “To live in data in the twenty-first century is to be incessantly extracted from, classified and categorized, statisti-fied, sold, and surveilled. Data—our data—is mined and processed for profit, power, and political gain. In Living in Data, Thorp asks a crucial question of our time: How do we stop passively inhabiting data, and instead become active citizens of it?

Threading a data story through hippo attacks, glaciers, and school gymnasiums, around colossal rice piles, and over active minefields, Living in Data reminds us that the future of data is still wide open, that there are ways to transcend facts and figures and to find more visceral ways to engage with data, that there are always new stories to be told about how data can be used.

Punctuated with Thorp’s original and informative illustrations, Living in Data not only redefines what data is, but reimagines who gets to speak its language and how to use its power to create a more just and democratic future. Timely and inspiring, Living in Data gives us a much-needed path forward….(More)”.

Living Labs for Public Sector Innovation: An Integrative Literature Review


Paper by Lars Fuglsang, Anne Vorre Hansen, Ines Mergel, and Maria Taivalsaari Røhnebæk: “The public administration literature and adjacent fields have devoted increasing attention to living labs as environments and structures enabling the co-creation of public sector innovation. However, living labs remain a somewhat elusive concept and phenomenon, and there is a lack of understanding of its versatile nature. To gain a deeper understanding of the multiple dimensions of living labs, this article provides a review assessing how the environments, methods, and outcomes of living labs are addressed in the extant research literature. The findings are drawn together in a model synthesizing how living labs link to public sector innovation, followed by an outline of knowledge gaps and future research avenues….(More)”.

A fair data economy is built upon collaboration


Report by Heli Parikka, Tiina Härkönen and Jaana Sinipuro: “For a human-driven and fair data economy to work, it must be based on three important and interconnected aspects: regulation based on ethical values; technology; and new kinds of business models. With a human-driven approach, individual and social interests determine the business conditions and data is used to benefit individuals and society.

When developing a fair data economy, the aim has been to use existing technologies, operating models and concepts across the boundaries between different sectors. The goal is to enable not only new data-based business but also easier digital everyday life that is based on the more efficient and personal management of data. The human-driven approach is closely linked to the MyData concept.

At the beginning of the IHAN project, there were very few easy-to-use, individually tailored digital services. For example, the most significant data-based consumer services were designed on the basis of the needs of large corporations. To create demand, prevailing mindsets had to be changed and decision-makers needed to be encouraged to change direction, companies had to find new business with new business models and individuals had to be persuaded to demand change.

The terms and frameworks of the platform and data economies needed further clarification for the development of a fair data economy. We sought out examples from other sectors and found that, in addition to “human-driven”, another defining concept that emerged was “fair”, with fairness defined as a key goal in the IHAN project. A fair model also takes financial aspects into account and recognises the significance of companies and new services as a source of well-being.

Why did Sitra want to tackle this challenge to begin with? What had thus far been available to people was an unfair data economy model, which needed to be changed. The data economy direction had been defined by a handful of global companies, whose business models are based on collecting and managing data on their own platforms and on their own terms. There was a need to develop an alternative, a European data economy model.

One of the tasks of the future fund is to foresee future trends, the fair and human-driven use of data being one of them. The objective was to approach the theme in a pluralistic manner from the perspectives of different participants in society. Sitra’s unique position as an independent future fund made it possible to launch the project.

A fair data economy has become one of Sitra’s strategic spearheads and a new theme is being prepared at the time of the writing of this publication. The lessons learned and tools created so far will be moved under that theme and developed further, making them available to everyone who needs them….(More)“.

Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda


Paper by Anneke Zuiderwijk, Yu-Che Chen and Fadi Salem: “To lay the foundation for the special issue that this research article introduces, we 1) present a systematic review of existing literature on the implications of the use of Artificial Intelligence (AI) in public governance and 2) develop a research agenda. First, an assessment based on 26 articles on this topic reveals much exploratory, conceptual, qualitative, and practice-driven research in studies reflecting the increasing complexities of using AI in government – and the resulting implications, opportunities, and risks thereof for public governance. Second, based on both the literature review and the analysis of articles included in this special issue, we propose a research agenda comprising eight process-related recommendations and seven content-related recommendations. Process-wise, future research on the implications of the use of AI for public governance should move towards more public sector-focused, empirical, multidisciplinary, and explanatory research while focusing more on specific forms of AI rather than AI in general. Content-wise, our research agenda calls for the development of solid, multidisciplinary, theoretical foundations for the use of AI for public governance, as well as investigations of effective implementation, engagement, and communication plans for government strategies on AI use in the public sector. Finally, the research agenda calls for research into managing the risks of AI use in the public sector, governance modes possible for AI use in the public sector, performance and impact measurement of AI use in government, and impact evaluation of scaling-up AI usage in the public sector….(More)”.

We Need to Reimagine the Modern Think Tank


Article by Emma Vadehra: “We are in the midst of a great realignment in policymaking. After an era-defining pandemic, which itself served as backdrop to a generations-in-the-making reckoning on racial injustice, the era of policy incrementalism is giving way to broad, grassroots demands for structural change. But elected officials are not the only ones who need to evolve. As the broader policy ecosystem adjusts to a post-2020 world, think tanks that aim to provide the intellectual backbone to policy movements—through research, data analysis, and evidence-based recommendations—need to change their approach as well.

Think tanks may be slower to adapt because of long-standing biases around what qualifies someone to be a policy “expert.” Traditionally, think tanks assess qualifications based on educational attainment and advanced degrees, which has often meant prioritizing academic credentials over lived or professional experience on the ground. These hiring preferences alone leave many people out of the debates that shape their lives: if think tanks expect a master’s degree for mid-level and senior research and policy positions, their pool of candidates will be limited to the 4 percent of Latinos and 7 percent of Black people with those degrees (lower than the rates among white people (10.5 percent) or Asian/Pacific Islanders (17 percent)). And in specific fields like Economics, from which many think tanks draw their experts, just 0.5 percent of doctoral degrees go to Black women each year.

Think tanks alone cannot change the larger cultural and societal forces that have historically limited access to certain fields. But they can change their own practices: namely, they can change how they assess expertise and who they recruit and cultivate as policy experts. In doing so, they can push the broader policy sector—including government and philanthropic donors—to do the same. Because while the next generation marches in the streets and runs for office, the public policy sector is not doing enough to diversify and support who develops, researches, enacts, and implements policy. And excluding impacted communities from the decision-making table makes our democracy less inclusive, responsive, and effective.

Two years ago, my colleagues and I at The Century Foundation, a 100-year-old think tank that has weathered many paradigm shifts in policymaking, launched an organization, Next100, to experiment with a new model for think tanks. Our mission was simple: policy by those with the most at stake, for those with the most at stake. We believed that proximity to the communities that policy looks to serve will make policy stronger, and we put muscle and resources behind the theory that those with lived experience are as much policy experts as anyone with a PhD from an Ivy League university. The pandemic and heightened calls for racial justice in the last year have only strengthened our belief in the need to thoughtfully democratize policy development. While it’s common understanding now that COVID-19 has surfaced and exacerbated profound historical inequities, not enough has been done to question why those inequities exist, or why they run so deep. How we make policy—and who makes it—is a big reason why….(More)”

Mapping European Attitudes towards Technological Change and its Governance


European Tech Insights 2021 by Oscar Jonsson and Carlos Luca de Tena: “…is composed of two studies: Part I focuses on how the pandemic has altered our habits and perceptions with regards to healthcare, work, social networks and the urban space. Part II reveals how Europeans are embracing technologies (from AI to automation) and what are the implications for our democracies and societies.

One year on from the outbreak of Covid-19, the findings of European Tech Insights 2021 reveal that the pandemic has accelerated the acceptance of technologies among Europeans but also increased awareness of the downsides of technological development….

Democracy in the Digital Age

Not only are citizens changing their attitudes and becoming more willing to use new technologies; they are also supportive of democracy going digital.

– A vast majority of Europeans (72%) would like to be able to vote in elections through their smartphone, while only 17% would oppose it. Strongest support is found in Poland (80%), Estonia (79%), Italy (78%) and Spain (73%).

– 51% of Europeans support reducing the number of national parliamentarians and giving those seats to an algorithm. Over 60% of Europeans aged 25-34 and 56% of those aged 34-44 are excited about this idea.

Embracing Technology

The research found growing support towards increased adoption of AI and new uses of technology:

– One third of Europeans would prefer that AI algorithms decide their social welfare payments or approve their visa for working in a foreign country, rather than a human civil servant.

– A majority of Europeans support the use of facial technology for verifying the identity of citizens if that makes their lives more convenient. Increased support is seen in Italy (56%), Sweden (47%), and the Netherlands (45%).

– More than a third of Europeans would prefer to have a package delivered to them by a robot rather than a human…..(More)”.