Influencers: Looking beyond the consensus of the crowd


Article by Wilfred M. McClay: “Those of us who take a loving interest in words—their etymological forebears, their many layers of meaning, their often-surprising histories—have a tendency to resist change. Not that we think playfulness should be proscribed—such pedantry would be a cure worse than any disease. It’s just that we are also drawn, like doting parents, into wanting to protect the language, and thus become suspicious of mysterious strangers, of the introduction of new words, and of new meanings for familiar ones.

When we find words being used in a novel way, our countenances tend to stiffen. What’s going on here? Is this a euphemism? Is there a hidden agenda here?

But there are times when the older language seems inadequate, and in fact may mislead us into thinking that the world has not changed. New signifiers may sometimes be necessary, in order to describe new things.

Such is unquestionably the case of the new/old word influencer. At first glance, it looks harmless and insignificant, a lazy and imprecise way of designating someone as influential. But the word’s use as a noun is the key to what is different and new about it. And much as I dislike the word, and dislike the phenomenon it describes, necessity seems to have dictated that such a word be created…(More)”.

Experts in Government


Book by Donald F. Kettl: “From Caligula and the time of ancient Rome to the present, governments have relied on experts to manage public programs. But with that expertise has come power, and that power has long proven difficult to hold accountable. The tension between experts in the bureaucracy and the policy goals of elected officials, however, remains a source of often bitter conflict. President Donald Trump labeled these experts as a ‘deep state’ seeking to resist the policies he believed he was elected to pursue—and he developed a policy scheme to make it far easier to fire experts he deemed insufficiently loyal. The age-old battles between expertise and accountability have come to a sharp point, and resolving these tensions requires a fresh look at the rule of law to shape the role of experts in governance…(More)”.

Informing Decisionmakers in Real Time


Article by Robert M. Groves: “In response, the National Science Foundation (NSF) proposed the creation of a complementary group to provide decisionmakers at all levels with the best available evidence from the social sciences to inform pandemic policymaking. In May 2020, with funding from NSF and additional support from the Alfred P. Sloan Foundation and the David and Lucile Packard Foundation, NASEM established the Societal Experts Action Network (SEAN) to connect “decisionmakers grappling with difficult issues to the evidence, trends, and expert guidance that can help them lead their communities and speed their recovery.” We chose to build a network because of the widespread recognition that no one small group of social scientists would have the expertise or the bandwidth to answer all the questions facing decisionmakers. What was needed was a structure that enabled an ongoing feedback loop between researchers and decisionmakers. This structure would foster the integration of evidence, research, and advice in real time, which broke with NASEM’s traditional form of aggregating expert guidance over lengthier periods.

In its first phase, SEAN’s executive committee set about building a network that could both gather and disseminate knowledge. To start, we brought in organizations of decisionmakers—including the National Association of Counties, the National League of Cities, the International City/County Management Association, and the National Conference of State Legislatures—to solicit their questions. Then we added capacity to the network by inviting social and behavioral organizations—like the National Bureau of Economic Research, the Natural Hazards Center at the University of Colorado Boulder, the Kaiser Family Foundation, the National Opinion Research Center at the University of Chicago, The Policy Lab at Brown University, and Testing for America—to join and respond to questions and disseminate guidance. In this way, SEAN connected teams of experts with evidence and answers to leaders and communities looking for advice…(More)”.

Matchmaking Research To Policy: Introducing Britain’s Areas Of Research Interest Database


Article by Kathryn Oliver: “Areas of research interest (ARIs) were originally recommended in the 2015 Nurse Review, which argued that if government stated what it needed to know more clearly and more regularly, then it would be easier for policy-relevant research to be produced.

During our time in government, Annette Boaz and I worked to develop these areas of research interest, mobilize experts, and produce evidence syntheses and other outputs addressing them, largely in response to the COVID-19 pandemic. As readers of this blog will know, we have learned a lot about what it takes to mobilize evidence – the hard and often hidden labor of creating and sustaining relationships, being part of transient teams, managing group dynamics, and honing listening and diplomatic skills.

Some of the challenges we encountered, such as the oft-cited cultural gap between research and policy, the relevance of evidence, and the difficulty of resourcing knowledge mobilization and evidence synthesis, require systemic responses. One challenge, however, offered a simpler solution: the information gap Nurse noted between researchers and what government departments actually want to know.

Up until September 2023, departmental ARIs were published on gov.uk in PDF or HTML format. Although this was a good start, we felt that having all the ARIs in one searchable database would make them more interactive and accessible. So, working with Overton, we developed the new ARI database. The primary benefit of the database will be to raise awareness of ARIs (through email alerts about new ARIs) and to improve accessibility (by holding all ARIs in one easily searchable place)…(More)”.
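To make the database idea concrete, here is a minimal sketch of the kind of keyword search and new-ARI alerting such a resource enables. The record structure, field names, and example questions are hypothetical illustrations only, not the actual ARI database schema or the Overton API.

```python
# Hypothetical sketch of searching ARIs and flagging new ones for an email
# alert; the fields and records are invented, not the real ARI database schema.
from datetime import date

aris = [
    {"department": "DHSC", "question": "What drives regional variation in vaccine uptake?",
     "published": date(2023, 10, 2)},
    {"department": "DEFRA", "question": "How effective are peatland restoration schemes?",
     "published": date(2023, 9, 14)},
]

def search_aris(records, keyword):
    """Return ARIs whose question text mentions the keyword (case-insensitive)."""
    return [r for r in records if keyword.lower() in r["question"].lower()]

def new_since(records, cutoff):
    """Return ARIs published after a cutoff date, e.g. to populate an alert email."""
    return [r for r in records if r["published"] > cutoff]

print(search_aris(aris, "vaccine"))
print(new_since(aris, date(2023, 9, 30)))
```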

Diversity of Expertise is Key to Scientific Impact


Paper by Angelo Salatino, Simone Angioni, Francesco Osborne, Diego Reforgiato Recupero, Enrico Motta: “Understanding the relationship between the composition of a research team and the potential impact of their research papers is crucial as it can steer the development of new science policies for improving the research enterprise. Numerous studies assess how the characteristics and diversity of research teams can influence their performance across several dimensions: ethnicity, internationality, size, and others. In this paper, we explore the impact of diversity in terms of the authors’ expertise. To this purpose, we retrieved 114K papers in the field of Computer Science and analysed how the diversity of research fields within a research team relates to the number of citations their papers received in the upcoming 5 years. The results show that two different metrics we defined, reflecting the diversity of expertise, are significantly associated with the number of citations. This suggests that, at least in Computer Science, diversity of expertise is key to scientific impact…(More)”.
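The excerpt does not spell out the two diversity metrics the authors define, so the sketch below is a hedged illustration only: it uses one common option, the Shannon entropy of the research fields represented in a team, and checks its rank correlation with citation counts on invented toy data.

```python
# Sketch of one plausible expertise-diversity metric (Shannon entropy over the
# research fields in a team) and its association with citations. The metric
# choice and the toy data are assumptions, not the paper's actual method.
import math
from collections import Counter
from scipy.stats import spearmanr

papers = [
    {"fields": ["machine learning", "machine learning", "databases"], "citations": 42},
    {"fields": ["networks", "networks", "networks"],                   "citations": 7},
    {"fields": ["hci", "machine learning", "security", "databases"],   "citations": 63},
]

def field_entropy(fields):
    """Shannon entropy of the distribution of research fields among team members."""
    counts = Counter(fields)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

diversity = [field_entropy(p["fields"]) for p in papers]
citations = [p["citations"] for p in papers]
rho, pval = spearmanr(diversity, citations)
print(f"Spearman correlation between field diversity and citations: {rho:.2f}")
```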

Advising in an Imperfect World – Expert Reflexivity and the Limits of Data


Article by Justyna Bandola-Gill, Marlee Tichenor and Sotiria Grek: “Producing and making use of data and metrics in policy making have important limitations – from practical issues with missing or incomplete data to political challenges of navigating both the intended and unintended consequences of implementing monitoring and evaluation programmes. But how do experts producing quantified evidence make sense of these challenges and how do they navigate working in imperfect statistical environments? In our recent study, drawing on over 80 interviews with experts working in key International Organisations, we explored these questions by looking at the concept of expert reflexivity.

We soon discovered that experts working with data and statistics approach reflexivity not only as a thought process but also as an important strategic resource they use to work effectively – to negotiate with different actors and their agendas, build consensus and support diverse groups of stakeholders. Even more importantly, reflexivity is a complex and multifaceted process, and one that is often not discussed explicitly in expert work. We aimed to capture this diversity by categorising experts’ actions and perceptions into three types of reflexivity: epistemic, care-ful and instrumental. Experts mix and match these different modes, depending on their preferences, strategic goals or even personal characteristics.

Epistemic reflexivity concerns the quality of data and measurement and allows for reflection on how well (or how poorly) metrics represent real-life problems. Here, the experts discussed how they negotiate the necessary limits of data and metrics with an awareness of the far-reaching implications of publishing official numbers. They recognised that data and metrics do not mirror reality and critically reflected on which aspects of measured problems – such as health, poverty or education – get misrepresented in the process of measurement. Sometimes this even meant advising against measurement altogether, to avoid producing and reproducing uncertainty.

Care-ful reflexivity allows for imbuing quantified practices with values and care for the populations affected by the measurement. Experts positioned themselves as active participants in the process of solving challenges and advocating for disadvantaged groups (and did so via numbers). This type of reflexivity was also mobilised to make sense of the key challenge of expertise, one that would be familiar to anyone advocating for evidence-informed decision-making: our interviewees acknowledged that the production of numbers very rarely leads to change. The key motivator to keep going despite this was the duty of care for the populations on whose behalf the numbers spoke. Experts believed that being ‘care-ful’ required them to monitor levels of different forms of inequality, even if it was just to acknowledge the problem and expose it rather than solve it…(More)”.

The limits of expert judgment: Lessons from social science forecasting during the pandemic


Article by Cendri Hutcherson and Michael Varnum: “Imagine being a policymaker at the beginning of the COVID-19 pandemic. You have to decide which actions to recommend, how much risk to tolerate and what sacrifices to ask your citizens to bear.

Who would you turn to for an accurate prediction about how people would react? Many would recommend going to the experts — social scientists. But we are here to tell you this would be bad advice.

As psychological scientists with decades of combined experience studying decision-making, wisdom, expert judgment and societal change, we hoped social scientists’ predictions would be accurate and useful. But we also had our doubts.

Our discipline has been undergoing a crisis due to failed study replications and questionable research practices. If basic findings can’t be reproduced in controlled experiments, how confident can we be that our theories can explain complex real-world outcomes?

To find out how well social scientists could predict societal change, we ran the largest forecasting initiative in our field’s history using predictions about change in the first year of the COVID-19 pandemic as a test case….

Our findings, detailed in peer-reviewed papers in Nature Human Behaviour and in American Psychologist, paint a sobering picture. Despite the causal nature of most theories in the social sciences, and the fields’ emphasis on prediction in controlled settings, social scientists’ forecasts were generally not very good.

In both papers, we found that experts’ predictions were generally no more accurate than those made by samples of the general public. Further, their predictions were often worse than predictions generated by simple statistical models.

Our studies did still give us reasons to be optimistic. First, forecasts were more accurate when teams had specific expertise in the domain they were making predictions in. If someone was an expert in depression, for example, they were better at predicting societal trends in depression.

Second, when teams were made up of scientists from different fields working together, they tended to do better at forecasting. Finally, teams that used simpler models to generate their predictions and made use of past data generally outperformed those that didn’t.

These findings suggest that, despite the poor performance of the social scientists in our studies, there are steps scientists can take to improve their accuracy at this type of forecasting….(More)”.
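As a concrete illustration of the “simple statistical models” that made use of past data and outperformed many expert forecasts, here is a rough sketch that fits a linear trend to a few months of an invented societal indicator and extrapolates it forward; the data and the choice of a linear model are assumptions for illustration, not the study’s actual benchmark.

```python
# A deliberately simple benchmark forecast: fit a linear trend to past monthly
# values of an indicator and extrapolate. All numbers are invented.
import numpy as np

past = np.array([21.0, 22.4, 21.8, 23.1, 23.9, 24.2])  # last 6 months of some indicator
months = np.arange(len(past))

slope, intercept = np.polyfit(months, past, 1)          # ordinary least-squares trend line
horizon = np.arange(len(past), len(past) + 6)           # forecast the next 6 months
forecast = intercept + slope * horizon

print(np.round(forecast, 1))
```

Comparing expert point predictions against a baseline like this (for example, by mean absolute error) is one simple way to see whether domain judgment adds anything beyond extrapolating the recent past.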

Professional expertise in Policy Advisory Systems: How administrators and consultants built Behavioral Insights in Danish public agencies


Paper by Jakob Laage-Thomsen: “Recent work on consultants and academics in public policy has highlighted their transformational role. The paper traces how, in the absence of an explicit government strategy, external advisors establish different organizational arrangements to build Behavioral Insights in public agencies as a new form of administrative expertise. This variation shows the importance of the politico-administrative context within which external advisors exert influence. The focus on professional expertise adds to existing understandings of ideational compatibility in contemporary Policy Advisory Systems. Inspired by the Sociology of Professions, expertise is conceptualized as professionally constructed sets of diagnosis, inference, and treatment. The paper compares four Danish governmental agencies since 2010, revealing the central roles external advisors play in facilitating new policy ideas and diffusing new forms of expertise. This has implications for how we think of administrative expertise in contemporary bureaucracies, and the role of external advisors in fostering new forms of expertise….(More)”.

Institutions, Experts & the Loss of Trust


Essay by Henry E. Brady and Kay Lehman Schlozman: “Institutions are critical to our personal and societal well-being. They develop and disseminate knowledge, enforce the law, keep us healthy, shape labor relations, and uphold social and religious norms. But institutions and the people who lead them cannot fulfill their missions if they have lost legitimacy in the eyes of the people they are meant to serve.

Americans’ distrust of Congress is long-standing. What is less well-documented is how partisan polarization now aligns with the growing distrust of institutions once thought of as nonpolitical. Refusals to follow public health guidance about COVID-19, calls to defund the police, the rejection of election results, and disbelief of the press highlight the growing polarization of trust. But can these relationships be broken? And how does the polarization of trust affect institutions’ ability to confront shared problems, like climate change, epidemics, and economic collapse?…(More)”.

Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty


Paper by Nate Breznau et al: “This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to include conscious and unconscious decisions that researchers make during data analysis and that may lead to diverging results. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of research based on secondary data, we find that research teams reported widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predicted the wide variation in research outcomes. More than 90% of the total variance in numerical results remained unexplained even after accounting for research decisions identified via qualitative coding of each team’s workflow. This reveals a universe of uncertainty that is hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a new explanation for why many scientific hypotheses remain contested. It calls for greater humility and clarity in reporting scientific findings…(More)”.
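To see what “accounting for research decisions” can look like in practice, here is a rough sketch, on invented data, of regressing team-level effect estimates on coded analytic decisions and reporting how much of the variance those decisions explain; the coding scheme, effect sizes, and noise level are assumptions, not the study’s actual data or model.

```python
# Illustration of a variance-explained calculation: regress team-level effect
# estimates on coded analytic decisions and compute R^2. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_teams = 73

# Coded decisions (e.g., estimator choice, immigration measure, country sample),
# one row per team; this binary coding scheme is a hypothetical stand-in.
decisions = rng.integers(0, 2, size=(n_teams, 3)).astype(float)

# Team effect estimates dominated by idiosyncratic variation rather than decisions.
estimates = decisions @ np.array([0.05, -0.03, 0.02]) + rng.normal(0, 0.5, n_teams)

X = np.column_stack([np.ones(n_teams), decisions])     # add an intercept column
beta, *_ = np.linalg.lstsq(X, estimates, rcond=None)   # ordinary least squares fit
residuals = estimates - X @ beta
r_squared = 1 - residuals.var() / estimates.var()
print(f"Share of variance explained by coded decisions: {r_squared:.0%}")
```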