Invest 5% of research funds in ensuring data are reusable


Barend Mons at Nature: “It is irresponsible to support research but not data stewardship…

Many of the world’s hardest problems can be tackled only with data-intensive, computer-assisted research. And I’d speculate that the vast majority of research data are never published. Huge sums of taxpayer funds go to waste because such data cannot be reused. Policies for data reuse are falling into place, but fixing the situation will require more resources than the scientific community has so far been willing to commit.

In 2013, I was part of a group of Dutch experts from many disciplines that called on our national science funder to support data stewardship. Seven years later, policies that I helped to draft are starting to be put into practice. These require data created by machines and humans to meet the FAIR principles (that is, they are findable, accessible, interoperable and reusable). I now direct an international Global Open FAIR office tasked with helping communities to implement the guidelines, and I am convinced that doing so will require a large cadre of professionals, about one for every 20 researchers.

Even when data are shared, the metadata, expertise, technologies and infrastructure necessary for reuse are lacking. Most published data sets are scattered into ‘supplemental files’ that are often impossible for machines or even humans to find. These and other sloppy data practices keep researchers from building on each other’s work. In cases of disease outbreaks, for instance, this might even cost lives….(More)”.

Facial Recognition Software requires Checks and Balances


David Eaves and Naeha Rashid in Policy Options: “A few weeks ago, members of the Nexus traveller identification program were notified that Canadian Border Services is upgrading its automated system, from iris scanners to facial recognition technology. This is meant to simplify identification and increase efficiency without compromising security. But it also raises profound questions concerning how we discuss and develop public policies around such technology – questions that may not be receiving sufficiently open debate in the rush toward promised greater security.

Analogous to the U.S. Customs and Border Protection (CBP) program Global Entry, Nexus is a joint Canada-US border control system designed for low-risk, pre-approved travellers. Nexus does provide a public good, and there are valid reasons to improve surveillance at airports. Even before 9/11, border surveillance was an accepted annoyance and since then, checkpoint operations have become more vigilant and complex in response to the public demand for safety.

Nexus is one of the first North American government-sponsored services to adopt facial recognition, and as such it could be a pilot program that other services will follow. Left unchecked, the technology will likely become ubiquitous at North American border crossings within the next decade, and it will probably be adopted by governments to solve domestic policy challenges.

Facial recognition software is imperfect and has documented bias, but it will continue to improve and become superior to humans in identifying individuals. Given this, questions arise such as, what policies guide the use of this technology? What policies should inform future government use? In our headlong rush toward enhanced security, we risk replicating the justifications used by the private sector in an attempt to balance effectiveness, efficiency and privacy.

One key question involves citizens’ capacity to consent. Previously, Nexus members submitted to fingerprint and retinal scans – biometric markers that are relatively unique and enable government to verify identity at the border. Facial recognition technology uses visual data and seeks, analyzes, and stores identifying facial information in a database, which is then used to compare with new images and video….(More)”.
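In outline, such a system reduces each enrolled face to a numeric template (an embedding) and matches new captures against the stored templates. Below is a purely illustrative sketch of that matching step, with made-up four-dimensional embeddings and a hypothetical similarity threshold; production systems use learned deep-network embeddings with hundreds of dimensions.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_face(probe, database, threshold=0.9):
    """Return the enrolled identity whose stored template is most similar
    to the probe capture, or None if no similarity clears the threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Hypothetical enrolment database of face templates.
enrolled = {
    "traveller_A": [0.9, 0.1, 0.3, 0.2],
    "traveller_B": [0.1, 0.8, 0.4, 0.3],
}
probe = [0.88, 0.12, 0.31, 0.19]  # a new camera capture, close to traveller_A
print(match_face(probe, enrolled))  # → traveller_A
```

The threshold is the policy-relevant knob: set it low and false matches rise; set it high and legitimate travellers are rejected, and error rates can differ across demographic groups.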

Mapping Wikipedia


Michael Mandiberg at The Atlantic: “Wikipedia matters. In a time of extreme political polarization, algorithmically enforced filter bubbles, and fact patterns dismissed as fake news, Wikipedia has become one of the few places where we can meet to write a shared reality. We treat it like a utility, and the U.S. and U.K. trust it about as much as the news.

But we know very little about who is writing the world’s encyclopedia. We do know that just because anyone can edit, doesn’t mean that everyone does: The site’s editors are disproportionately cis white men from the global North. We also know that, as with most of the internet, a small number of the editors do a large amount of the editing. But that’s basically it: In the interest of improving retention, the Wikimedia Foundation’s own research focuses on the motivations of people who do edit, not on those who don’t. The media, meanwhile, frequently focus on Wikipedia’s personality stories, even when covering the bigger questions. And Wikipedia’s own culture pushes back against granular data harvesting: The Wikimedia Foundation’s strong data-privacy rules guarantee users’ anonymity and limit the modes and duration of their own use of editor data.

But as part of my research in producing Print Wikipedia, I discovered a data set that can offer an entry point into the geography of Wikipedia’s contributors. Every time anyone edits Wikipedia, the software records the text added or removed, the time of the edit, and the username of the editor. (This edit history is part of Wikipedia’s ethos of radical transparency: Everyone is anonymous, and you can see what everyone is doing.) When an editor isn’t logged in with a username, the software records that user’s IP address. I parsed all of the 884 million edits to English Wikipedia to collect and geolocate the 43 million IP addresses that have edited English Wikipedia. I also counted 8.6 million username editors who have made at least one edit to an article.

The result is a set of maps that offer, for the first time, insight into where the millions of volunteer editors who build and maintain English Wikipedia’s 5 million pages are—and, maybe more important, where they aren’t….

Like the Enlightenment itself, the modern encyclopedia has a history entwined with colonialism. Encyclopédie aimed to collect and disseminate all the world’s knowledge—but in the end, it could not escape the biases of its colonial context. Likewise, Napoleon’s Description de l’Égypte augmented an imperial military campaign with a purportedly objective study of the nation, which was itself an additional form of conquest. If Wikipedia wants to break from the past and truly live up to its goal to compile the sum of all human knowledge, it requires the whole world’s participation….(More)”.

Irreproducibility is not a sign of failure, but an inspiration for fresh ideas


Editorial at Nature: “Everyone’s talking about reproducibility — or at least they are in the biomedical and social sciences. The past decade has seen a growing recognition that results must be independently replicated before they can be accepted as true.

A focus on reproducibility is necessary in the physical sciences, too — an issue explored in this month’s Nature Physics, in which two metrologists argue that reproducibility should be viewed through a different lens. When results in the science of measurement cannot be reproduced, argue Martin Milton and Antonio Possolo, it’s a sign of the scientific method at work — and an opportunity to promote public awareness of the research process (M. J. T. Milton and A. Possolo Nature Phys. 16, 117–119; 2020)….

However, despite numerous experiments spanning three centuries, the precise value of the gravitational constant, G, remains uncertain. The root of the uncertainty is not fully understood: it could be due to undiscovered errors in how the value is being measured, or it could indicate the need for new physics. One scenario being explored is that G could even vary over time, in which case scientists might have to revise their view that it has a fixed value.

If that were to happen — although physicists think it unlikely — it would be a good example of non-reproduced data being subjected to the scientific process: experimental results questioning a long-held theory, or pointing to the existence of another theory altogether.

Questions in biomedicine and in the social sciences do not reduce so cleanly to the determination of a fundamental constant of nature. Compared with metrology, experiments to reproduce results in fields such as cancer biology are likely to include many more sources of variability, which are fiendishly hard to control for.

But metrology reminds us that when researchers attempt to reproduce the results of experiments, they do so using a set of agreed — and highly precise — experimental standards, known in the measurement field as metrological traceability. It is this aspect, the authors contend, that helps to build trust and confidence in the research process….(More)”.

Community science: A typology and its implications for governance of social-ecological systems


Paper by Anthony Charles, Laura Loucks, Fikret Berkes, and Derek Armitage: “There is an increasing recognition globally of the role to be played by community science – scientific research and monitoring driven and controlled by local communities, and characterized by place-based knowledge, social learning, collective action and empowerment. In particular, community science can support social-ecological system transformation, and help in achieving better ‘fit’ between ecological systems and governance, at local and higher levels of decision making.

This paper draws on three examples of communities as central actors in the process of knowledge co-production to present a typology of community science, and to deduce a set of key principles/conditions for success.

The typology involves three social learning models in which the community acquires scientific knowledge by (1) engaging with external bodies, (2) drawing on internal volunteer scientific expertise, and/or (3) hiring (or contracting) in-house professional scientific expertise. All of these models share the key characteristic that the local community decides with whom they wish to engage, and in each case, social learning is fundamental. Some conditions that facilitate community science include: community-driven and community-controlled processes; flexibility across leadership models; connection to place and collective values; empowerment, agency and collective action; credible trust; local knowledge; and links to governance.

Community science is not a panacea for effecting change at the local level, and there is need for critical assessment of how it can help to fill governance gaps. Nevertheless, a considerable body of experience globally illustrates how local communities are drawing effectively on community science for better conservation and livelihood outcomes, in a manner compatible with broader trends toward ecosystem-based management and local stewardship….(More)”.

Re-imagining “Action Research” as a Tool for Social Innovation and Public Entrepreneurship


Stefaan G. Verhulst at The GovLab: “We live in challenging times. From climate change to economic inequality and forced migration, the difficulties confronting decision-makers are unprecedented in their variety, as well as in their complexity and urgency. Our standard policy toolkit seems stale and ineffective while existing governance institutions are increasingly outdated and distrusted.

To tackle today’s challenges, we need not only new solutions but new ways of arriving at solutions. In particular, we need fresh research methodologies that can provide actionable insights on 21st century conditions. Such methodologies would allow us to redesign how decisions are made, how public services are offered, and how complex problems are solved around the world. 

Rethinking research is a vast project, with multiple components. This new essay focuses on one particular area of research: action research. In the essay, I first explain what we mean by action research, and also explore some of its potential. I subsequently argue that, despite that potential, action research is often limited as a method because it remains embedded in past methodologies; I attempt to update both its theory and practice for the 21st century.

Although this article represents only a beginning, my broader goal is to re-imagine the role of action research for social innovation, and to develop an agenda that could provide for what Amar Bhide calls “practical knowledge” at all levels of decision making in a systematic, sustainable, and responsible manner.  (Full Essay Here).”

Imagining the Next Decade of Behavioral Science


Evan Nesterak at the Behavioral Scientist: “If you asked Richard Thaler in 2010, what he thought would become of the then very new field of behavioral science over the next decade, he would have been wrong, at least for the most part. Could he have predicted the expansion of behavioral economics research? Probably. The Nobel Prize? Maybe. The nearly 300 and counting behavioral teams in governments, businesses, and other organizations around the world? Not a chance. 

When we asked him a year and a half ago to sum up the 10 years since the publication of Nudge, he replied “Am I too old to just say OMG? … [Cass Sunstein and I] would never have anticipated one “nudge unit” much less 200…. Every once in a while, one of us will send the other an email that amounts to just ‘wow.’”

As we closed last year (and the last decade), we put out a call to help us imagine the next decade of behavioral science. We asked you to share your hopes and fears, predictions and warnings, open questions and big ideas. 

We received over 120 submissions from behavioral scientists around the world. We picked the most thought-provoking submissions and curated them below.

We’ve organized the responses into three sections. The first section, Promises and Pitfalls, houses the responses about the field as a whole—its identity, purpose, and values. In that section, you’ll find authors challenging the field to be bolder. You’ll also find ideas to unite the field, which in its growth has felt for some like the “Wild West.” Ethical concerns are also top of mind. “Behavioral science has confronted ethical dilemmas before … but never before has the essence of the field been so squarely in the wheelhouse of corporate interests,” writes Phillip Goff.

In the second section, we’ve placed the ideas about specific domains. This includes “Technology: Nightmare or New Norm,” where Tania Ramos considers the possibility of a behaviorally optimized tech dystopia. In “The Future of Work,” Laszlo Bock imagines that well-timed, intelligent nudges will foster healthier company cultures, and Jon Jachimowicz emphasizes the importance of passion in an economy increasingly dominated by A.I. In “Climate Change: Targeting Individuals and Systems,” behavioral scientists grapple with how the field can pull its weight in this existential fight. You’ll also find sections on building better governments, health care at the digital frontier and final mile, and the next steps for education. 

The third and final section gets the most specific of all. Here you’ll find commentary on the opportunities (and obligations) for research and application. For instance, George Loewenstein suggests we pay more attention to attention—an increasingly scarce resource. Others, on the application side, ponder how behavioral science will influence the design of our neighborhoods and wonder what it will take to bring behavioral science into the courtroom. The section closes with ideas on the future of intervention design and ways we can continue to master our methods….(More)”.

The Experimenter’s Inventory: A catalogue of experiments for decision-makers and professionals


Report by the Alliance for Useful Evidence: “This inventory is about how you can use experiments to solve public and social problems. It aims to provide a framework for thinking about the choices available to a government, funder or delivery organisation that wants to experiment more effectively. We aim to simplify jargon and do some myth-busting on common misperceptions.

There are other guides on specific areas of experimentation – such as on randomised controlled trials – including many specialist technical textbooks. This is not a technical manual or guide about how to run experiments. Rather, this inventory is useful for anybody wanting a jargon-free overview of the types and uses of experiments. It is unique in its breadth – covering the whole landscape of social and policy experimentation, including prototyping, rapid cycle testing, quasi-experimental designs, and a range of different types of randomised trials. Experimentation can be a confusing landscape – and there are competing definitions about what constitutes an experiment among researchers, innovators and evaluation practitioners. We take a pragmatic approach, including different designs that are useful for public problem-solving, under our experimental umbrella. We cover ways of experimenting that are both qualitative and quantitative, and highlight what we can learn from different approaches….(More)”.

Information literacy in the age of algorithms


Report by Alison J. Head, Ph.D., Barbara Fister, Margy MacMillan: “…Three sets of questions guided this report’s inquiry:

  1. What is the nature of our current information environment, and how has it influenced how we access, evaluate, and create knowledge today? What do findings from a decade of PIL research tell us about the information skills and habits students will need for the future?
  2. How aware are current students of the algorithms that filter and shape the news and information they encounter daily? What concerns do they have about how automated decision-making systems may influence us, divide us, and deepen inequalities?
  3. What must higher education do to prepare students to understand the new media landscape so they will be able to participate in sharing and creating information responsibly in a changing and challenged world?

To investigate these questions, we draw on qualitative data that PIL researchers collected from student focus groups and faculty interviews during fall 2019 at eight U.S. colleges and universities. Findings from a sample of 103 students and 37 professors reveal levels of awareness and concerns about the age of algorithms on college campuses. They are presented as research takeaways….(More)”.

Global problems need social science


Hetan Shah at Nature: “Without human insights, data and the hard sciences will not meet the challenges of the next decade…

I worry about the fact that the call prioritized science and technology over the humanities and social sciences. Governments must make sure they also tap into that expertise, or they will fail to tackle the challenges of this decade.

For example, we cannot improve global health if we take only a narrow medical view. Epidemics are social as well as biological phenomena. Anthropologists such as Melissa Leach at the University of Sussex in Brighton, UK, played an important part in curbing the West African Ebola epidemic with proposals to substitute risky burial rituals with safer ones, rather than trying to eliminate such rituals altogether.

Treatments for mental health have made insufficient progress. Advances will depend, in part, on a better understanding of how social context influences whether treatment succeeds. Similar arguments apply to the problem of antimicrobial resistance and antibiotic overuse.

Environmental issues are not just technical challenges that can be solved with a new invention. To tackle climate change we will need insight from psychology and sociology. Scientific and technological innovations are necessary, but enabling them to make an impact requires an understanding of how people adapt and change their behaviour. That will probably require new narratives — the purview of rhetoric, literature, philosophy and even theology.

Poverty and inequality call even more obviously for expertise beyond science and maths. The UK Economic and Social Research Council has recognized that poor productivity in the country is a big problem, and is investing up to £32.4 million (US$42 million) in a new Productivity Institute in an effort to understand the causes and potential remedies.

Policy that touches on national and geographical identity also needs scholarly input. What is the rise of ‘Englishness’? How do we live together in a community of diverse races and religions? How is migration understood and experienced? These intangibles have real-world consequences, as demonstrated by the Brexit vote and ongoing discussions about whether the United Kingdom has a future as a united kingdom. It will take the work of historians, social psychologists and political scientists to help shed light on these questions. I could go on: fighting against misinformation; devising ethical frameworks for artificial intelligence. These are issues that cannot be tackled with better science alone….(More)”.