Co-Producing Sustainability Research with Citizens: Empirical Insights from Co-Produced Problem Frames with Randomly Selected Citizens


Paper by Mareike Blum: “In sustainability research, knowledge co-production can play a supportive role at the science-policy interface (Norström et al., 2020). However, so far most projects have involved stakeholders in order to produce ‘useful knowledge’ for policy-makers. As a novel approach, research projects have integrated randomly selected citizens during the knowledge co-production to make policy advice more reflective of societal perspectives and thereby increase its epistemic quality. Researchers are asked to consider citizens’ beliefs and values and integrate these into their ongoing research. This approach rests on pragmatist philosophy, according to which a joint deliberation on value priorities and anticipated consequences of policy options ideally enables the co-development of sustainable and legitimate policy pathways (Edenhofer & Kowarsch, 2015; Kowarsch, 2016). This paper scrutinizes three promises of involving citizens in the problem framing: (1) creating input legitimacy, (2) enabling social learning among citizens and researchers and (3) resulting in high epistemic quality of the co-produced knowledge. Based on empirical data, the first phases of two research projects in Germany were analysed and compared: the Ariadne research project on the German Energy Transition, and the Biesenthal Forest project at the local level in Brandenburg, Germany. We found that, despite existing barriers, learning was enabled by confronting researchers with citizens’ problem perceptions. The step in which researchers interpret and translate problem frames in the follow-up knowledge production is the most important one for assessing learning and epistemic quality…(More)”.

A Massive LinkedIn Study Reveals Who Actually Helps You Get That Job


Article by Viviane Callier: “If you want a new job, don’t just rely on friends or family. According to one of the most influential theories in social science, you’re more likely to nab a new position through your “weak ties,” loose acquaintances with whom you have few mutual connections. Sociologist Mark Granovetter first laid out this idea in a 1973 paper that has garnered more than 65,000 citations. But the theory, dubbed “the strength of weak ties,” after the title of Granovetter’s study, lacked causal evidence for decades. Now a sweeping study that looked at more than 20 million people on the professional social networking site LinkedIn over a five-year period finally shows that forging weak ties does indeed help people get new jobs. And it reveals which types of connections are most important for job hunters…

Along with job seekers, policy makers could also learn from the new paper. “One thing the study highlights is the degree to which algorithms are guiding fundamental, baseline, important outcomes, like employment and unemployment,” Aral says. The role that LinkedIn’s People You May Know function plays in gaining a new job demonstrates “the tremendous leverage that algorithms have on employment and probably other factors of the economy as well.” It also suggests that such algorithms could create bellwethers for economic changes: in the same way that the Federal Reserve looks at the Consumer Price Index to decide whether to hike interest rates, Aral suggests, networks such as LinkedIn might provide new data sources to help policy makers parse what is happening in the economy. “I think these digital platforms are going to be an important source of that,” he says…(More)”
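As a rough illustration of the idea of tie strength (a hypothetical toy network of our own, not the measure used in the LinkedIn study itself), a "weak tie" can be read as a connection with few or no mutual contacts, which is exactly what a bridge between two otherwise separate circles looks like:

```python
from collections import defaultdict

# Hypothetical toy network: who is connected to whom.
edges = [
    ("ana", "ben"), ("ana", "carla"), ("ben", "carla"),  # a tight cluster
    ("ana", "dmitri"),                                   # a bridge to another circle
    ("dmitri", "elena"), ("dmitri", "farid"),
]

neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

def mutual_connections(u, v):
    """Count the contacts u and v share, a simple proxy for tie strength."""
    return len(neighbors[u] & neighbors[v])

for u, v in edges:
    strength = mutual_connections(u, v)
    kind = "weak tie" if strength == 0 else "stronger tie"
    print(f"{u} - {v}: {strength} mutual connection(s) -> {kind}")
```

In this sketch, ana's link to dmitri has no mutual connections and is the only path between the two clusters; Granovetter's argument is that precisely such ties carry novel information, including job leads.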

Global Digital Governance Through the Back Door of Corporate Regulation


Paper by Orit Fischman Afori: “Today, societal life is increasingly conducted in the digital sphere, in which two core attributes are prominent: this sphere is entirely controlled by enormous technology companies, and these companies are increasingly deploying artificial intelligence (AI) technologies. This reality generates a severe threat to democratic principles and human rights. Therefore, regulating the conduct of the companies ruling the digital sphere is an urgent agenda worldwide. Policymakers and legislatures around the world are taking their first steps in establishing a digital governance regime, with leading proposals in the EU. Although it is understood that there is a necessity to adopt a comprehensive framework for imposing accountability standards on technology companies and on the operation of AI technologies, traditional perceptions regarding the limits of intervention in the private sector and contemporary perceptions regarding the limits of antitrust tools hinder such legal moves.

Given the obstacles inherent in the use of existing legal means for introducing a digital governance regime, this article proposes a new path of corporate governance regulations. The proposal, belonging to a “second wave” of regulatory models for the digital sphere, is based on the understanding that the current complex technological reality requires sophisticated and pragmatic legal measures for establishing an effective framework for digital governance norms. Corporate governance is a system of rules and practices by which companies are guided and controlled. Because the digital sphere is governed by private corporations, it seems reasonable to introduce the desired digital governance principles through a framework that regulates corporations. The bedrock of corporate governance is promoting principles of corporate accountability, which are translated into a wide array of obligations. In the last two decades, corporate accountability has evolved into a new domain of corporate social responsibility (CSR), promoting environmental, social, and governance (ESG) purposes not aimed at maximizing profits in the short term. The various benefits of the complex corporate governance mechanisms may be used to promote the desired digital governance regime that would be applied by the technology companies. A key advantage of the corporate governance mechanism is its potential to serve as a vehicle to promulgate norms in the era of multinational corporations. Because the digital sphere is governed by a few giant US companies, corporate governance may be leveraged to promote digital governance principles with a global reach in a uniform manner…(More)”

Overcoming Data Graveyards in Official Statistics: Catalyzing Uptake and Use


Report by Trends and Open Data Watch: “The world is awash in information. Every day, an estimated 1.1 billion gigabytes of data are produced, and this number will increase as mobile connections continue to expand and new ways of gathering data are incorporated by the private and public sectors to improve their products and services. The volume of statistics published by government agencies such as National Statistics Offices (NSOs) has also grown. New technologies offer new ways of gathering, storing, and disseminating data and producers of official statistics are releasing more information in more detailed ways through data portals and other mechanisms than ever before.

Once produced, data may live forever, but far too often, the data produced are not what data users are looking for, or users lack the awareness or technical skill to use the data. As a result, data fall into data graveyards (Custer, 2017), where they go unutilized and prevent evidence-informed policies from being made. This is particularly dangerous at a time when intersecting crises like the COVID-19 pandemic, climate change, and energy and food insecurity put a premium on decision-making that incorporates the best data. In addition, public sector producers of data, who do so using public funds, need evidence of the use of their data to justify investments in data.

Data use remains a complex topic, with many policymakers and managers in national statistical system agencies unclear about this issue and how to improve their practices to ensure uptake and use. With conceptual clarity and best practices in hand, these actors can improve their practices and better address the needs of data users, while recognizing that a ‘one size fits all’ approach will not be suitable for countries at various stages of statistical capacity….(More)”

Five-year campaign breaks science’s citation paywall


Article by Dalmeet Singh Chawla: “The more than 60 million scientific-journal papers indexed by Crossref — the database that registers DOIs, or digital object identifiers, for many of the world’s academic publications — now contain reference lists that are free to access and reuse.

The milestone, announced on Twitter on 18 August, is the result of an effort by the Initiative for Open Citations (I4OC), launched in 2017. Open-science advocates have for years campaigned to make papers’ citation data accessible under liberal copyright licences so that they can be studied, and those analyses shared. Free access to citations enables researchers to identify research trends, lets them conduct studies on which areas of research need funding, and helps them to spot when scientists are manipulating citation counts….

The move means that bibliometricians, scientometricians and information scientists will be able to reuse citation data in any way they please under the most liberal copyright licence, called CC0. This, in turn, allows other researchers to build on their work.
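To make this concrete, here is a minimal sketch of our own (assuming network access and the Python requests package) that pulls a work's openly deposited reference list from Crossref's public REST API. The DOI below is only a placeholder example, and the "reference" field appears only when the publisher has deposited references with Crossref:

```python
import requests

# Placeholder DOI; substitute any DOI of interest.
doi = "10.1038/s41586-020-2649-2"

resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

# References are openly reusable under CC0, but only present if the
# publisher deposited them with Crossref.
for ref in work.get("reference", [])[:10]:
    print(ref.get("DOI") or ref.get("unstructured", "(no DOI recorded)"))
```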

Before I4OC, researchers generally had to obtain permission to access data from major scholarly databases such as Web of Science and Scopus, and weren’t able to share the samples.

However, the opening up of Crossref articles’ citations doesn’t mean that all the world’s scholarly content now has open references. Although most major international academic publishers, including Elsevier, Springer Nature (which publishes Nature) and Taylor & Francis, index their papers on Crossref, some do not. These often include regional and non-English-language publications.

I4OC co-founder Dario Taraborelli, who is science programme officer at the Chan Zuckerberg Initiative and based in San Francisco, California, says that the next challenge will be to encourage publishers who don’t already deposit reference data in Crossref to do so….(More)”.

Uncovering the genetic basis of mental illness requires data and tools that aren’t just based on white people


Article by Hailiang Huang: “Mental illness is a growing public health problem. In 2019, an estimated 1 in 8 people around the world were affected by mental disorders like depression, schizophrenia or bipolar disorder. While scientists have long known that many of these disorders run in families, their genetic basis isn’t entirely clear. One reason why is that the majority of existing genetic data used in research is overwhelmingly from white people.

In 2003, the Human Genome Project generated the first “reference genome” of human DNA from a combination of samples donated by upstate New Yorkers, all of whom were of European ancestry. Researchers across many biomedical fields still use this reference genome in their work. But it doesn’t provide a complete picture of human genetics. Someone with a different genetic ancestry will have a number of variations in their DNA that aren’t captured by the reference sequence.

When most of the world’s ancestries are not represented in genomic data sets, studies won’t be able to provide a true representation of how diseases manifest across all of humanity. Despite this, ancestral diversity in genetic analyses hasn’t improved in the two decades since the Human Genome Project announced its first results. As of June 2021, over 80% of genetic studies have been conducted on people of European descent. Less than 2% have included people of African descent, even though these individuals have the most genetic variation of all human populations.

To uncover the genetic factors driving mental illness, Sinéad Chapman and our colleagues at the Broad Institute of MIT and Harvard have partnered with collaborators around the world to launch Stanley Global, an initiative that seeks to collect a more diverse range of genetic samples from beyond the U.S. and Northern Europe, and train the next generation of researchers around the world.

Not only does the genetic data lack diversity, but so do the tools and techniques scientists use to sequence and analyze human genomes. So we are implementing a new sequencing technology that addresses the inadequacies of previous approaches that don’t account for the genetic diversity of global populations…(More)”.

Income Inequality Is Rising. Are We Even Measuring It Correctly?


Article by Jon Jachimowicz et al: “Income inequality is on the rise in many countries around the world, according to the United Nations. What’s more, disparities in global income were exacerbated by the COVID-19 pandemic, with some countries facing greater economic losses than others.

Policymakers are increasingly focusing on finding ways to reduce inequality to create a more just and equal society for all. In making decisions on how to best intervene, policymakers commonly rely on the Gini coefficient, a statistical measure of resource distribution, including wealth and income levels, within a population. The Gini coefficient measures perfect equality as zero and maximum inequality as one, with higher numbers indicating a greater concentration of resources in the hands of a few.
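As a quick worked example of that definition (ours, using invented figures rather than any official statistics), the Gini coefficient can be computed directly from a list of incomes:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a sample: 0 is perfect equality; values near 1
    mean resources are concentrated in very few hands."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Closed form over the sorted values, equivalent to the
    # mean-absolute-difference definition divided by twice the mean.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini([30_000] * 5))                                # 0.0, everyone earns the same
print(gini([10_000, 20_000, 30_000, 40_000, 900_000]))   # 0.72, highly concentrated
```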

This measure has long dominated our understanding (pdf) of what inequality means, largely because this metric is used by governments around the world, is released by statistics bureaus in multiple countries, and is commonly discussed in news media and policy discussions alike.

In our paper, recently published in Nature Human Behaviour, we argue that researchers and policymakers rely too heavily on the Gini coefficient—and that by broadening our understanding of how we measure inequality, we can both uncover its impact and intervene to more effectively correct it…(More)”.

The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, looking into identifying people involved in civil unrest in northern India in the past few years, said that they would consider 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”
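A short hypothetical sketch of that decision rule, as described in the documents, makes the critics' concern concrete: whether or not a score clears the 80 percent threshold, no outcome ever rules a person out.

```python
def classify_match(similarity: float, threshold: float = 0.80) -> str:
    """Hypothetical rendering of the rule described in the documents."""
    if similarity >= threshold:
        return "positive match: treated as an investigative lead"
    # Below the threshold the person is still not cleared.
    return "labelled a 'false positive': subject to verification with other evidence"

for score in (0.95, 0.81, 0.62):
    print(f"similarity {score:.2f} -> {classify_match(score)}")
```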

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”

Blue Spoons: Sparking Communication About Appropriate Technology Use


Paper by Arun G. Chandrasekhar, Esther Duflo, Michael Kremer, João F. Pugliese, Jonathan Robinson & Frank Schilbach: “An enduring puzzle regarding technology adoption in developing countries is that new technologies often diffuse slowly through the social network. Two of the key predictions of the canonical epidemiological model of technology diffusion are that forums to share information and higher returns to technology should both spur social transmission. We design a large-scale experiment to test these predictions among farmers in Western Kenya, and we fail to find support for either. However, in the same context, we introduce a technology that diffuses very fast: a simple kitchen spoon (painted in blue) to measure out how much fertilizer to use. We develop a model that explains both the failure of the standard approaches and the surprising success of this new technology. The core idea of the model is that not all information is reliable, and farmers are reluctant to develop a reputation of passing along false information. The model and data suggest that there is value in developing simple, transparent technologies to facilitate communication…(More)”.

Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous


Essay by Henry Farrell, Abraham Newman, and Jeremy Wallace: “In policy circles, discussions about artificial intelligence invariably pit China against the United States in a race for technological supremacy. If the key resource is data, then China, with its billion-plus citizens and lax protections against state surveillance, seems destined to win. Kai-Fu Lee, a famous computer scientist, has claimed that data is the new oil, and China the new OPEC. If superior technology is what provides the edge, however, then the United States, with its world class university system and talented workforce, still has a chance to come out ahead. For either country, pundits assume that superiority in AI will lead naturally to broader economic and military superiority.

But thinking about AI in terms of a race for dominance misses the more fundamental ways in which AI is transforming global politics. AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.

Early pioneers of AI, including the political scientist Herbert Simon, realized that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking. Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force…(More)”