How Do You Prove a Secret?


Essay by Sheon Han: “Imagine you had some useful knowledge — maybe a secret recipe, or the key to a cipher. Could you prove to a friend that you had that knowledge, without revealing anything about it? Computer scientists proved over 30 years ago that you could, if you used what’s called a zero-knowledge proof.

For a simple way to understand this idea, let’s suppose you want to show your friend that you know how to get through a maze, without divulging any details about the path. You could simply traverse the maze within a time limit, while your friend was forbidden from watching. (The time limit is necessary because given enough time, anyone can eventually find their way out through trial and error.) Your friend would know you could do it, but they wouldn’t know how.

Zero-knowledge proofs are helpful to cryptographers, who work with secret information, but also to researchers of computational complexity, which deals with classifying the difficulty of different problems. “A lot of modern cryptography relies on complexity assumptions — on the assumption that certain problems are hard to solve, so there have always been some connections between the two worlds,” said Claude Crépeau, a computer scientist at McGill University. “But [these] proofs have created a whole world of connection.”

Zero-knowledge proofs belong to a category known as interactive proofs, so to learn how the former work, it helps to understand the latter. First described in a 1985 paper by the computer scientists Shafi Goldwasser, Silvio Micali and Charles Rackoff, interactive proofs work like an interrogation: Over a series of messages, one party (the prover) tries to convince the other (the verifier) that a given statement is true. An interactive proof must satisfy two properties. First, a true statement will always eventually convince an honest verifier. Second, if the given statement is false, no prover — even one pretending to possess certain knowledge — can convince the verifier, except with negligibly small probability…(More)”
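The completeness and soundness properties described above can be seen in a toy simulation. The sketch below is not from the essay: it runs a Schnorr-style interactive proof of knowledge of a discrete logarithm with a binary challenge, and the group parameters (p = 101, g = 2) are illustrative assumptions, far too small to be secure. An honest prover convinces the verifier in every round, while a prover who does not know the secret must guess the challenge in advance and succeeds in only about half the rounds.

```python
import random

# Toy public parameters (illustrative only, far too small for real use):
# P is prime and G generates the multiplicative group mod P, of order Q = P - 1.
P, G = 101, 2
Q = P - 1

def schnorr_round(secret_x, public_y, honest, rng):
    """One round of a Schnorr-style interactive proof with a binary challenge.

    The prover tries to convince the verifier it knows x with y = g^x mod p,
    revealing nothing beyond that fact. A cheater (honest=False) must guess
    the challenge in advance and so succeeds only about half the time.
    """
    if honest:
        r = rng.randrange(Q)
        t = pow(G, r, P)                  # commitment
        c = rng.randrange(2)              # verifier's challenge: 0 or 1
        s = (r + c * secret_x) % Q        # response using the secret
    else:
        guess = rng.randrange(2)          # cheater guesses the challenge
        r = rng.randrange(Q)
        # If the guess is c=1, pre-divide the commitment by y so the
        # verification equation balances without knowing x.
        t = pow(G, r, P) * pow(public_y, -guess, P) % P
        c = rng.randrange(2)
        s = r
        if c != guess:
            return False                  # wrong guess: cannot answer
    # Verifier's check: g^s == t * y^c (mod p)
    return pow(G, s, P) == t * pow(public_y, c, P) % P

rng = random.Random(42)
x = 17                     # the prover's secret
y = pow(G, x, P)           # public value
honest_wins = sum(schnorr_round(x, y, True, rng) for _ in range(100))
cheat_wins = sum(schnorr_round(None, y, False, rng) for _ in range(100))
print(honest_wins, cheat_wins)   # honest: 100/100; cheater: roughly half
```

Repeating the round k times drives a cheating prover's overall success probability down to about 2^-k, which is the “negligibly small probability” in the soundness property.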

Four ways that AI and robotics are helping to transform other research fields


Article by Michael Eisenstein: “Artificial intelligence (AI) is already proving a revolutionary tool for bioinformatics; the AlphaFold database set up by London-based company DeepMind, owned by Google, is allowing scientists to predict the structures of 200 million proteins across 1 million species. But other fields are benefiting too. Here, we describe the work of researchers pursuing cutting-edge AI and robotics techniques to better anticipate the planet’s changing climate, uncover the hidden history behind artworks, understand deep sea ecology and develop new materials.

Marine biology with a soft touch

It takes a tough organism to withstand the rigours of deep-sea living. But these resilient species are also often remarkably delicate, ranging from soft and squishy creatures such as jellyfish and sea cucumbers, to firm but fragile deep-sea fishes and corals. Their fragility makes studying these organisms a complex task.

The rugged metal manipulators found on many undersea robots are more likely to harm such specimens than to retrieve them intact. But ‘soft robots’ based on flexible polymers are giving marine biologists such as David Gruber, of the City University of New York, a gentler alternative for interacting with these enigmatic denizens of the deep…(More)”.

New Horizons in Digital Anthropology


Report by UNESCO and the LiiV Center: “Digitisation, social networks, artificial intelligence, and the metaverse are changing what it means to be human. Humans and technology are now in a dynamic and reciprocal relationship. However, while society has invested trillions in building and tracking digital platforms and personal data, we’ve invested a shockingly small amount in understanding the values, social dynamics, identities, and biases of digital communities.

We can’t address transformations in one without understanding the impacts on the other. Handling growing global challenges such as the spread of misinformation, the rise of social and political polarisation, the mental health crisis, the expansion of digital surveillance, and growing digital inequalities depends on our ability to gain deeper insights into the relationship between people and digital technologies, and to see and understand people, cultures and communities online. The world depends heavily on economics and data science when it comes to understanding digital impacts, but these sciences alone don’t tell the whole story. Economic models are built for scale but struggle with depth. Furthermore, experience shows us that over-reliance on one-dimensional approaches magnifies social biases and ethical blind spots.

Digital Anthropology focuses on this intersection between technology and humans, examining the quantitative and qualitative, using big data and thick data, the virtual and real. While innovation in digital anthropology has started, the field needs more investment and global awareness of its unique and untapped potential to humanise decision-making for leaders across the public and private sectors.

This publication, developed in partnership between UNESCO and the LiiV Center, maps the landscape of innovation in digital anthropology as an approach to better understand how human communities and societies interact with and are shaped by technologies and, knowing this, how policies can be rendered more ethical and inclusive.

Briefly, the research found that innovation in digital anthropology is in a state of transition and is perceived differently across sectors and regions. In the span of just a couple of decades, innovation has come from doing anthropology digitally and doing the digital anthropologically: two movements that together give life to a space where creation happens within the blurry lines between disciplines, fuelled by increasingly fluid movement between academia and the private sector.

The innovation space in between these trends seems to be where the most exciting and forward-thinking digital innovations are occurring, such as novel blended algorithms or computational and techno-anthropology, and it opens opportunities to educate a new breed of digitally and anthropologically skilled professionals…(More)”.

The Transformations of Science


Essay by Geoff Anders: “In November of 1660, at Gresham College in London, an invisible college of learned men held their first meeting after 20 years of informal collaboration. They chose their coat of arms: the royal crown’s three lions of England set against a white backdrop. Their motto: “Nullius in verba,” or “take no one’s word for it.” Three years later, they received a charter from King Charles II and became what was and remains the world’s preeminent scientific institution: the Royal Society.

Three and a half centuries later, in July of 2021, even respected publications began to grow weary of a different, now constant refrain: “Trust the science.” It was a mantra everyone was supposed to accept, repeated again and again, ad nauseam…

This new motto was the latest culmination of a series of transformations science has undergone since the founding of the Royal Society, reflecting the changing nature of science on one hand, and its expanding social role on the other. 

The present world’s preeminent system of thought now takes science as a central pillar and wields its authority to great consequence. But the story of how that came to be is, as one might expect, only barely understood…

There is no essential conflict between the state’s use of the authority of science and the health of the scientific enterprise itself. It is easy to imagine a well-funded and healthy scientific enterprise whose authority is deployed appropriately for state purposes without undermining the operation of science itself.

In practice, however, there can be a tension between state aims and scientific aims, where the state wants actionable knowledge and the imprimatur of science, often far in advance of the science getting settled. This is especially likely in response to a disruptive phenomenon that is too new for the science to have settled yet—for example, a novel pathogen with unknown transmission mechanisms and health effects.

Our recent experience of the pandemic put this tension on display, with state recommendations moving against masks, and then for masks, as the state had to make tactical decisions about a novel threat with limited information. In each case, politicians sought to adorn the recommendations with the authority of settled science; an unfortunate, if understandable, choice.

This joint partnership of science and the state is relatively new. One question worth asking is whether the development was inevitable. Science had an important flaw in its epistemic foundation, dating back to Boyle and the Royal Society—its failure to determine the proper conditions and use of scientific authority. “Nullius in verba” made some sense in 1660, before much science was settled and when the enterprise was small enough that most natural philosophers could personally observe or replicate the experiments of the others. It came to make less sense as science itself succeeded, scaled up, and acquired intellectual authority. Perhaps a better answer to the question of scientific authority would have led science to take a different course.

Turning from the past to the future, we now face the worrying prospect that the union of science and the state may have weakened science itself. Some time ago, commentators raised the specter of scientific slowdown, and more recent analysis has provided further justification for these fears. Why is science slowing? To put it simply, it may be difficult to have science be both authoritative and exploratory at the same time.

When scientists are meant to be authoritative, they’re supposed to know the answer. When they’re exploring, it’s okay if they don’t. Hence, encouraging scientists to reach authoritative conclusions prematurely may undermine their ability to explore—thereby yielding scientific slowdown. Such a dynamic may be difficult to detect, since the people who are supposed to detect it might themselves be wrapped up in a premature authoritative consensus…(More)”.

Can critical policy studies outsmart AI? Research agenda on artificial intelligence technologies and public policy


Paper by Regine Paul: “The insertion of artificial intelligence technologies (AITs) and data-driven automation in public policymaking should be a metaphorical wake-up call for critical policy analysts. Both its wide representation as techno-solutionist remedy in otherwise slow, inefficient, and biased public decision-making and its regulation as a matter of rational risk analysis are conceptually flawed and democratically problematic. To ‘outsmart’ AI, this article stimulates the articulation of a critical research agenda on AITs and public policy, outlining three interconnected lines of inquiry for future research: (1) interpretivist disclosure of the norms and values that shape perceptions and uses of AITs in public policy, (2) exploration of AITs in public policy as a contingent practice of complex human-machine interactions, and (3) emancipatory critique of how ‘smart’ governance projects and AIT regulation interact with (global) inequalities and power relations…(More)”.

Why Funders Should Go Meta


Paper by Stuart Buck & Anna Harvey: “We don’t mean the former Facebook. Rather, philanthropies should prefer to fund meta-issues—i.e., research and evaluation, along with efforts to improve research quality. In many cases, it would be far more impactful than what they are doing now.

This is true at two levels.

First, suppose you want to support a certain cause: economic development in Africa, criminal justice reform in the US, and so on. You could spend millions or even billions on that cause.

But let’s go meta: a force multiplier would be funding high-quality research on what works on those issues. If you invest significantly in social and behavioral science research, you might find innumerable ways to improve on the status quo of donations.

Instead of only helping the existing nonprofits who seek to address economic development or criminal justice reform, you’d be helping to figure out what works and what doesn’t. The result could be a much better set of investments for all donors.

Perhaps some of your initial ideas end up not working, when exhaustively researched. At worst, that’s a temporary embarrassment, but it’s actually all for the better—now you and others know to avoid wasting more money on those ideas. Perhaps some of your favored policies are indeed good ideas (e.g., vaccination), but don’t have anywhere near enough take-up by the affected populations. Social and behavioral science research (as in the Social Science Research Council’s Mercury Project) could help find cost-effective ways to solve that problem…(More)”.

Building the analytic capacity to support critical technology strategy


Paper by Erica R.H. Fuchs: “Existing federal agencies relevant to the science and technology enterprise are appropriately focused on their missions, but the U.S. lacks the intellectual foundations, data infrastructure, and analytics to identify opportunities where the value of investment across missions (e.g., national security, economic prosperity, social well-being) is greater than the sum of its parts.

The U.S. government lacks systematic mechanisms to assess the nation’s strengths, weaknesses, and opportunities in technology and to assess the long chain of suppliers involved in producing products critical to national missions.

Two examples where modern data and analytics—leveraging star interdisciplinary talent from across the nation—and a cross-mission approach could transform outcomes include 1) the difficulties the federal government had in facilitating the production and distribution of personal protective equipment in spring 2020, and 2) the lack of clarity about the causes and solutions to the semiconductor shortage. Going forward, the scale-up of electric vehicles promises similar challenges…

The critical technology analytics (CTA) would identify 1) how emerging technologies and institutional innovations could potentially transform timely situational awareness of U.S. and global technology capabilities, 2) opportunities for innovation to transform U.S. domestic and international challenges, and 3) win-win opportunities across national missions. The program would be strategic and forward-looking, conducting work on a timeline of months and years rather than days and weeks, and would seek to generalize lessons from individual cases to inform the data and analytics capabilities that the government needs to build to support cross-mission critical technology policy…(More)”.

The case for lotteries as a tiebreaker of quality in research funding


Editorial at Nature: “Earlier this month, the British Academy, the United Kingdom’s national academy for humanities and social sciences, introduced an innovative process for awarding small research grants. The academy will use the equivalent of a lottery to decide between funding applications that its grant-review panels consider to be equal on other criteria, such as the quality of research methodology and study design.

Using randomization to decide between grant applications is relatively new, and the British Academy is in a small group of funders to trial it, led by the Volkswagen Foundation in Germany, the Austrian Science Fund and the Health Research Council of New Zealand. The Swiss National Science Foundation (SNSF) has arguably gone the furthest: it decided in late 2021 to use randomization in all tiebreaker cases across its entire grant portfolio of around 880 million Swiss francs (US$910 million).

Other funders should consider whether they should now follow in these footsteps. That’s because it is becoming clear that randomization is a fairer way to allocate grants when applications are too close to call, as a study from the Research on Research Institute in London shows (see go.nature.com/3s54tgw). Doing so would go some way towards assuaging concerns, especially among early-career researchers and those from historically marginalized communities, about the lack of fairness when grants are allocated using peer review.
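The tiebreaker mechanism the editorial describes is simple to state in code. The sketch below is an illustrative reconstruction, not any funder's actual procedure: the application records, the `score` field, and the single panel-score ranking are all assumptions. Applications are funded in score order, and only when a tie group straddles the funding cutoff are the remaining slots filled by lottery within that group.

```python
import random

def award_grants(applications, slots, rng=None):
    """Fund the top-scored applications; break ties at the cutoff by lottery."""
    rng = rng or random.Random()
    ranked = sorted(applications, key=lambda a: -a["score"])
    funded, i = [], 0
    while i < len(ranked) and len(funded) < slots:
        score = ranked[i]["score"]
        tie = [a for a in ranked if a["score"] == score]  # the tie group
        remaining = slots - len(funded)
        if len(tie) <= remaining:
            funded.extend(tie)                         # whole group fits: fund all
        else:
            funded.extend(rng.sample(tie, remaining))  # lottery at the cutoff
        i += len(tie)
    return funded

# Six applications, three slots: both top-scored applications are funded
# outright, and one of the three tied runners-up is chosen by lot.
apps = [{"id": "A", "score": 5}, {"id": "B", "score": 5},
        {"id": "C", "score": 4}, {"id": "D", "score": 4},
        {"id": "E", "score": 4}, {"id": "F", "score": 3}]
funded = award_grants(apps, 3, rng=random.Random(0))
```

Only applications at the cutoff are randomized; everything ranked clearly above it is funded as usual, which is what distinguishes a tiebreaker lottery from a full funding lottery.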

The British Academy/Leverhulme small-grants scheme distributes around £1.5 million (US$1.7 million) each year in grants of up to £10,000 each. These are valuable despite their relatively small size, especially for researchers starting out. The academy’s grants can be used only for direct research expenses, but small grants are also typically used to fund conference travel or to purchase computer equipment or software. Funders also use them to spot promising research talent for future (or larger) schemes. For these reasons and more, small grants are competitive — the British Academy says it is able to fund only 20–30% of applications in each funding round…(More)”.

Learning to Share: Lessons on Data-Sharing from Beyond Social Media


Paper by CDT: “What role has social media played in society? Did it influence the rise of Trumpism in the U.S. and the passage of Brexit in the UK? What about the way authoritarians exercise power in India or China? Has social media undermined teenage mental health? What about its role in building social and community capital, promoting economic development, and so on?

To answer these and other important policy-related questions, researchers such as academics, journalists, and others need access to data from social media companies. However, this data is generally not available to researchers outside of social media companies and, where it is available, it is often insufficient, meaning that we are left with incomplete answers.

Governments on both sides of the Atlantic have passed or proposed legislation to address the problem by requiring social media companies to provide certain data to vetted researchers (Vogus, 2022a). Researchers themselves have thought a lot about the problem, including the specific types of data that can further public interest research, how researchers should be vetted, and the mechanisms companies can use to provide data (Vogus, 2022b).

For their part, social media companies have sanctioned some methods of sharing data with certain types of researchers through APIs (e.g., for researchers with university affiliations) and with certain limitations (such as limits on how much and what types of data are available). In general, these efforts have been insufficient. In part, this is due to legitimate concerns, such as the need to protect user privacy or to avoid revealing company trade secrets. But in some cases, the lack of sharing is due to other factors, such as a lack of resources or knowledge about how to share data effectively, or resistance to independent scrutiny.

The problem is complex but not intractable. In this report, we look to other industries where companies share data with researchers through different mechanisms while also addressing concerns around privacy. In doing so, our analysis contributes to current public and corporate discussions about how to safely and effectively share social media data with researchers. We review experiences based on the governance of clinical trials, electricity smart meters, and environmental impact data…(More)”

Co-Producing Sustainability Research with Citizens: Empirical Insights from Co-Produced Problem Frames with Randomly Selected Citizens


Paper by Mareike Blum: “In sustainability research, knowledge co-production can play a supportive role at the science-policy interface (Norström et al., 2020). However, so far most projects have involved stakeholders in order to produce ‘useful knowledge’ for policy-makers. As a novel approach, research projects have integrated randomly selected citizens during knowledge co-production to make policy advice more reflective of societal perspectives and thereby increase its epistemic quality. Researchers are asked to consider citizens’ beliefs and values and integrate these in their ongoing research. This approach rests on pragmatist philosophy, according to which a joint deliberation on value priorities and the anticipated consequences of policy options ideally allows participants to co-develop sustainable and legitimate policy pathways (Edenhofer & Kowarsch, 2015; Kowarsch, 2016). This paper scrutinizes three promises of involving citizens in the problem framing: (1) creating input legitimacy, (2) enabling social learning among citizens and researchers, and (3) resulting in high epistemic quality of the co-produced knowledge. Based on empirical data, the first phases of two research projects in Germany were analysed and compared: the Ariadne research project on the German Energy Transition, and the Biesenthal Forest project at the local level in Brandenburg, Germany. We found that, although barriers exist, learning was enabled by confronting researchers with the problem perceptions of citizens. The step in which researchers interpret and translate problem frames in the follow-up knowledge production is the most important for assessing learning and epistemic quality…(More)”.