Nudges: Four reasons to doubt popular technique to shape people’s behavior


Article by Magda Osman: “Throughout the pandemic, many governments have had to rely on people doing the right thing to reduce the spread of the coronavirus – ranging from social distancing to handwashing. Many enlisted the help of psychologists for advice on how to “nudge” the public to do what was deemed appropriate.

Nudges have been around since the 1940s and originally were referred to as behavioural engineering. They are a set of techniques developed by psychologists to promote “better” behaviour through “soft” interventions rather than “hard” ones (mandates, bans, fines). In other words, people aren’t punished if they fail to follow them. The nudges are based on psychological and behavioural economic research into human behaviour and cognition.

The nudges can involve subtle as well as obvious methods. Authorities may set a “better” choice, such as donating your organs, as a default – so people have to opt out of a register rather than opt in. Or they could make a healthy option more attractive through food labelling.

But, despite the soft approach, many people aren’t keen on being nudged. During the pandemic, for example, scientists examined people’s attitudes to nudging in social and news media in the UK, and discovered that half of the sentiments expressed in social media posts were negative…(More)”.

Interoperable, agile, and balanced


Brookings Paper on Rethinking technology policy and governance for the 21st century: “Emerging technologies are shifting market power and introducing a range of risks that can only be managed through regulation. Unfortunately, current approaches to governing technology are insufficient, fragmented, and lack focus on actionable goals. This paper proposes three tools that can be leveraged to support fit-for-purpose technology regulation for the 21st century: first, transparent and holistic policymaking levers that clearly communicate goals and identify trade-offs at the national and international levels; second, revamped efforts to collaborate across jurisdictions, particularly through standard-setting and the gathering of evidence on critical incidents across jurisdictions; and third, a shift towards agile governance, whether achieved through the system, its design, or both…(More)”.

Technology and the Global Struggle for Democracy


Essay by Manuel Muniz: “The commemoration of the first anniversary of the January 6, 2021, attack on the US Capitol by supporters of former President Donald Trump showed that the extreme political polarization that fueled the riot also frames Americans’ interpretations of it. It would, however, be gravely mistaken to view what happened as a uniquely American phenomenon with uniquely American causes. The disruption of the peaceful transfer of power that day was part of something much bigger.

As part of the commemoration, President Joe Biden said that a battle is being fought over “the soul of America.” What is becoming increasingly clear is that this is also true of the international order: its very soul is at stake. China is rising and asserting itself. Populism is widespread in the West and major emerging economies. And chauvinistic nationalism has re-emerged in parts of Europe. All signs point to increasing illiberalism and anti-democratic sentiment around the world.

Against this backdrop, the US hosted in December a (virtual) “Summit for Democracy” that was attended by hundreds of national and civil-society leaders. The message of the gathering was clear: democracies must assert themselves firmly and proactively. To that end, the summit devoted numerous sessions to studying the digital revolution and its potentially harmful implications for our political systems.

Emerging technologies pose at least three major risks for democracies. The first concerns how they structure public debate. Social networks balkanize public discourse by segmenting users into ever smaller like-minded communities. Algorithmically driven information echo chambers make it difficult to build social consensus. Worse, social networks are not liable for the content they distribute, which means they can allow misinformation to spread on their platforms with impunity…(More)”.

A data ‘black hole’: Europol ordered to delete vast store of personal data


Article by Apostolis Fotiadis, Ludek Stavinoha, Giacomo Zandonini, Daniel Howden: “…The EU’s police agency, Europol, will be forced to delete much of a vast store of personal data after the bloc’s data protection watchdog found it had been amassed unlawfully. The unprecedented finding from the European Data Protection Supervisor (EDPS) targets what privacy experts are calling a “big data ark” containing billions of points of information. Sensitive data in the ark has been drawn from crime reports, hacked from encrypted phone services and sampled from asylum seekers never involved in any crime.

According to internal documents seen by the Guardian, Europol’s cache contains at least 4 petabytes – equivalent to 3m CD-Roms or a fifth of the entire contents of the US Library of Congress. Data protection advocates say the volume of information held on Europol’s systems amounts to mass surveillance and is a step on its road to becoming a European counterpart to the US National Security Agency (NSA), the organisation whose clandestine online spying was revealed by whistleblower Edward Snowden….(More)”.

Are we witnessing the dawn of post-theory science?


Essay by Laura Spinney: “Does the advent of machine learning mean the classic methodology of hypothesise, predict and test has had its day?…

Isaac Newton apocryphally discovered gravity after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship – his second law of motion, one that could be expressed as an equation, F=ma – and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).
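The equation the essay cites is simple enough to run. A minimal sketch (the function name and numbers are mine, purely illustrative) of the point being made: one compact theory predicts the behaviour of very different objects.

```python
# Newton's second law, F = m*a, rearranged to predict acceleration.
# Illustrative only: values are made up for the example.

def predicted_acceleration(force_newtons: float, mass_kg: float) -> float:
    """Predict acceleration from F = m*a, i.e. a = F / m."""
    return force_newtons / mass_kg

# Gravity near Earth's surface exerts roughly 9.8 N per kilogram of mass,
# so a 0.1 kg apple and a 1,000 kg car both accelerate at ~9.8 m/s^2 in
# free fall -- the same equation, and the same prediction, covers both.
apple = predicted_acceleration(force_newtons=0.98, mass_kg=0.1)
car = predicted_acceleration(force_newtons=9800.0, mass_kg=1000.0)
print(apple, car)
```

This transparency is exactly what the essay contrasts with machine learning models: here every prediction can be traced back to an explicit rule.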

Contrast how science is increasingly done today. Facebook’s machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can’t lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that – no theory, in a word. They just work and do so well. We witness the social effects of Facebook’s predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were – oversimplifications of reality. Soon, the old scientific method – hypothesise, predict, test – would be relegated to the dustbin of history. We’d stop looking for the causes of things and be satisfied with correlations.

With the benefit of hindsight, we can say that what Anderson saw was real (and he wasn’t alone in seeing it). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. “We have leapfrogged over our ability to even write the theories that are going to be useful for description,” says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. “We don’t even know what they would look like.”

But Anderson’s prediction of the end of theory looks to have been premature – or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what’s the best way to acquire knowledge and where does science go from here?…(More)”

A data-based participatory approach for health equity and digital inclusion: prioritizing stakeholders


Paper by Aristea Fotopoulou, Harriet Barratt, and Elodie Marandet: “This article starts from the premise that projects informed by data science can address social concerns, beyond prioritizing the design of efficient products or services. How can we bring the stakeholders and their situated realities back into the picture? It is argued that data-based, participatory interventions can improve health equity and digital inclusion while avoiding the pitfalls of top-down, technocratic methods. A participatory framework places users, patients and citizens, as stakeholders, at the centre of the process, and can offer complex, sustainable benefits, which go beyond simply the experience of participation or the development of an innovative design solution. A significant benefit, for example, is the development of skills, which should not be seen as a by-product of the participatory processes, but as a central element of empowering marginalized or excluded communities to participate in public life. By drawing from different examples in various domains, the article discusses what can be learnt from implementations of schemes using data science for social good, human-centric design, arts and wellbeing, to argue for a data-centric, creative and participatory approach to address health equity and digital inclusion in tandem…(More)”.

The Tech That Comes Next


Book by Amy Sample Ward and Afua Bruce: “Who is part of technology development, who funds that development, and how we put technology to use all influence the outcomes that are possible. To change those outcomes, we must – all of us – shift our relationship to technology, how we use it, build it, fund it, and more. In The Tech That Comes Next, Amy Sample Ward and Afua Bruce – two leaders in equitable design and use of new technologies – invite you to join them in asking big questions and making change from wherever you are today. 

This book connects ideas and conversations across sectors from artificial intelligence to data collection, community centered design to collaborative funding, and social media to digital divides. Technology and equity are inextricably connected, and The Tech That Comes Next helps you accelerate change for the better…(More)”.

Data in Collective Impact: Focusing on What Matters


Article by Justin Piff: “One of the five conditions of collective impact, “shared measurement systems,” calls upon initiatives to identify and share key metrics of success that align partners toward a common vision. While the premise that data should guide shared decision-making is not unique to collective impact, its articulation 10 years ago as a necessary condition for collective impact catalyzed a focus on data use across the social sector. In the original article on collective impact in Stanford Social Innovation Review, the authors describe the benefits of using consistent metrics to identify patterns, make comparisons, promote learning, and hold actors accountable for success. While this vision for data collection remains relevant today, the field has developed a more nuanced understanding of how to make it a reality….

Here are four lessons from our work to help collective impact initiatives and their funders use data more effectively for social change.

1. Prioritize the Learning, Not the Data System

Those of us who are “data people” have espoused the benefits of shared data systems and common metrics too many times to recount. But a shared measurement system is only a means to an end, not an end in itself. Too often, new collective impact initiatives focus on creating the mythical, all-knowing data system—spending weeks, months, and even years researching or developing the perfect software that captures, aggregates, and computes data from multiple sectors. They let the perfect become the enemy of the good, as the pursuit of perfect data and technical precision inhibits meaningful action. And communities pay the price.

Using data to solve complex social problems requires more than a technical solution. Many communities in the US have more data than they know what to do with, yet they rarely spend time thinking about the data they actually need. Before building a data system, partners must focus on how they hope to use data in their work and identify the sources and types of data that can help them achieve their goals. Once those data are identified and collected, partners, residents, students, and others can work together to develop a shared understanding of what the data mean and move forward. In Connecticut, the Hartford Data Collaborative helps community agencies and leaders do just this. For example, it has matched programmatic data against Hartford Public Schools data and National Student Clearinghouse data to get a clear picture of postsecondary enrollment patterns across the community. The data also capture services provided to residents across multiple agencies and can be disaggregated by gender, race, and ethnicity to identify and address service gaps….(More)”.
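The matching-and-disaggregation step the Hartford example describes can be sketched in a few lines. This is a hypothetical toy, not the collaborative's actual pipeline: the records, field names, and helper function are all invented for illustration.

```python
# Hypothetical sketch: join program records to enrollment outcomes by a
# shared student id (the "match" step), then disaggregate enrollment
# rates by a demographic field to surface service gaps. All data invented.

from collections import defaultdict

program_records = [
    {"student_id": 1, "race": "Black", "gender": "F"},
    {"student_id": 2, "race": "Hispanic", "gender": "M"},
    {"student_id": 3, "race": "White", "gender": "F"},
    {"student_id": 4, "race": "Black", "gender": "M"},
]

# Outcomes from a clearinghouse-style source: did the student enrol?
enrollment = {1: True, 2: False, 3: True, 4: False}

def enrollment_rate_by(records, enrolled, field):
    """Share of matched students who enrolled, grouped by `field`."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        if r["student_id"] in enrolled:  # only students present in both sets
            totals[r[field]] += 1
            hits[r[field]] += enrolled[r["student_id"]]
    return {group: hits[group] / totals[group] for group in totals}

print(enrollment_rate_by(program_records, enrollment, "race"))
# -> {'Black': 0.5, 'Hispanic': 0.0, 'White': 1.0}
```

The point of the article holds even at this toy scale: the technical join is trivial; the hard work is agreeing on which fields matter and what the resulting numbers mean.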

Our Common AI Future – A Geopolitical Analysis and Road Map, for AI Driven Sustainable Development, Science and Data Diplomacy


(Open Access) Book by Francesco Lapenta: “The premise of this concise but thorough book is that the future, while uncertain and open, is not arbitrary, but the result of a complex series of competing decisions, actors, and events that began in the past, have reached a certain configuration in the present, and will continue to develop into the future. These past and present conditions constitute the basis and origin of future developments that have the potential to shape into a variety of different possible, probable, undesirable or desirable future scenarios. The realisation that these future scenarios cannot be totally arbitrary gives scope to the study of the past, indispensable to fully understand the facts and actors and forces that contributed to the formation of the present, and how certain systems, or dominant models, came to be established (I). The relative openness of future scenarios gives scope to the study of what competing forces and models might exist, their early formation, actors, and initiatives (II) and how they may act as catalysts for alternative theories, models (III and IV) and actions that can influence our future and change its path (V)…

The analyses in the book, which are loosely divided into three phases, move from the past to the present, and begin with identifying best practices and some of the key initiatives that have attempted to achieve these global collaborative goals over the last few decades. Then, moving forward, they describe a roadmap to a possible future based on already existing and developing theories, initiatives, and tools that could underpin these global collaborative efforts in the specific areas of AI and Sustainable Development. In the Road Map for AI Driven Sustainable Development, the analyses identify and stand on the shoulders of a number of past and current global initiatives that have worked for decades to lay the groundwork for this alternative evolutionary and collaborative model. The title of this book directs, acknowledges, and encourages readers to engage with one of these pivotal efforts, the “Our Common Future” report, the Brundtland Commission report published in 1987 by the World Commission on Environment and Development (WCED). Building on the report’s ambitious humanistic and socioeconomic landscape and ambitions, the analyses investigate a variety of existing and developing best practices that could lead to, or inspire, a shared scientific collaborative model for AI development. These rest on the understanding that, despite political rivalry and competition, governments should collaborate on at least two fundamental issues: one, establishing a set of global “Red Lines” to prohibit the development and use of AIs in specific applications that might pose an ethical or existential threat to humanity and the planet; and two, creating a set of “Green Zones” for scientific diplomacy and cooperation in order to capitalize on the opportunities that the impending AI era may represent in confronting major collective challenges such as the health and climate crises, the energy crisis, and the sustainable development goals identified in the report and developed by other subsequent global initiatives…(More)”.

A time for humble governments


Essay by Juha Leppänen: “Let’s face it. During the last decade, liberal democracies have not been especially successful in steering societies through our urgent, collective problems. This is reflected in the 2021 Edelman Trust Barometer Spring Update: A World in Trauma: Democratic governments are less trusted in general by their own citizens. While some governments have fared better than others, the trend is clear…

Humility entails both a willingness to listen to different opinions, and a capacity to review one’s own actions in light of new insights. True humility does not need to be deferential. But embracing humility legitimises leadership by cultivating stronger relationships and greater trust among other political and societal stakeholders — particularly with those with different perspectives. In doing so, it can facilitate long-term action and ensure policies are much more resilient in the face of uncertainty.

There are several core steps to establishing humble governance:

  • Some common ground is better than none, so strike a thin consensus with the opposition around a broad framework goal. For example, consider carbon neutrality targets. To begin with, forging consensus does not require locking down the details of how and what. Take emissions in agriculture: all that is needed is general agreement that significant cuts in CO2 emissions in this sector are necessary in order to hit our national net zero goal. While this can be harder in extremely polarised environments, a thin consensus of some sort usually can be built on any problem that is already widely recognised — no matter how small. This is even the case in political environments dominated by populist leaders.
  • Devolve problem-solving systemically. First, set aside hammering out blueprints and focus on issuing a broad launch plan, backed by a robust process for governmental decision-making. Look for intelligent incentives to prompt collaboration. In the carbon neutrality example, this would begin by identifying where the most critical potential tensions or jurisdictional disputes lie. Since local stakeholders tend to want to resolve tensions locally, give them a clear role in the planning. Divide up responsibility for achieving goals across sectors of the economy, identify key stakeholders needed at the table in each sector, and create a procedure for reviewing progress. Collaboration can be incentivised by offering those who participate the ability, say, to influence future regulations, or by penalising those who refuse to take part.
  • Revise framework goals through robust feedback mechanisms. A truly humble government’s steering documents should be seen as living documents, rather than definitive blueprints. There should be regular consultation with stakeholders on progress, and elected representatives should review the progress on the original problem statement and how success is defined. Where needed, the government in power can use this process to decide whether to reopen discussions with the opposition about how to revise the current goals…(More)”.