Living Labs for Public Sector Innovation: An Integrative Literature Review


Paper by Lars Fuglsang, Anne Vorre Hansen, Ines Mergel, and Maria Taivalsaari Røhnebæk: “The public administration literature and adjacent fields have devoted increasing attention to living labs as environments and structures enabling the co-creation of public sector innovation. However, living labs remain a somewhat elusive concept and phenomenon, and there is a lack of understanding of its versatile nature. To gain a deeper understanding of the multiple dimensions of living labs, this article provides a review assessing how the environments, methods, and outcomes of living labs are addressed in the extant research literature. The findings are drawn together in a model synthesizing how living labs link to public sector innovation, followed by an outline of knowledge gaps and future research avenues….(More)”.

The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action


Paper by Ben Green: “Recent controversies related to topics such as fake news, privacy, and algorithmic bias have prompted increased public scrutiny of digital technologies and soul-searching among many of the people associated with their development. In response, the tech industry, academia, civil society, and governments have rapidly increased their attention to “ethics” in the design and use of digital technologies (“tech ethics”). Yet almost as quickly as ethics discourse has proliferated across the world of digital technologies, the limitations of these approaches have also become apparent: tech ethics is vague and toothless, is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production. As a result of these limitations, many have grown skeptical of tech ethics and its proponents, charging them with “ethics-washing”: promoting ethics research and discourse to defuse criticism and government regulation without committing to ethical behavior. By looking at how ethics has been taken up in both science and business in superficial and depoliticizing ways, I recast tech ethics as a terrain of contestation where the central fault line is not whether it is desirable to be ethical, but what “ethics” entails and who gets to define it. This framing highlights the significant limits of current approaches to tech ethics and the importance of studying the formulation and real-world effects of tech ethics. In order to identify and develop more rigorous strategies for reforming digital technologies and the social relations that they mediate, I describe a sociotechnical approach to tech ethics, one that reflexively applies many of tech ethics’ own lessons regarding digital technologies to tech ethics itself….(More)”

Social-Tech Entrepreneurs: Building Blocks of a New Social Economy


Article by Mario Calderini, Veronica Chiodo, Francesco Gerli & Giulio Pasi: “Is it possible to create a sustainable, human-centric, resilient economy that achieves diverse objectives—including growth, inclusion, and equity? Could industry provide prosperity beyond jobs and economic growth, by adopting societal well-being as a compass to inform the production of goods and services?

The policy brief “Industry 5.0,” recently released by the European Commission, seems to answer positively. It makes the case for conceiving of economic growth as a means to inclusive prosperity. It is also an invitation to rethink the role of industry in society and to reprioritize policy targets and tools.

The following reflection, based on insights gathered from empirical research, is a first attempt to elaborate on how we might achieve this rethinking, and aims to contribute to the social economy debate in Europe and beyond.

A New Entrepreneurial Genre

A new entrepreneurial genre forged by the values of social entrepreneurship and fueled by technological opportunities is emerging, and it is well-poised to mend the economic and social wounds inflicted by both COVID-19 and the unexpected consequences of the early knowledge economy—an economy built around ideas and intellectual capital, and driven by diffused creativity, technology, and innovation.

We believe this genre, which we call social-tech entrepreneurship, is important to inaugurating a new generation of place-based, innovation-driven development policies inspired by a more inclusive idea of growth—though under the condition that industrial and innovation policies include it in their frame of reference.

This is partly because social innovation has undergone a complex transformation in recent years. It has seen a hybridization of social and commercial objectives and, as a direct consequence, new forms of management that support organizational missions that blend the two. Today, a more recent trend, reinforced by the pandemic, might push this transformation further: the idea that technologies—particularly those commoditized in the digital and software domains—offer a unique opportunity to solve societal challenges at scale.

Social-tech entrepreneurship differs from the work of high-tech companies in that, as researchers Geoffrey Desa and Suresh Kotha explain, it specifically aims to “develop and deploy technology-driven solutions to address social needs.” A social-tech entrepreneur also leverages technology not just to make parts of their operations more efficient, but to prompt a disruptive change in the way a specific social problem is addressed—and in a way that safeguards economic sustainability. In other words, they attempt to satisfy a social need through technological innovation in a financially sustainable manner. …(More)”.

Are Repeat Nudges Effective? For Tardy Tax Filers, It Seems So


Paper by Nicole Robitaille, Nina Mažar, and Julian House: “While behavioral scientists sometimes aim to nudge one-time actions, such as registering as an organ donor or signing up for a 401K, there are many other behaviors—making healthy food choices, paying bills, filing taxes, getting a flu shot—that are repeated on a daily, monthly, or annual basis. If you want to target these recurrent behaviors, can introducing a nudge once lead to consistent changes in behavior? What if you presented the same nudge several times—would seeing it over and over make its effects stronger, or just the opposite?

Decades of research in behavioral science have taught us a lot about nudges, but the field as a whole still doesn’t have a great understanding of the temporal dimensions of most interventions, including how long nudge effects last and whether or not they remain effective when repeated.

If you want an intervention to lead to lasting behavior change, prior research argues that it should target people’s beliefs, habits or the future costs of engaging in the behavior. Many nudges, however, focus instead on manipulating relatively small factors in the immediate choice environment to influence behavior, such as changing the order in which options are presented. In addition, relatively few field experiments have been able to administer and measure an intervention’s effects more than once, making it hard to know how long the effects of nudges are likely to persist.

While there is some research on what to expect when repeating nudges, the results are mixed. On the one hand, there is an extensive body of research in psychology on habituation, finding that, over time, people show decreased responses to the same stimuli. It wouldn’t be a giant leap to presume that seeing the same nudge again might decrease how much attention we pay to it, and thus hinder its ability to change our behavior. On the other hand, being exposed to the same nudge multiple times might help strengthen desired associations. Research on the mere exposure effect, for example, illustrates how the more times we see something, the more easily it is processed and the more we like it. It is also possible that being nudged multiple times could help foster enduring change, such as through new habit formation. Behavioral nudges aren’t going away, and their use will likely grow among policymakers and practitioners. It is critical to understand the temporal dimensions of these interventions, including how long one-off effects will last and if they will continue to be effective when seen multiple times….(More)”

Help us identify how data can make food healthier for us and the environment


The GovLab: “To make food production, distribution, and consumption healthier for people, animals, and the environment, we need to redesign today’s food systems. Data and data science can help us develop sustainable solutions — but only if we first define the questions that matter.

Globally, we are witnessing the damage that unsustainable farming practices have caused to the environment. At the same time, climate change is making our food systems more fragile, while the global population continues to rapidly increase. To feed everyone, we need to become more sustainable in our approach to producing, consuming, and disposing of food.

Policymakers and stakeholders need to work together to reimagine food systems and collectively make them more resilient, healthy, and inclusive.

Data will be integral to understanding where failures and vulnerabilities exist and what methods are needed to rectify them. Yet, the insights generated from data are only as good as the questions they seek to answer. To become smarter about current and future food systems using data, we need to ask the right questions first.

That’s where The 100 Questions Initiative comes in. It starts from the premise that to leverage data in a responsible and effective manner, data initiatives should be driven by demand, not supply. Working with a global cohort of experts, The 100 Questions seeks to map the most pressing and potentially impactful questions that data and data science can answer.

Today, the Barilla Foundation, the Center for European Policy Studies, and The Governance Lab at NYU Tandon School of Engineering are announcing the launch of the Food Systems Sustainability domain of The 100 Questions. We seek to identify the 10 most important questions that need to be answered to make food systems more sustainable…(More)”.

Collective data rights can stop big tech from obliterating privacy


Article by Martin Tisne: “…There are two parallel approaches that should be pursued to protect the public.

One is better use of class or group actions, otherwise known as collective redress actions. Historically, these have been limited in Europe, but in November 2020 the European Parliament passed a measure that requires all 27 EU member states to implement measures allowing for collective redress actions across the region. Compared with the US, the EU has stronger laws protecting consumer data and promoting competition, so class or group action lawsuits in Europe can be a powerful tool for lawyers and activists to force big tech companies to change their behavior even in cases where the per-person damages would be very low.

Class action lawsuits have most often been used in the US to seek financial damages, but they can also be used to force changes in policy and practice. They can work hand in hand with campaigns to change public opinion, especially in consumer cases (for example, by forcing Big Tobacco to admit to the link between smoking and cancer, or by paving the way for car seatbelt laws). They are powerful tools when there are thousands, if not millions, of similar individual harms, which add up to help prove causation. Part of the problem is getting the right information to sue in the first place. Government efforts, like a lawsuit brought against Facebook in December by the Federal Trade Commission (FTC) and a group of 46 states, are crucial. As the tech journalist Gilad Edelman puts it, “According to the lawsuits, the erosion of user privacy over time is a form of consumer harm—a social network that protects user data less is an inferior product—that tips Facebook from a mere monopoly to an illegal one.” In the US, as the New York Times recently reported, private lawsuits, including class actions, often “lean on evidence unearthed by the government investigations.” In the EU, however, it’s the other way around: private lawsuits can open up the possibility of regulatory action, which is constrained by the gap between EU-wide laws and national regulators.

Which brings us to the second approach: a little-known 2016 French law called the Digital Republic Bill, one of the few modern laws focused on automated decision making. The law currently applies only to administrative decisions taken by public-sector algorithmic systems, but it provides a sketch of what future laws could look like. It says that the source code behind such systems must be made available to the public. Anyone can request that code.

Importantly, the law enables advocacy organizations to request information on the functioning of an algorithm and the source code behind it even if they don’t represent a specific individual or claimant who is allegedly harmed. The need to find a “perfect plaintiff” who can prove harm in order to file a suit makes it very difficult to tackle the systemic issues that cause collective data harms. Laure Lucchesi, the director of Etalab, a French government office in charge of overseeing the bill, says that the law’s focus on algorithmic accountability was ahead of its time. Other laws, like the European General Data Protection Regulation (GDPR), focus too heavily on individual consent and privacy. But both the data and the algorithms need to be regulated…(More)”

A fair data economy is built upon collaboration


Report by Heli Parikka, Tiina Härkönen and Jaana Sinipuro: “For a human-driven and fair data economy to work, it must be based on three important and interconnected aspects: regulation based on ethical values; technology; and new kinds of business models. With a human-driven approach, individual and social interests determine the business conditions and data is used to benefit individuals and society.

When developing a fair data economy, the aim has been to use existing technologies, operating models and concepts across the boundaries between different sectors. The goal is to enable not only new data-based business but also easier digital everyday life that is based on the more efficient and personal management of data. The human-driven approach is closely linked to the MyData concept.

At the beginning of the IHAN project, there were very few easy-to-use, individually tailored digital services. For example, the most significant data-based consumer services were designed on the basis of the needs of large corporations. To create demand, prevailing mindsets had to be changed and decision-makers needed to be encouraged to change direction, companies had to find new business with new business models and individuals had to be persuaded to demand change.

The terms and frameworks of the platform and data economies needed further clarification for the development of a fair data economy. We sought out examples from other sectors and found that, in addition to “human-driven”, another defining concept that emerged was “fair”, with fairness defined as a key goal in the IHAN project. A fair model also takes financial aspects into account and recognises the significance of companies and new services as a source of well-being.

Why did Sitra want to tackle this challenge to begin with? What had thus far been available to people was an unfair data economy model, which needed to be changed. The data economy direction had been defined by a handful of global companies, whose business models are based on collecting and managing data on their own platforms and on their own terms. There was a need to develop an alternative, a European data economy model.

One of the tasks of the future fund is to foresee future trends, the fair and human-driven use of data being one of them. The objective was to approach the theme in a pluralistic manner from the perspectives of different participants in society. Sitra’s unique position as an independent future fund made it possible to launch the project.

A fair data economy has become one of Sitra’s strategic spearheads, and a new theme is being prepared at the time of writing. The lessons learned and tools created so far will be moved under that theme and developed further, making them available to everyone who needs them….(More)”.

Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda


Paper by Anneke Zuiderwijk, Yu-Che Chen and Fadi Salem: “To lay the foundation for the special issue that this research article introduces, we 1) present a systematic review of existing literature on the implications of the use of Artificial Intelligence (AI) in public governance and 2) develop a research agenda. First, an assessment based on 26 articles on this topic reveals much exploratory, conceptual, qualitative, and practice-driven research in studies reflecting the increasing complexities of using AI in government – and the resulting implications, opportunities, and risks thereof for public governance. Second, based on both the literature review and the analysis of articles included in this special issue, we propose a research agenda comprising eight process-related recommendations and seven content-related recommendations. Process-wise, future research on the implications of the use of AI for public governance should move towards more public sector-focused, empirical, multidisciplinary, and explanatory research while focusing more on specific forms of AI rather than AI in general. Content-wise, our research agenda calls for the development of solid, multidisciplinary, theoretical foundations for the use of AI for public governance, as well as investigations of effective implementation, engagement, and communication plans for government strategies on AI use in the public sector. Finally, the research agenda calls for research into managing the risks of AI use in the public sector, governance modes possible for AI use in the public sector, performance and impact measurement of AI use in government, and impact evaluation of scaling-up AI usage in the public sector….(More)”.

Did the GDPR increase trust in data collectors? Evidence from observational and experimental data


Paper by Paul C. Bauer, Frederic Gerdon, Florian Keusch, Frauke Kreuter & David Vannette: “In the wake of the digital revolution and connected technologies, societies store an ever-increasing amount of data on humans, their preferences, and behavior. These modern technologies create a trust challenge, insofar as individuals have to trust that data collectors such as private organizations, government institutions, and researchers will not misuse their data. Privacy regulations should increase trust because they provide laws that increase transparency and allow for punishment in cases in which the trustee violates trust. The introduction of the General Data Protection Regulation (GDPR) in May 2018 – a wide-reaching regulation in EU law on data protection and privacy that covers millions of individuals in Europe – provides a unique setting to study the impact of privacy regulation on trust in data collectors. We collected survey panel data in Germany around the implementation date and ran a survey experiment with a GDPR information treatment. Our observational and experimental evidence does not support the hypothesis that the GDPR has positively affected trust. This finding and our discussion of the underlying reasons are relevant for the wider research field of trust, privacy, and big data….(More)”

A growing number of governments hope to clone America’s DARPA


The Economist: “Using messenger RNA to make vaccines was an unproven idea. But if it worked, the technique would revolutionise medicine, not least by providing protection against infectious diseases and biological weapons. So in 2013 America’s Defence Advanced Research Projects Agency (DARPA) gambled. It awarded a small, new firm called Moderna $25m to develop the idea. Eight years and more than 175m doses later, Moderna’s covid-19 vaccine sits on the list of innovations for which DARPA can claim at least partial credit, alongside weather satellites, GPS, drones, stealth technology, voice interfaces, the personal computer and the internet.

It is the agency that shaped the modern world, and this success has spurred imitators. In America there are ARPAs for homeland security, intelligence and energy, as well as the original defence one. President Joe Biden has asked Congress for $6.5bn to set up a health version, which will, the president vows, “end cancer as we know it”. His administration also has plans for another, to tackle climate change. Germany has recently established two such agencies: one civilian (the Federal Agency for Disruptive Innovation, or SPRIN-D) and another military (the Cybersecurity Innovation Agency). Japan’s version is called Moonshot R&D. In Britain, a bill for an Advanced Research and Invention Agency—often referred to as UK ARPA—is making its way through parliament….(More)”.