Report by Alistair Baldwin and Kelly Shuttleworth: “The government’s ambitious transport plans will falter unless policy makers – ministers, civil servants and other public officials – improve the way they identify and use evidence to inform their decisions.
This report compares the use of evidence in the UK, the Netherlands, Sweden, Germany and New Zealand, and finds that England is an outlier in not having a coordinated transport strategy. This damages both scrutiny and coordination of transport policy.
The government has plans to reform bus services, support cycling, review rail franchising, and invest more than £60 billion in transport projects over the next five years. But these plans are not integrated. The Department for Transport should develop a new strategy that integrates different modes of transport, rather than planning mode by mode, to improve political understanding of trade-offs and scrutiny of policy decisions.
The DfT is a well-resourced department, with significant expertise, responsibilities and a wide array of analysts. But its reliance on economic evidence means other forms of evidence can appear neglected in transport decision making – including social research, evaluation or engineering. Decision makers are often too attached to the importance of the Benefit-Cost Ratio at the expense of other forms of evidence.
The government needs to improve its attitude to evaluation of past projects. There are successes – like the evaluation of the Cycle City Ambition Fund – but they are outnumbered by failures – like the evaluation of projects in the Local Growth Fund. Good practice, such as that from Highways England, should become common across the transport sector, helped by providing dedicated funding to local authorities to properly evaluate projects….(More)”.
Paper by Adam M. Mastroianni et al: “Do conversations end when people want them to? Surprisingly, behavioral science provides no answer to this fundamental question about the most ubiquitous of all human social activities. In two studies of 932 conversations, we asked conversants to report when they had wanted a conversation to end and to estimate when their partner (who was an intimate in Study 1 and a stranger in Study 2) had wanted it to end. Results showed that conversations almost never ended when both conversants wanted them to and rarely ended when even one conversant wanted them to and that the average discrepancy between desired and actual durations was roughly half the duration of the conversation. Conversants had little idea when their partners wanted to end and underestimated how discrepant their partners’ desires were from their own. These studies suggest that ending conversations is a classic “coordination problem” that humans are unable to solve because doing so requires information that they normally keep from each other. As a result, most conversations appear to end when no one wants them to….(More)”.
Book by Stefan Grundmann and Philipp Hacker: “Choice is a key concept of our time. It is a foundational mechanism for every legal order in societies that are, politically, constituted as democracies and, economically, built on the market mechanism. Thus, choice can be understood as an atomic structure that grounds core societal processes. In recent years, however, the debate over the right way to theorise choice—for example, as a rational or a behavioural type of decision making—has intensified. This collection therefore provides an in-depth discussion of the promises and perils of specific types of theories of choice. It shows how the selection of a specific theory of choice can make a difference for concrete legal questions, particularly in the regulation of the digital economy or in choosing between market, firm, or network.
In its first part, the volume provides an accessible overview of the current debates about rational versus behavioural approaches to theories of choice. The remainder of the book structures the vast landscape of theories of choice along three main types: individual, collective, and organisational decision making. As theories of choice proliferate and become ever more sophisticated, however, the process of choosing an adequate theory of choice becomes increasingly intricate, too. This volume addresses this selection problem for the various legal arenas in which individual, organisational, and collective decisions matter. By drawing on economic, technological, political, and legal points of view, the volume shows which theories of choice are at the disposal of the legally relevant decision maker, and how they can be implemented for the solution of concrete legal problems….(More)”.
European Parliament Think Tank: “Given the central role that online platforms (OPs) play in the digital economy, questions arise about their responsibility in relation to illegal/harmful content or products hosted in the frame of their operation. Against this background, this study reviews the main legal/regulatory challenges associated with OP operations and analyses the incentives for OPs, their users and third parties to detect and remove illegal/harmful and dangerous material, content and/or products. To create a functional classification which can be used for regulatory purposes, it discusses the notion of OPs and attempts to categorise them under multiple criteria. The study then maps and critically assesses the whole range of OP liabilities, taking hard and soft law, self-regulation and national legislation into consideration, whenever relevant. Finally, the study puts forward policy options for an efficient EU liability regime: (i) maintaining the status quo; (ii) awareness-raising and media literacy; (iii) promoting self-regulation; (iv) establishing co-regulation mechanisms and tools; (v) adopting statutory legislation; (vi) modifying OPs’ secondary liability by employing two different models – (a) by clarifying the conditions for liability exemptions provided by the e-Commerce Directive or (b) by establishing a harmonised regime of liability….(More)”.
Essay by Yuval Noah Harari in the Financial Times: “…The Covid year has exposed an even more important limitation of our scientific and technological power. Science cannot replace politics. When we come to decide on policy, we have to take into account many interests and values, and since there is no scientific way to determine which interests and values are more important, there is no scientific way to decide what we should do.
For example, when deciding whether to impose a lockdown, it is not sufficient to ask: “How many people will fall sick with Covid-19 if we don’t impose the lockdown?” We should also ask: “How many people will experience depression if we do impose a lockdown? How many people will suffer from bad nutrition? How many will miss school or lose their job? How many will be battered or murdered by their spouses?”
Even if all our data is accurate and reliable, we should always ask: “What do we count? Who decides what to count? How do we evaluate the numbers against each other?” This is a political rather than scientific task. It is politicians who should balance the medical, economic and social considerations and come up with a comprehensive policy.
Similarly, engineers are creating new digital platforms that help us function in lockdown, and new surveillance tools that help us break the chains of infection. But digitalisation and surveillance jeopardise our privacy and open the way for the emergence of unprecedented totalitarian regimes. In 2020, mass surveillance has become both more legitimate and more common. Fighting the epidemic is important, but is it worth destroying our freedom in the process? It is the job of politicians rather than engineers to find the right balance between useful surveillance and dystopian nightmares.
Three basic rules can go a long way in protecting us from digital dictatorships, even in a time of plague. First, whenever you collect data on people — especially on what is happening inside their own bodies — this data should be used to help these people rather than to manipulate, control or harm them. My personal physician knows many extremely private things about me. I am OK with it, because I trust my physician to use this data for my benefit. My physician shouldn’t sell this data to any corporation or political party. It should be the same with any kind of “pandemic surveillance authority” we might establish….(More)”.
Essay by Sun-ha Hong: “In a society beset with black-boxed algorithms and vast surveillance systems, transparency is often hailed as liberal democracy’s superhero. It’s a familiar story: inject the public with information to digest, then await their rational deliberation and improved decision making. Whether in discussions of facial recognition software or platform moderation, we run into the argument that transparency will correct the harmful effects of algorithmic systems. The trouble is that in our movies and comic books, superheroes are themselves deus ex machina: black boxes designed to make complex problems disappear so that the good guys can win. Too often, transparency is asked to save the day on its own, under the assumption that disinformation or abuse of power can be shamed away with information.
Transparency without adequate support, however, can quickly become fuel for speculation and misunderstanding….
All this is part of a broader pattern in which the very groups who should be held accountable by the data tend to be its gatekeepers. Facebook is notorious for transparency-washing strategies, in which it dangles data access like a carrot but rarely follows through in actually delivering it. When researchers worked to create more independent means of holding Facebook accountable — as New York University’s Ad Observatory did last year, using volunteer researchers to build a public database of ads on the platform — Facebook threatened to sue them. Despite the lofty rhetoric around Facebook’s Oversight Board (often described as a “Supreme Court” for the platform), it falls into the same trap of transparency without power: the scope is limited to individual cases of content moderation, with no binding authority over the company’s business strategy, algorithmic design, or even similar moderation cases in the future.
Here, too, the real bottleneck is not information or technology, but power: the legal, political and economic pressure necessary to compel companies like Facebook to produce information and to act on it. We see this all too clearly when ordinary people do take up this labour of transparency, and attempt to hold technological systems accountable. In August 2020, Facebook users reported the Kenosha Guard group more than 400 times for incitement of violence. But Facebook declined to take any action until an armed shooter travelled to Kenosha, Wisconsin, and killed two protesters. When transparency is compromised by the concentration of power, it is often the vulnerable who are asked to make up the difference — and then to pay the price.
Transparency cannot solve our problems on its own. In his book The Rise of the Right to Know, journalism scholar Michael Schudson argues that transparency is better understood as a “secondary or procedural morality”: a tool that only becomes effective by other means. We must move beyond the pernicious myth of transparency as a universal solution, and address the distribution of economic and political power that is the root cause of technologically amplified irrationality and injustice….(More)”.
Book edited by Ann Blair, Paul Duguid, Anja-Silvia Goeing, and Anthony Grafton: “Thanks to modern technological advances, we now enjoy seemingly unlimited access to information. Yet how did information become so central to our everyday lives, and how did its processing and storage make our data-driven era possible? This volume is the first to consider these questions in comprehensive detail, tracing the global emergence of information practices, technologies, and more, from the premodern era to the present. With entries spanning archivists to algorithms and scribes to surveilling, this is the ultimate reference on how information has shaped and been shaped by societies.
Written by an international team of experts, the book’s inspired and original long- and short-form contributions reconstruct the rise of human approaches to creating, managing, and sharing facts and knowledge. Thirteen full-length chapters discuss the role of information in pivotal epochs and regions, with chief emphasis on Europe and North America, but also substantive treatment of other parts of the world as well as current global interconnections. More than 100 alphabetical entries follow, focusing on specific tools, methods, and concepts—from ancient coins to the office memo, and censorship to plagiarism. The result is a wide-ranging, deeply immersive collection that will appeal to anyone drawn to the story behind our modern mania for an informed existence….(More)”.
About: “The Copenhagen Manual is a helping hand for those who are in a position to further data-informed strategies for public sector development or have been given the responsibility for preparing, analysing or communicating a survey on public sector innovation.
Like other instruction manuals, the Copenhagen Manual offers examples of use, handy tips and general warnings. The manual discusses setting strategic goals, communication, reaching respondents, adapting the questionnaire and defining public sector innovation.
Internationally comparable data
The manual offers an opportunity to mirror public sector innovation capacity by way of internationally comparable data. The Copenhagen Manual, with emphasis on the ‘open’ in Copenhagen, is:
- the result of an open co-creation process that welcomed the participation of all interested parties
- based on the open sharing of a multitude of experiences, good and bad
- open to interpretation, making it usable in different national contexts and open to continuous discussion of added practical experience as actors from more countries conduct surveys on public sector innovation…(More)”.
Paper by Daniel Innerarity: “Democracy is possible because of an increase in the complexity of society, but that same complexity seems to threaten democracy. There is a clear imbalance between people’s actual competence and the expectation that citizens in a democratic society will be politically competent. It is not only that society has become more complex but that democratization itself increases the degree of social complexity. This unintelligibility can be overcome through the acquisition of some political competence—such as improving individual knowledge, diverse strategies for simplification or recourse to the experts—that partially reduce this imbalance. My hypothesis is that despite the attraction of de-democratizing procedures, the best solutions are those that are most democratic: strengthening the cooperation and the institutional organization of collective intelligence. The purpose of this article is not to solve all the problems I touch on, but rather to examine how they are related and to provide a general framework for the problem of de-democratization through misunderstanding….(More)”.
Book by Dilip Soman and Catherine Yeung: “…This edited volume represents the first output from this international partnership. The book is designed to reflect our conceptual thinking, outline some early results from the partnership and an agenda for research and practice, and provide roadmaps to help both practitioners and academics converge in the common quest of developing behaviorally informed organizations. The book is divided into four parts.
In Part 1, “The Behaviorally Informed Organization,” four chapters lay out an agenda for what such an organization should be and could be. In chapter 1, Soman talks about the science of using behavioral science by developing a brief history of the field of behavioral science, outlining organizational realities, and generating a research agenda to help develop BIOrgs. In chapter 2, Feng and colleagues further develop an understanding of organizational realities and outline what resources and capabilities organizations need to develop in order to be truly behaviorally informed. In particular, they develop the notion of the cost of experimentation and make the point that driving down the cost of experimentation is key in developing behaviorally informed organizations. In chapter 3, Vinski asks and answers the question, “Why should organizations even want to be behaviorally informed?”; and in chapter 4, O’Malley and Peters add to that question by further addressing why organizations might actively resist the need to be behaviorally informed.
Organizational settings provide existing tools and also additional complexities, and in Part 2, “Overarching Insights and Tools,” four chapters address some of these organizational realities. Chapter 5 talks about “sludge” – small aspects of an organizationally created context that create impedance for end-users. If sludge is not cleared, the effectiveness of behavioral interventions will be constrained, and hence this chapter makes a case for identifying and eliminating sludge. In chapter 6, Duncan and colleagues provide a guide to writing guidelines, an important tool for most policymakers and businesses as they attempt to provide helpful information to their citizens and customers. Given that organizations have multiple interactions for multiple products and services with their end-users, a binary classification into econs and humans is not feasible or helpful. Therefore, in chapter 7, Ireland talks about the boundedly rational complex consumer continuum, a nuanced framework for segmenting recipients of behavioral interventions. Given that end-users are inundated with information and other stimuli from organizations, it is unclear that they will attend to it all. In chapter 8, Hilchey and Taylor write about the psychology of attention and its implications for helping end-users make better decisions….(More)”.