Resetting the state for the post-COVID digital age


Blog by Carlos Santiso: “The COVID-19 crisis is putting our global digital resilience to the test. It has revealed the importance of a country’s digital infrastructure as the backbone of the economy, not just as an enabler of the tech economy. Digitally advanced governments, such as Estonia, have been able to put their entire bureaucracies in remote mode in a matter of days, without major disruption. And some early evidence even suggests that their productivity increased during lockdown.

With the crisis, the costs of not going digital have largely surpassed the risks of doing so. Countries and cities lagging behind have realised the necessity to boost their digital resilience and accelerate their digital transformation. Spain, for example, adopted an ambitious plan to inject 70 billion euros into its digital transformation over the next five years, with a Digital Spain 2025 agenda comprising 10 priorities and 48 measures. Brazil, for its part, was already taking steps towards the digital transformation of its public sector before the COVID-19 crisis hit. The crisis is accelerating this transformation.

The great accelerator

Long before the crisis hit, the data-driven digital revolution had been challenging governments to modernise and become more agile, open and responsive. Progress has nevertheless been uneven, hindered by a variety of factors, from political resistance to budget constraints. Going digital requires the sort of whole-of-government reforms that need political muscle and long-term vision to break up traditional data silos within bureaucracies keen to preserve their power. In bureaucracies, information is power. Now that information has become ubiquitous, governing data has become a critical challenge.

Cutting red tape will be central to the recovery. Many governments are fast-tracking regulatory simplification and administrative streamlining to reboot hard-hit economic sectors. Digitalisation is resetting the relationship between states and citizens, a Copernican revolution for our rule-based bureaucracies…(More)”.

Data for Policy: Junk-Food Diet or Technological Frontier?


Blog by Ed Humpherson at Data & Policy: “At the Office for Statistics Regulation, thinking about these questions is our day job. We set the standards for Government statistics and data through our Code of Practice for Statistics. And we review how Government departments are living up to these standards when they publish data and statistics. We routinely look at how Government statistics are used in public debate.

Based on this, I would propose four factors that ensure that new data sources and tools serve the public good. They do so when:

1. When data quality is properly tested and understood:

As my colleague Penny Babb wrote recently in a blog: “Don’t trust the data. If you’ve found something interesting, something has probably gone wrong!” People who work routinely with data develop a sort of innate scepticism, which Penny’s blog captures neatly. Understanding the limitations of both the data, and of the inferences you make from the data, is the starting point for any appropriate role for data in policy. Accepting results and insights from new data at face value is a mistake. Much better to test the quality, explore the risks of mistakes, and only then to share findings and conclusions.

2. When the risks of misleadingness are considered:

At OSR, we have an approach to misleadingness that focuses on whether a misuse of data might lead a listener to a wrong conclusion. By “wrong” we don’t mean wrong in some absolute sense of objective truth; rather that, if they received the data presented in a different and more faithful way, they would change their mind. Here’s a really simple example: someone might hear that, of two neighbouring countries, one has a much lower fatality rate when comparing deaths to positive tests for Covid-19. …
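To make the arithmetic behind this pitfall concrete, here is a minimal sketch with entirely hypothetical numbers (not taken from the blog): two countries with identical deaths can report very different deaths-per-positive-test rates simply because one tests more widely.

```python
# Illustrative sketch (hypothetical numbers): why "deaths per positive test"
# can mislead when two countries test at very different rates.

def naive_fatality_rate(deaths, positive_tests):
    """Deaths divided by confirmed (test-positive) cases."""
    return deaths / positive_tests

# Suppose both countries have the same number of deaths, but country B tests
# far more widely and therefore confirms many more mild cases.
country_a = {"deaths": 1_000, "positive_tests": 20_000}   # limited testing
country_b = {"deaths": 1_000, "positive_tests": 100_000}  # widespread testing

print(f"Country A: {naive_fatality_rate(**country_a):.1%}")  # 5.0%
print(f"Country B: {naive_fatality_rate(**country_b):.1%}")  # 1.0%

# A listener comparing the headline rates might conclude that country B handled
# the disease better, when the gap here reflects testing coverage, not severity.
```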

3. When the data fill gaps

Data gaps come in several forms. One gap, highlighted by the interest in real-time economic indicators, is timing. Economic statistics don’t really tell us what’s going on right now. Figures like GDP, trade and inflation tell us about some point in the (admittedly quite) recent past. This is the attraction of the real-time economic indicators, which the Bank of England have drawn on in their decisions during the pandemic. They give policymakers a much more real-time feel by filling in this timing gap.

Other gaps are not about time but about coverage….

4. When the data are available

Perhaps the most important thing for data and policy is to democratise the notion of who the data are for. Data (and policy itself) are not just for decision-making elites. They are a tool to help people make sense of their world and of what is going on in their community, helping frame and guide the choices they make.

For this reason, I often instinctively recoil at narratives of data that focus on the usefulness of data to decision-makers. Of course, we are all decision-makers of one kind or another, and data can help us all. But I always suspect that the “data for decision-makers” narrative harbours an assumption that decisions are made by senior, central, expert people, who make decisions on behalf of society; people who are, in the words of the musical Hamilton, in the room where it happens. It’s this implication that I find uncomfortable.

That’s why, during the pandemic, our work at the Office for Statistics Regulation has repeatedly argued that data should be made available. We have published a statement that any management information referred to by a decision maker should be published clearly and openly. We call this equality of access.

We fight for equality of access. We have secured the publication of lots of data — on positive Covid-19 cases in England’s Local Authorities, on Covid-19 in prisons, on antibody testing in Scotland…. and several others.

Data and policy are a powerful mix. They offer huge benefits to society in terms of defining, understanding and solving problems, and thereby in improving lives. We should be pleased that the coming together of data and policy is being sped up by the pandemic.

But to secure these benefits, we need to focus on four things: quality, misleadingness, gaps, and public availability…(More)”.

Tool for Surveillance or Spotlight on Inequality? Big Data and the Law


Paper by Rebecca A. Johnson and Tanina Rostain: “The rise of big data and machine learning is a polarizing force among those studying inequality and the law. Big data and tools like predictive modeling may amplify inequalities in the law, subjecting vulnerable individuals to enhanced surveillance. But these data and tools may also serve an opposite function, shining a spotlight on inequality and subjecting powerful institutions to enhanced oversight. We begin with a typology of the role of big data in inequality and the law. The typology asks questions—Which type of individual or institutional actor holds the data? What problem is the actor trying to use the data to solve?—that help situate the use of big data within existing scholarship on law and inequality. We then highlight the dual uses of big data and computational methods—data for surveillance and data as a spotlight—in three areas of law: rental housing, child welfare, and opioid prescribing. Our review highlights asymmetries where the lack of data infrastructure to measure basic facts about inequality within the law has impeded the spotlight function….(More)”.

The Principle of Self-Selection in Crowdsourcing Contests – Theory and Evidence


Paper by Nikolaus Franke, Kathrin Reinsberger and Philipp Topic: “Self-selection has been portrayed as one of the core reasons for the stunning success of crowdsourcing. It is widely believed that, among the mass of potential problem solvers, it is particularly those individuals with the best problem-solving capabilities for the problem in question who decide to participate. Extant research assumes that this self-selection effect is beneficial, based on the premise that self-selecting individuals know more about their own capabilities and knowledge than the publisher of the task – which frees the organization from costly and error-prone active search.

However, the effectiveness of this core principle has hardly been analyzed, probably because it is extremely difficult to investigate the characteristics of those individuals who self-select out. In a unique research design in which we overcome these difficulties by combining behavioral data from a real crowdsourcing contest with data from a survey and archival data, we find that self-selection is actually working in the right direction. Those with particularly strong problem-solving capabilities tend to self-select into the contest and those with low capabilities tend to self-select out. However, this self-selection effect is much weaker than assumed and thus much potential is being lost. This suggests that much more attention needs to be paid to the early stages of crowdsourcing contests, and particularly to those hitherto almost completely overlooked individuals who could provide great solutions but self-select out…(More)”.

Public perceptions on data sharing: key insights from the UK and the USA


Paper by Saira Ghafur, Jackie Van Dael, Melanie Leis, Ara Darzi, and Aziz Sheikh: “Data science and artificial intelligence (AI) have the potential to transform the delivery of health care. Health care as a sector, with all of the longitudinal data it holds on patients across their lifetimes, is positioned to take advantage of what data science and AI have to offer. The current COVID-19 pandemic has shown the benefits of sharing data globally to permit a data-driven response through rapid data collection, analysis, modelling, and timely reporting.

Despite its obvious advantages, data sharing is a controversial subject, with researchers and members of the public justifiably concerned about how and why health data are shared. The most common concern is privacy; even when data are (pseudo-)anonymised, there remains a risk that a malicious hacker could, using only a few datapoints, re-identify individuals. For many, it is often unclear whether the risks of data sharing outweigh the benefits.
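As a toy illustration of that re-identification risk (hypothetical data, not drawn from the paper), the sketch below counts how many combinations of just three quasi-identifiers already point to a single record in a supposedly anonymised table.

```python
# Illustrative sketch (toy data): a few "harmless" attributes can act as
# quasi-identifiers and single out individuals in an anonymised dataset.
from collections import Counter

# Hypothetical anonymised records: no names, just postcode district, birth year and sex.
records = [
    ("SW1A", 1985, "F"), ("SW1A", 1985, "F"), ("SW1A", 1990, "M"),
    ("E1",   1985, "F"), ("E1",   1972, "M"), ("E1",   1990, "M"),
]

counts = Counter(records)
unique_combos = [combo for combo, n in counts.items() if n == 1]

# Any combination that occurs only once is, in effect, an identifier: anyone who
# already knows those few facts about a person can re-identify their row.
print(f"{len(unique_combos)} of {len(counts)} attribute combinations match exactly one record")
```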

A series of surveys over recent years indicate that the public holds a range of views about data sharing. Over the past few years, there have been several important data breaches and cyberattacks. This has resulted in patients and the public questioning the safety of their data, including the prospect or risk of their health data being shared with unauthorised third parties.

We surveyed people across the UK and the USA to examine public attitudes towards data sharing, data access, and the use of AI in health care. These two countries were chosen as comparators as both are high-income countries that have made substantial national investments in health information technology (IT), with established track records of using data to support health-care planning, delivery, and research. The UK and USA, however, have sharply contrasting models of health-care delivery, making it interesting to observe whether these differences affect public attitudes.

Willingness to share anonymised personal health information varied across receiving bodies (figure). The more commercial the purpose of the receiving institution (eg, for an insurance or tech company), the less often respondents were willing to share their anonymised personal health information in both the UK and the USA. Older respondents (≥35 years) in both countries were generally less likely to trust any organisation with their anonymised personal health information than younger respondents (<35 years)…

Despite the benefits of big data and technology in health care, our findings suggest that the rapid development of novel technologies has been received with concern. Growing commodification of patient data has increased awareness of the risks involved in data sharing. There is a need for public standards that secure regulation and transparency of data use and sharing and support patient understanding of how data are used and for what purposes….(More)”.

The Shortcomings of Transparency for Democracy


Paper by Michael Schudson: “Transparency” has become a widely recognized, even taken for granted, value in contemporary democracies, but this has been true only since the 1970s. For all of the obvious virtues of transparency for democracy, they have not always been recognized, or they have been recognized only with significant qualifications, as in the U.S. Freedom of Information Act of 1966. This essay catalogs important shortcomings of transparency for democracy, as when it clashes with national security, personal privacy, and the importance of maintaining the capacity of government officials to talk frankly with one another without fear that half-formulated ideas, thoughts, and proposals will become public. And when government information becomes public, that does not make it equally available to all—publicity is not in itself democratic, as public information (as in open legislative committee hearings) is more readily accessed by empowered groups with lobbyists able to attend and monitor the provision of the information. Transparency is an element in democratic government, but it is by no means a perfect emblem of democracy….(More)”.

The Open Innovation in Science research field: a collaborative conceptualisation approach


Paper by Susanne Beck et al: “Openness and collaboration in scientific research are attracting increasing attention from scholars and practitioners alike. However, a common understanding of these phenomena is hindered by disciplinary boundaries and disconnected research streams. We link dispersed knowledge on Open Innovation, Open Science, and related concepts such as Responsible Research and Innovation by proposing a unifying Open Innovation in Science (OIS) Research Framework. This framework captures the antecedents, contingencies, and consequences of open and collaborative practices along the entire process of generating and disseminating scientific insights and translating them into innovation. Moreover, it elucidates individual-, team-, organisation-, field-, and society‐level factors shaping OIS practices. To conceptualise the framework, we employed a collaborative approach involving 47 scholars from multiple disciplines, highlighting both tensions and commonalities between existing approaches. The OIS Research Framework thus serves as a basis for future research, informs policy discussions, and provides guidance to scientists and practitioners….(More)”.

Morphing Intelligence: From IQ Measurement to Artificial Brains


Book by Catherine Malabou: “What is intelligence? The concept crosses and blurs the boundaries between natural and artificial, bridging the human brain and the cybernetic world of AI. In this book, the acclaimed philosopher Catherine Malabou ventures a new approach that emphasizes the intertwined, networked relationships among the biological, the technological, and the symbolic.

Malabou traces the modern metamorphoses of intelligence, seeking to understand how neurobiological and neurotechnological advances have transformed our view. She considers three crucial developments: the notion of intelligence as an empirical, genetically based quality measurable by standardized tests; the shift to the epigenetic paradigm, with its emphasis on neural plasticity; and the dawn of artificial intelligence, with its potential to simulate, replicate, and ultimately surpass the workings of the brain. Malabou concludes that a dialogue between human and cybernetic intelligence offers the best if not the only means to build a democratic future. A strikingly original exploration of our changing notions of intelligence and the human and their far-reaching philosophical and political implications, Morphing Intelligence is an essential analysis of the porous border between symbolic and biological life at a time when once-clear distinctions between mind and machine have become uncertain….(More)”.

Calling Bullshit: The Art of Scepticism in a Data-Driven World


Book by Carl Bergstrom and Jevin West: “Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit — and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.

We’re sick of it. It’s time to do something, and as educators, one constructive thing we know how to do is to teach people. So, the aim of this course is to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combating it with effective analysis and argument.

What do we mean, exactly, by bullshit and calling bullshit? As a first approximation:

Bullshit involves language, statistical figures, data graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence.

Calling bullshit is a performative utterance, a speech act in which one publicly repudiates something objectionable. The scope of targets is broader than bullshit alone. You can call bullshit on bullshit, but you can also call bullshit on lies, treachery, trickery, or injustice.

In this course we will teach you how to spot the former and effectively perform the latter.

While bullshit may reach its apogee in the political domain, this is not a course on political bullshit. Instead, we will focus on bullshit that comes clad in the trappings of scholarly discourse. Traditionally, such highbrow nonsense has come couched in big words and fancy rhetoric, but more and more we see it presented instead in the guise of big data and fancy algorithms — and these quantitative, statistical, and computational forms of bullshit are those that we will be addressing in the present course.

Of course an advertisement is trying to sell you something, but do you know whether the TED talk you watched last night is also bullshit — and if so, can you explain why? Can you see the problem with the latest New York Times or Washington Post article fawning over some startup’s big data analytics? Can you tell when a clinical trial reported in the New England Journal or JAMA is trustworthy, and when it is just a veiled press release for some big pharma company?…(More)”.

20’s the limit: How to encourage speed reductions


Report by The Wales Centre for Public Policy: “This report has been prepared to support the Welsh Government’s plan to introduce a 20mph national default speed limit in 2022. It aims to address two main questions: 1) What specific behavioural interventions might be implemented to promote driver compliance with 20mph speed limits in residential areas; and 2) are there particular demographics, community characteristics or other features that should form the basis of a segmentation approach?

The reasons for speeding are complex, but many behaviour change techniques have been successfully applied to road safety, including some which use behavioural insights or “nudges”. Drivers can be segmented into three types: defiers (a small minority), conformers (the majority) and champions (a minority). Conformers are law-abiding citizens who respect social norms – getting this group to comply can achieve a tipping point. Other sectors have shown that providing information is only effective as part of a wider package of measures and that people are most open to change at times of disruption or learning (e.g. learner drivers)…(More)”.