Paper by Ben Goldacre and Seb Bacon: “Open data is information made freely available to third parties in structured formats without restrictive licensing conditions, permitting commercial and noncommercial organizations to innovate. In the context of National Health Service (NHS) data, this is intended to improve patient outcomes and efficiency. EBM DataLab is a research group with a focus on online tools which turn our research findings into actionable monthly outputs. We regularly import and process more than 15 different NHS open datasets to deliver OpenPrescribing.net, one of the most high-impact use cases for NHS England’s open data, with over 15,000 unique users each month. In this paper, we have described the many breaches of best practices around NHS open data that we have encountered. Examples include datasets that repeatedly change location without warning or forwarding; datasets that are needlessly behind a “CAPTCHA” and so cannot be automatically downloaded; longitudinal datasets that change their structure without warning or documentation; near-duplicate datasets with unexplained differences; datasets that are impossible to locate, and thus may or may not exist; poor or absent documentation; and withholding of data for dubious reasons. We propose new open ways of working that will support better analytics for all users of the NHS. These include better curation, better documentation, and systems for better dialogue with technical teams….(More)”.
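The failure modes catalogued above are the kind that force defensive engineering onto any team importing open data on a schedule. As a purely illustrative sketch (assuming Python with requests and pandas; the URL and column names are hypothetical placeholders, not OpenPrescribing's actual pipeline), an importer can at least detect a moved or CAPTCHA-gated dataset and silent schema drift before they corrupt monthly outputs:

```python
# Hedged sketch of a defensive open-data import step. The dataset URL and
# expected columns are hypothetical placeholders for illustration only.
import io

import pandas as pd
import requests

DATASET_URL = "https://example.nhs.uk/open-data/prescribing/latest.csv"  # hypothetical
EXPECTED_COLUMNS = {"practice_code", "bnf_code", "items", "quantity"}  # hypothetical


def fetch_dataset(url: str) -> pd.DataFrame:
    resp = requests.get(url, timeout=60, allow_redirects=True)
    resp.raise_for_status()  # a dataset that has moved often surfaces as a 404
    # A CAPTCHA challenge or relocated landing page arrives as HTML, not CSV.
    if "text/html" in resp.headers.get("Content-Type", ""):
        raise RuntimeError(f"Expected CSV but received HTML from {url}")
    df = pd.read_csv(io.StringIO(resp.text))
    # Guard against a longitudinal dataset silently changing its structure.
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise RuntimeError(f"Schema drift, columns missing: {sorted(missing)}")
    return df
```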
Sandra Laville in The Guardian: “Ordinary people from across the UK – potentially including climate deniers – will take part in the first ever citizens’ climate assembly this weekend.
Mirroring the model adopted in France by Emmanuel Macron, 110 people from all walks of life will begin deliberations on Saturday to come up with a plan to tackle global heating and meet the government’s target of net-zero emissions by 2050.
The assembly was selected to be a representative sample of the population after a mailout to 30,000 people chosen at random. About 2,000 people responded saying they wanted to be considered for the assembly, and the 110 members were picked by computer.
They come from all age brackets and their selection reflects a 2019 Ipsos Mori poll of how concerned the general population is by climate change, where responses ranged from not at all to very concerned. Of the assembly members, three people are not at all concerned, 16 not very concerned, 36 fairly concerned, 54 very concerned, and one did not know, organisers said.
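As an illustration of what being "picked by computer" against quotas can mean in practice, the draw amounts to stratified random sampling that fills each quota from the respondent pool. The following is a minimal sketch only, assuming a simple list of volunteers, and not the organisers' actual selection software:

```python
# Illustrative stratified draw matching the reported quotas
# (3 + 16 + 36 + 54 + 1 = 110). Data shapes are assumed.
import random

QUOTAS = {
    "not at all concerned": 3,
    "not very concerned": 16,
    "fairly concerned": 36,
    "very concerned": 54,
    "don't know": 1,
}


def draw_assembly(volunteers, seed=None):
    """volunteers: (person_id, concern_level) pairs from the ~2,000 respondents."""
    rng = random.Random(seed)
    members = []
    for level, quota in QUOTAS.items():
        pool = [pid for pid, concern in volunteers if concern == level]
        members.extend(rng.sample(pool, quota))  # uniform draw within each stratum
    return members  # 110 members in total
```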
The selection process meant those chosen could include climate deniers or sceptics, according to Sarah Allan, the head of engagement at Involve, which is running the assembly along with the Sortition Foundation and the e-democracy project mySociety.
“It is really important that it is representative of the UK population,” said Allan. “Those people, just because they’re sceptical of climate change, they’re going to be affected by the steps the government takes to get to net zero by 2050 too and they shouldn’t have their voice denied in that.”
The UK climate assembly differs from the French model in that it was commissioned by six select committees rather than by the prime minister. The assembly’s conclusions, to be published in a report in the spring, will be considered by the select committees, but there is no guarantee that any of its proposals will be taken up by the government.
Allan said it was rare for members of a citizens’ assembly to get locked into dissent. She pointed to the success of the Irish citizens’ assembly in 2016, which helped break the deadlock in the abortion debate. “This climate assembly is going to come up with recommendations that are going to be really invaluable in highlighting public preferences,” she said….(More)”.
Report by the Alliance for Useful Evidence: “This inventory is about how you can use experiments to solve public and social problems. It aims to provide a framework for thinking about the choices available to a government, funder or delivery organisation that wants to experiment more effectively. We aim to simplify jargon and do some myth-busting on common misperceptions.
There are other guides on specific areas of experimentation – such as on randomised controlled trials – including many specialist technical textbooks. This is not a technical manual or guide about how to run experiments. Rather, this inventory is useful for anybody wanting a jargon-free overview of the types and uses of experiments. It is unique in its breadth – covering the whole landscape of social and policy experimentation, including prototyping, rapid cycle testing, quasi-experimental designs, and a range of different types of randomised trials. Experimentation can be a confusing landscape – and there are competing definitions about what constitutes an experiment among researchers, innovators and evaluation practitioners. We take a pragmatic approach, including different designs that are useful for public problem-solving, under our experimental umbrella. We cover ways of experimenting that are both qualitative and quantitative, and highlight what we can learn from different approaches….(More)”.
Article by Min Reuchamps: “In December 2019, the parliament of the Region of Brussels in Belgium amended its internal regulations to allow the formation of ‘deliberative committees’ composed of a mixture of members of the Regional Parliament and randomly selected citizens. This initiative follows innovative experiences earlier in 2019 in establishing permanent forums of deliberative democracy in the German-speaking Community of Belgium, known as Ostbelgien, and in the city of Madrid. Ostbelgien is now running its first cycle of deliberations, whereas the Madrid forum proved short-lived, cancelled after two meetings by the city’s new governing coalition.
The experiment of establishing permanent forums for direct citizen involvement is an advance on earlier deliberative processes, which were one-off, non-permanent exercises. The relatively large size of the Brussels Region, with over 1,200,000 inhabitants, means the lessons learned will be key to understanding the opportunities and risks of ‘deliberative committees’ and their potential scalability….
Under the new rules, the Regional Parliament can set up a parliamentary committee composed of 15 parliamentarians and 45 citizens (12 and 36 respectively in the Cocof, the French Community Commission) to draft recommendations on a given issue. Any Brussels inhabitant aged 16 or over has the chance to have a direct say in matters falling under the jurisdiction of the Brussels Regional Parliament and the Cocof. The citizen representatives will be drawn by lot in two steps (sketched in code after the list below):
- A first draw among the whole population, so that every inhabitant has the same chance to be invited via a formal invitation letter from the Parliament;
- A second draw among all the persons who have responded positively to the invitation by means of a sampling method following criteria to ensure a diverse and representative selection, at least in terms of gender, age, official languages of the Brussels-Capital Region, geographical distribution and level of education.
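The two-step draw might look as follows in code — a simplified, hypothetical sketch rather than the Parliament’s actual procedure; in particular, collapsing the five criteria into a single combined stratum key is an assumption made for brevity:

```python
# Hedged sketch of the two-step lottery described above. Function names,
# data shapes and the single combined stratum key are simplifying assumptions.
import random
from collections import defaultdict


def step_one_invitations(population_ids, n_invites, rng):
    # Step 1: a uniform draw, so every inhabitant has the same chance
    # of receiving a formal invitation letter from the Parliament.
    return rng.sample(population_ids, n_invites)


def step_two_stratified_draw(respondents, quotas, rng):
    # Step 2: among positive responses, draw per stratum so the selected
    # citizens reflect gender, age, language, geography and education.
    # respondents: (person_id, stratum) pairs; quotas: stratum -> seats.
    by_stratum = defaultdict(list)
    for person_id, stratum in respondents:
        by_stratum[stratum].append(person_id)
    selected = []
    for stratum, seats in quotas.items():
        selected.extend(rng.sample(by_stratum[stratum], seats))
    return selected
```

In practice the second draw must satisfy several criteria simultaneously, which calls for more sophisticated quota algorithms than this single-key simplification.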
The participating parliamentarians will be the members of the standing parliamentary committee that covers the topic under deliberation. In the regional parliament, each standing committee is made up of 15 members (including both Dutch- and French-speakers); in the Cocof Parliament, each standing committee is made up of 12 members (French-speakers only)….(More)”.
Denis Campbell at the Guardian: “Social media firms such as Facebook and Instagram should be forced to hand over data about who their users are and why they use the sites to reduce suicide among children and young people, psychiatrists have said.
The call from the Royal College of Psychiatrists comes as ministers finalise plans to crack down on issues caused by people viewing unsavoury material and messages online.
The college, which represents the UK’s 18,000 psychiatrists, wants the government to make social media platforms hand over the data to academics so that they can study what sort of content users are viewing.
“We will never understand the risks and benefits of social media use unless the likes of Twitter, Facebook and Instagram share their data with researchers,” said Dr Bernadka Dubicka, chair of the college’s child and adolescent mental health faculty. “Their research will help shine a light on how young people are interacting with social media, not just how much time they spend online.”
Data passed to academics would show the type of material viewed and how long users were spending on such platforms but would be anonymous, the college said.
The government plans to set up a new online safety regulator, and the college says it should be given the power to compel firms to hand over data. It is also calling for the forthcoming 2% “turnover tax” on social media companies’ income to be extended to cover their international turnover, not just their UK turnover.
“Self-regulation is not working. It is time for government to step up and take decisive action to hold social media companies to account for escalating harmful content to vulnerable children and young people,” said Dubicka.
The college’s demands come amid growing concern that young people are being harmed by material that, for example, encourages self-harm, suicide and eating disorders. They are included in a new position statement on technology use and the mental health of children and young people.
NHS England challenged firms to hand over the sort of information that the college is suggesting. Claire Murdoch, its national director for mental health, said that action was needed “to rein in potentially misleading or harmful online content and behaviours”.
She said: “If these tech giants really want to be a force for good, put a premium on users’ wellbeing and take their responsibilities seriously, then they should do all they can to help researchers better understand how they operate and the risks posed. Until then, they cannot confidently say whether the good outweighs the bad.”
The demands have also been backed by Ian Russell, who has become a campaigner against social media harm since his 14-year-old daughter Molly killed herself in November 2017….(More)”.
Hetan Shah at Nature: “Without human insights, data and the hard sciences will not meet the challenges of the next decade…
I worry that the call prioritized science and technology over the humanities and social sciences. Governments must make sure they also tap into that expertise, or they will fail to tackle the challenges of this decade.
For example, we cannot improve global health if we take only a narrow medical view. Epidemics are social as well as biological phenomena. Anthropologists such as Melissa Leach at the University of Sussex in Brighton, UK, played an important part in curbing the West African Ebola epidemic with proposals to substitute risky burial rituals with safer ones, rather than trying to eliminate such rituals altogether.
Treatments for mental health have made insufficient progress. Advances will depend, in part, on a better understanding of how social context influences whether treatment succeeds. Similar arguments apply to the problem of antimicrobial resistance and antibiotic overuse.
Environmental issues are not just technical challenges that can be solved with a new invention. To tackle climate change we will need insight from psychology and sociology. Scientific and technological innovations are necessary, but enabling them to make an impact requires an understanding of how people adapt and change their behaviour. That will probably require new narratives — the purview of rhetoric, literature, philosophy and even theology.
Poverty and inequality call even more obviously for expertise beyond science and maths. The UK Economic and Social Research Council has recognized that poor productivity in the country is a big problem, and is investing up to £32.4 million (US$42 million) in a new Productivity Institute in an effort to understand the causes and potential remedies.
Policy that touches on national and geographical identity also needs scholarly input. What is the rise of ‘Englishness’? How do we live together in a community of diverse races and religions? How is migration understood and experienced? These intangibles have real-world consequences, as demonstrated by the Brexit vote and ongoing discussions about whether the United Kingdom has a future as a united kingdom. It will take the work of historians, social psychologists and political scientists to help shed light on these questions. I could go on: fighting against misinformation; devising ethical frameworks for artificial intelligence. These are issues that cannot be tackled with better science alone….(More)”.
Policy Lab (UK): “….Compared with quantitative data, ethnography creates different forms of data – what anthropologists call ‘thick data’. Complex social problems benefit from insights beyond linear, standardised evidence and this is where thick data shows its worth. In Policy Lab we have generated ethnographic films and analysis to sit alongside quantitative data, helping policy-makers to build a rich picture of current circumstances.
On the other hand, much has been written about big data – data generated through digital interactions – whether held in traditional ledgers and spreadsheets or produced by emerging uses of artificial intelligence and the internet of things. The ever-growing zettabytes of data can reveal a lot, providing a (sometimes real-time) digital trail that captures and aggregates our individual choices, preferences, behaviours and actions.
Much hyped, this quantitative data has great potential to inform future policy, but must be handled ethically, and also requires careful preparation and analysis to avoid biases and false assumptions creeping in. Three issues we have seen in our projects relate to:
- partial data, for example not having data on people who are not digitally active, biasing the sample
- the time-consuming challenge of cleaning up data, in a political context where time is often of the essence
- the lack of data interoperability, where different localities/organisations capture different metrics
Through a number of Policy Lab projects we have used big data to see the big picture before using thick data to zoom in on the detail of people’s lived experience. Whereas big data can give us cumulative evidence at a macro, often systemic level, thick data provides insights at an individual or group level. We have found the blending of ‘big data’ and ‘thick data’ to be the sweet spot.
Policy Lab’s work develops data and insights into ideas for potential policy intervention, which we can start to test as prototypes with real people. These operate at the ‘meso’ level, informed by both the thick data from individual experiences and the big data at a population or national level. We have written a lot about prototyping for policy and are continuing to explore how you prototype a policy compared with, say, a digital service….(More)”.
Essay by Richard Bellamy: “This essay explores how far democracy is compatible with lies and deception, and whether it encourages or discourages their use by politicians. Neo-Kantian arguments, such as Newey’s, that lies and deception undermine individual autonomy and the possibility of consent go too far, given that no democratic process can be regarded as a plausible mechanism for achieving collective consent to state policies. However, lies and deception can be regarded as incompatible with a more modest account of democracy as a system of public equality among political equals.
On this view, the problem with lies and deception derives from their being instruments of manipulation and domination. Both can be distinguished from ‘spin’, and a working democracy is capable of uncovering them, which incentivises politicians to be truthful. Nevertheless, while lies and deception can be found out, bullshit and post-truth disregard and subvert truth respectively, and as such prove more pernicious because they admit of no standard against which they might be challenged….(More)”.
Paper by Nikita Aggarwal et al: “Recent advances in machine learning (ML) and Big Data techniques have facilitated the development of more sophisticated, automated consumer credit scoring models — a trend referred to as ‘algorithmic credit scoring’ in recognition of the increasing reliance on computer (particularly ML) algorithms for credit scoring. This chapter, which forms part of the 2018 collection of short essays ‘Autonomous Systems and the Law’, examines the rise of algorithmic credit scoring, and considers its implications for the regulation of consumer creditworthiness assessment and consumer credit markets more broadly.
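For readers unfamiliar with the mechanics, ‘algorithmic credit scoring’ boils down to training a statistical model on borrower features to predict default, then using the predicted probability as the score. The following is a hypothetical sketch on assumed synthetic data — it is not the chapter’s, nor any lender’s, actual model:

```python
# Hypothetical sketch of ML-based credit scoring on synthetic data.
# Feature semantics and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Assumed features, e.g. income, payment history, length of credit history.
X = rng.normal(size=(n, 3))
# Synthetic default labels (1 = default), loosely tied to the features.
y = (X @ np.array([-0.8, -0.5, -0.3]) + rng.normal(size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
# The 'credit score' is the predicted default probability, which a lender
# might threshold when deciding whether, and at what price, to extend credit.
default_prob = model.predict_proba(X_test)[:, 1]
```

A richer feature set — including the non-traditional Big Data signals the chapter discusses — is what distinguishes algorithmic scoring from traditional scorecards, and is also where the fairness risks described below arise.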
The chapter argues that algorithmic credit scoring, and the Big Data and ML technologies underlying it, offer both benefits and risks for consumer credit markets. On the one hand, it could increase allocative efficiency and distributional fairness in these markets, by widening access to, and lowering the cost of, credit, particularly for ‘thin-file’ and ‘no-file’ consumers. On the other hand, algorithmic credit scoring could undermine distributional fairness and efficiency, by perpetuating discrimination in lending against certain groups and by enabling the more effective exploitation of borrowers.
The chapter considers how consumer financial regulation should respond to these risks, focusing on the UK/EU regulatory framework. As a general matter, it argues that the broadly principles- and conduct-based approach of UK consumer credit regulation provides the flexibility necessary for regulators and market participants to respond dynamically to these risks. However, this approach could be enhanced through the introduction of more robust product oversight and governance requirements for firms in relation to their use of ML systems and processes. Supervisory authorities could also themselves make greater use of ML and Big Data techniques in order to strengthen the supervision of consumer credit firms.
Finally, the chapter notes that cross-sectoral data protection regulation, recently updated in the EU under the GDPR, offers an important avenue to mitigate risks to consumers arising from the use of their personal data. However, further guidance is needed on the application and scope of this regime in the consumer financial context….(More)”.
Paper by Charlotte Ducuing: “The article discusses the concept of infrastructure in the digital environment through a study of three data sharing legal regimes: the Public Sector Information Directive (PSI Directive), the discussions on in-vehicle data governance, and the recently adopted data sharing regime in the Electricity Directive.
While aiming to contribute to the scholarship on data governance, the article deliberately focuses on network industries. Characterised by the existence of physical infrastructure, they have a special relationship to digitisation and ‘platformisation’ and are exposed to specific risks. Adopting an explanatory methodology, the article shows that these regimes rest on two closely related but distinct sources of inspiration, which remain intertwined and unclear. By targeting entities deemed ‘monopolist’ with regard to the data they create and hold, data sharing obligations are inspired by competition law and especially the essential facilities doctrine. On the other hand, beneficiaries appear to include both operators in related markets needing data to conduct their business (except for the PSI Directive) and third parties at large, to foster innovation. The latter rationale illustrates what is called here a purposive view of data as infrastructure. The underlying understanding of ‘raw’ data (management) as infrastructure for all to use may run counter to the ability of the regulated entities to get a fair remuneration for ‘their’ data.
Finally, the article pleads for more granularity when mandating data sharing obligations depending upon the purpose. Shifting away from a ‘one-size-fits-all’ solution, the regulation of data could also extend to the ensuing context-specific data governance regime, subject to further research…(More)”.