Feedback Loops in Open Data Ecosystems


Paper by Daniel Rudmark and Magnus Andersson: “Public agencies are increasingly publishing open data to increase transparency and fuel data-driven innovation. For these organizations, maintaining sufficient data quality is key to continuous re-use but also heavily dependent on feedback loops being initiated between data publishers and users. This paper reports from a longitudinal engagement with Scandinavian transportation agencies, where such feedback loops have been successfully established. Based on these experiences, we propose four distinct types of data feedback loops in which both data publishers and re-users play critical roles…(More)”.

Putting data at the heart of policymaking will accelerate London’s recovery


Mel Hobson at Computer Weekly: “…London’s mayor, Sadiq Khan, knows how important this is. His re-election manifesto committed to rebuilding the London Datastore, currently home to over 700 freely available datasets, as the central register linking data across our city. That in turn will help analysts, researchers and policymakers understand our city and develop new ideas and solutions.

To help take the next step and create a data ecosystem that can improve millions of Londoners’ lives, businesses across our capital are committing their expertise and insights.

At London First, we have launched the London Data Charter, expertly put together by Pinsent Masons, which sets out the guiding principles for private and public sector data collaborations that are key to creating this ecosystem. These include a focus on protecting the privacy and security of data, promoting trust, and sharing learnings with others – creating scalable solutions to meet the capital’s challenges….(More)”.

Secondary use of health data in Europe


Report by Mark Boyd, Dr Milly Zimeta, Dr Jeni Tennison and Mahad Alassow: “Open and trusted health data systems can help Europe respond to the many urgent challenges facing its society and economy today. The global pandemic has already altered many of our societal and economic systems, and data has played a key role in enabling cross-border and cross-sector collaboration in public health responses.

Even before the pandemic, there was an urgent need to optimise healthcare systems and manage limited resources more effectively, to meet the needs of growing, and often ageing, populations. Now, there is a heightened need to develop early-diagnostic and health-surveillance systems, and more willingness to adopt digital healthcare solutions…

By reusing health data in different ways, we can increase the value of this data and help to enable these improvements. Clinical data, such as incidences of healthcare and clinical trials data, can be combined with data collected from other sources, such as sickness and insurance claims records, and from devices and wearable technologies. This data can then be anonymised and aggregated to generate new insights and optimise population health, improve patients’ health and experiences, create more efficient healthcare systems, and foster innovation.
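To make the anonymise-and-aggregate step more concrete, here is a minimal Python sketch (purely illustrative and not drawn from the report) in which direct identifiers are dropped from a few hypothetical patient records, ages are bucketed, and only group-level averages above a minimum group size are released – a common safeguard against re-identification. All names and values are invented.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records combining clinical and claims data (illustrative only).
records = [
    {"name": "A. Jensen", "age": 67, "region": "North", "treatment": "X", "recovery_days": 12},
    {"name": "B. Silva",  "age": 64, "region": "North", "treatment": "X", "recovery_days": 15},
    {"name": "C. Novak",  "age": 69, "region": "North", "treatment": "X", "recovery_days": 11},
    {"name": "D. Khan",   "age": 34, "region": "South", "treatment": "Y", "recovery_days": 7},
]

MIN_GROUP_SIZE = 3  # suppress groups too small to publish safely

def aggregate(records):
    # Drop direct identifiers (names) and bucket ages before grouping.
    groups = defaultdict(list)
    for r in records:
        key = (r["region"], r["treatment"], f"{(r['age'] // 10) * 10}s")
        groups[key].append(r["recovery_days"])
    # Release only counts and averages for sufficiently large groups.
    return {k: {"n": len(v), "mean_recovery_days": mean(v)}
            for k, v in groups.items() if len(v) >= MIN_GROUP_SIZE}

print(aggregate(records))  # the single-patient group in the South is suppressed
```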

This secondary use of health data can enable a wide range of benefits across the entire healthcare system. These include opportunities to optimise services, reduce health inequalities by better allocating resources, and enhance personalised healthcare – for example, by comparing treatments for people with similar characteristics. It can also help encourage innovation by extending research data to assess whether new therapies would work for a broader population….(More)”.

Greece used AI to curb COVID: what other nations can learn


Editorial at Nature: “A few months into the COVID-19 pandemic, operations researcher Kimon Drakopoulos e-mailed both the Greek prime minister and the head of the country’s COVID-19 scientific task force to ask if they needed any extra advice.

Drakopoulos works in data science at the University of Southern California in Los Angeles, and is originally from Greece. To his surprise, he received a reply from Prime Minister Kyriakos Mitsotakis within hours. The European Union was asking member states, many of which had implemented widespread lockdowns in March, to allow non-essential travel to recommence from July 2020, and the Greek government needed help in deciding when and how to reopen borders.

Greece, like many other countries, lacked the capacity to test all travellers, particularly those not displaying symptoms. One option was to test a sample of visitors, but Greece opted to trial an approach rooted in artificial intelligence (AI).

Between August and November 2020 — with input from Drakopoulos and his colleagues — the authorities launched a system that uses a machine-learning algorithm to determine which travellers entering the country should be tested for COVID-19. The researchers found machine learning to be more effective at identifying asymptomatic people than was random testing or testing based on a traveller’s country of origin. According to the researchers’ analysis, during the peak tourist season, the system detected two to four times more infected travellers than did random testing.

The machine-learning system, which is among the first of its kind, is called Eva and is described in Nature this week (H. Bastani et al. Nature https://doi.org/10.1038/s41586-021-04014-z; 2021). It’s an example of how data analysis can contribute to effective COVID-19 policies. But it also presents challenges, from ensuring that individuals’ privacy is protected to the need to independently verify its accuracy. Moreover, Eva is a reminder of why proposals for a pandemic treaty (see Nature 594, 8; 2021) must consider rules and protocols on the proper use of AI and big data. These need to be drawn up in advance so that such analyses can be used quickly and safely in an emergency.

In many countries, travellers are chosen for COVID-19 testing at random or according to risk categories. For example, a person coming from a region with a high rate of infections might be prioritized for testing over someone travelling from a region with a lower rate.

By contrast, Eva collected not only travel history, but also demographic data such as age and sex from the passenger information forms required for entry to Greece. It then matched those characteristics with data from previously tested passengers and used the results to estimate an individual’s risk of infection. COVID-19 tests were targeted to travellers calculated to be at highest risk. The algorithm also issued tests to allow it to fill data gaps, ensuring that it remained up to date as the situation unfolded.
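As a rough illustration of the targeting logic described above – and not the actual Eva algorithm, which Bastani et al. describe in the paper – the following Python sketch estimates a positivity rate per traveller “type” (origin, age band, sex) from past test results, spends most of a limited testing budget on the arrivals with the highest estimated risk, and reserves a share of tests for rarely observed types so that the estimates stay up to date. All types, rates and numbers are hypothetical.

```python
import random
from collections import defaultdict

def estimate_risk(history):
    """history: list of (traveller_type, tested_positive) pairs from past tests."""
    positives, totals = defaultdict(int), defaultdict(int)
    for traveller_type, positive in history:
        totals[traveller_type] += 1
        positives[traveller_type] += int(positive)
    # Smoothed positivity estimate per type (add-one smoothing for sparse types).
    risk = {t: (positives[t] + 1) / (totals[t] + 2) for t in totals}
    return risk, totals

def allocate_tests(arrivals, history, budget, explore_share=0.2):
    """arrivals: list of (passenger_id, traveller_type); returns ids to test."""
    risk, totals = estimate_risk(history)
    explore_budget = int(budget * explore_share)
    exploit_budget = budget - explore_budget

    # Exploit: test passengers whose type has the highest estimated risk.
    by_risk = sorted(arrivals, key=lambda p: risk.get(p[1], 0.5), reverse=True)
    selected = [pid for pid, _ in by_risk[:exploit_budget]]

    # Explore: test passengers from rarely observed types to fill data gaps.
    remaining = by_risk[exploit_budget:]
    by_scarcity = sorted(remaining, key=lambda p: totals.get(p[1], 0))
    selected += [pid for pid, _ in by_scarcity[:explore_budget]]
    return selected

# Example usage with made-up data:
history = [(("UK", "20-29", "M"), True), (("UK", "20-29", "M"), False),
           (("DE", "60-69", "F"), True), (("GR", "30-39", "F"), False)]
arrivals = [(i, random.choice([("UK", "20-29", "M"), ("DE", "60-69", "F"),
                               ("GR", "30-39", "F"), ("SE", "40-49", "M")]))
            for i in range(40)]
print(allocate_tests(arrivals, history, budget=10))
```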

During the pandemic, there has been no shortage of ideas on how to deploy big data and AI to improve public health or assess the pandemic’s economic impact. However, relatively few of these ideas have made it into practice. This is partly because companies and governments that hold relevant data — such as mobile-phone records or details of financial transactions — need agreed systems to be in place before they can share the data with researchers. It’s also not clear how consent can be obtained to use such personal data, or how to ensure that these data are stored safely and securely…(More)”.

Who takes part in Citizen Science projects & why?


CS Track: “Citizen Science in Europe, as elsewhere, continues to manifest itself in a variety of different ways. While it attracts interest across multiple sectors of society, its definition remains unclear. The first CS Track White Paper on Themes, objectives and participants of citizen science activities has just been published and, along with the initial results of the first large-scale survey into participation in citizen science, provides an important overview of who participates in citizen science projects and what motivates them. This short report focuses on one aspect that emerges in this white paper.

Citizen Science Participants – who are they? 

Participants, and who they are, have a significant impact on the objectives and outcomes of citizen science projects. However, existing information on the demographics of participants in citizen science projects is very limited, and most studies have focused on a single project or programme. Furthermore, certain groups, like young people, are underrepresented in the available data.

What our research team has gathered from the literature and the initial results of the CS Track large-scale survey is the following:

  • Well-educated, affluent participants outnumber less affluent participants.
  • More men than women take part in many of the programmes that have been analysed.
  • Citizen scientists seem to be white, middle-aged, and scientifically literate or generally interested in science or scientific topics.
  • Scientists, academics, teachers, science students and people who have a passion for the outdoors are among the groups of people most likely to take part in citizen science.
  • In agricultural, biological and environmental science-based programmes, participants are often scientists themselves, science teachers or students, conservation group members, backpackers or hikers or other outdoor enthusiasts – in other words people who care about nature.
  • Community and youth citizen science projects are underrepresented in the available data….(More)”.

Less complex language, more participation: how consultation documents shape participatory patterns


Paper by Simon Fink, Eva Ruffing, Tobias Burst & Sara Katharina Chinnow: “Consultations are thought to increase the legitimacy of policies. However, this reasoning only holds if stakeholders really participate in the consultations. Current scholarship offers three explanations for participation patterns: Institutional rules, policy characteristics, and interest group resources determine participation. This article argues that, additionally, the linguistic complexity of consultation documents influences participation. Complex language deters potential participants, because it raises the costs of participation. A quantitative analysis of the German consultation of electricity grids lends credibility to the argument: If the description of a power line is simplified between two consultation rounds, the number of contributions mentioning that power line increases. This result contributes to our understanding of unequal participation patterns and the institutional design of participatory procedures. If we think that legitimacy is enhanced by broad participation, then the language of the documents matters….(More)”.
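The paper does not specify how linguistic complexity is measured, but as a hedged illustration of how document complexity can be quantified, the Python sketch below computes the LIX readability index (average sentence length plus the percentage of words longer than six characters), an index often applied to German-language texts. The example sentences are invented and the choice of LIX is an assumption, not the authors’ method.

```python
import re

def lix(text: str) -> float:
    """LIX readability index: words/sentences + 100 * long_words/words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    long_words = [w for w in words if len(w) > 6]
    if not sentences or not words:
        return 0.0
    return len(words) / len(sentences) + 100 * len(long_words) / len(words)

# Invented examples: a simplified vs. a bureaucratic description of a power line.
simple = "Die Leitung verläuft von A nach B. Sie ist 30 Kilometer lang."
complex_ = ("Die Höchstspannungsfreileitung verläuft unter Berücksichtigung "
            "raumordnerischer Restriktionen zwischen den Netzverknüpfungspunkten.")
print(lix(simple), lix(complex_))  # the simplified description scores lower
```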

The search engine of 1896


The Generalist Academy: “In 1896 Paul Otlet set up a bibliographic query service by mail: a 19th century search engine….The end of the 19th century was awash with the written word: books, monographs, and publications of all kinds. It was fiendishly difficult to find what you wanted in that mess. Bibliographies – compilations of references on a specific subject – were the maps to this vast informational territory. But they were expensive and time-consuming to compile.

Paul Otlet had a passion for information. More precisely, he had a passion for organising information. He and Henri La Fontaine made bibliographies on many subjects – and then turned their efforts towards creating something better. A master bibliography. A bibliography to rule them all, nothing less than a complete record of everything that had ever been published on every topic. This was their plan: the grandly named Universal Bibliographic Repertory.

This ambitious endeavour listed sources for every topic that its creators could imagine. The references were meticulously recorded on index cards that were filed in a massive series of drawers like the ones pictured above. The whole thing was arranged according to their Universal Decimal Classification, and it was enormous. In 1895 there were four hundred thousand entries. At its peak in 1934, there were nearly sixteen million.

How could you access such a mega-bibliography? Well, Otlet and La Fontaine set up a mail service. People sent in queries and received a summary of publications relating to that topic. Curious about the native religions of Sumatra? Want to explore the 19th century decipherment of Akkadian cuneiform? Send a request to the Universal Bibliographic Repertory, get a tidy list of the references you need. It was nothing less than a manual search engine, one hundred and twenty-five years ago.

[Image: Encyclopedia Universalis, by Paul Otlet, public domain, via Wikimedia Commons]

Otlet had many more ambitions: a world encyclopaedia of knowledge, contraptions to easily access every publication in the world (he was an early microfiche pioneer), and a whole city to serve as the bright centre of global intellect. These ambitions were mostly unrealised, due to lack of funds and the intervention of war. But today Otlet is recognised as an important figure in the history of information science…(More)”.

Are citizen juries and assemblies on climate change driving democratic climate policymaking? An exploration of two case studies in the UK


Paper by Rebecca Wells, Candice Howarth & Lina I. Brand-Correa: “In light of increasing pressure to deliver climate action targets and the growing role of citizens in raising the importance of the issue, deliberative democratic processes (e.g. citizen juries and citizen assemblies) on climate change are increasingly being used to provide a voice to citizens in climate change decision-making. Through a comparative case study of two processes that ran in the UK in 2019 (the Leeds Climate Change Citizens’ Jury and the Oxford Citizens’ Assembly on Climate Change), this paper investigates how far citizen assemblies and juries are increasing citizen engagement on climate change and creating more citizen-centred climate policymaking. Interviews were conducted with policymakers, councillors, professional facilitators and others involved in running these processes to assess the motivations for conducting them, their structure, and the impact and influence they had. The findings suggest the impact of these processes is not uniform: they have an indirect impact on policymaking by creating momentum around climate action and supporting the introduction of pre-planned or pre-existing policies, rather than a direct impact by truly being citizen-centred policymaking processes or conducive to new climate policy. We conclude with reflections on how these processes give elected representatives a public mandate on climate change and help to identify more nuanced and in-depth public opinions in a fair and informed way, yet can be challenging to embed in wider democratic processes….(More)”.

Expertise, ‘Publics’ and the Construction of Government Policy


Introduction to Special Issue of Discover Society about the role of expertise and professional knowledge in democracy by John Holmwood: “In the UK, the vexed nature of the issue was, perhaps, best illustrated by (then Justice Secretary) Michael Gove’s comment during the Brexit campaign that he thought, “the people of this country have had enough of experts.” The comment is oft cited, and derided, especially in the context of the Covid-19 pandemic, where the public has, or so it is argued, found a new respect for a science that can guide public policy and deliver solutions.

Yet, Michael Gove’s point was more nuanced than is usually credited. It wasn’t scientific advice that he claimed people were fed up with, but “experts with organisations with acronyms saying that they know what is best and getting it consistently wrong.” In other words, his complaint was about specific organised advocacy groups and their intervention in public debate and reporting in the media.

Michael Gove’s extended comment was disingenuous. After all, the Brexit campaign, no less than the Remain campaign, drew upon arguments from think tanks and lobby groups. Moreover, since the referendum, the Government has consistently mobilised the claimed expert opinion of organisations in justification of their policies. Indeed, as Layla Aitlhadj and John Holmwood in this special issue argue, they have deliberately ‘managed’ civil society groups and supposedly independent reviews, such as that currently underway into the Prevent counter extremism policy.

In fact, there is nothing straightforward about the relationship between expertise and democracy as Stephen Turner (2003) has observed. The development of liberal democracy involves the rise of professional and expert knowledge which underpins the everyday governance of public institutions. At the same time, wider publics are asked to trust that knowledge even where it impinges directly upon their preferences; they are not in a position to evaluate it, except through the mediation of other experts. Elected politicians and governments, in turn, are dependent on expert knowledge to guide their policy choices, which are duly constrained by what is possible on the basis of technical judgements….(More)”

EU Health data centre and a common data strategy for public health


Report by the European Parliament Think Tank: “Regarding health data and its availability and comparability, the Covid-19 pandemic revealed that the EU has no clear health data architecture. The lack of harmonisation in these practices and the absence of an EU-level centre for data analysis and use to support a better response to public health crises are the focus of this study. Through extensive desk review, interviews with key actors, and enquiry into experiences from outside the EU/EEA area, this study highlights that the EU must have the capacity to use data very effectively in order to make data-supported public health policy proposals and inform political decisions. The possible functions and characteristics of an EU health data centre are outlined. The centre can only fulfil its mandate if it has the power and competency to influence Member State public-health-relevant data ecosystems and institutionally link with their national-level actors. The institutional structure, its possible activities and in particular its usage of advanced technologies such as AI are examined in detail….(More)”.