EU Health data centre and a common data strategy for public health


Report by the European Parliament Think Tank: “Regarding the availability and comparability of health data, the Covid-19 pandemic revealed that the EU has no clear health data architecture. The lack of harmonisation in these practices and the absence of an EU-level centre for data analysis and use to support a better response to public health crises are the focus of this study. Through extensive desk review, interviews with key actors, and enquiry into experiences from outside the EU/EEA area, this study highlights that the EU must have the capacity to use data very effectively in order to make data-supported public health policy proposals and inform political decisions. The possible functions and characteristics of an EU health data centre are outlined. The centre can only fulfil its mandate if it has the power and competency to influence Member State public-health-relevant data ecosystems and institutionally link with their national-level actors. The institutional structure, its possible activities and in particular its use of advanced technologies such as AI are examined in detail….(More)”.

Government data management for the digital age


Essay by Axel Domeyer, Solveigh Hieronimus, Julia Klier, and Thomas Weber: “Digital society’s lifeblood is data—and governments have lots of data, representing a significant latent source of value for both the public and private sectors. If used effectively, and keeping in mind ever-increasing requirements with regard to data protection and data privacy, data can simplify delivery of public services, reduce fraud and human error, and catalyze massive operational efficiencies.

Despite these potential benefits, governments around the world remain largely unable to capture the opportunity. The key reason is that data are typically dispersed across a fragmented landscape of registers (datasets used by government entities for a specific purpose), which are often managed in organizational silos. Data are routinely stored in formats that are hard to process or in places where digital access is impossible. The consequence is that data are not available where needed, progress on digital government is inhibited, and citizens have little transparency on what data the government stores about them or how they are used.

Only a handful of countries have taken significant steps toward addressing these challenges. As other governments consider their options, the experiences of these countries may provide them with valuable guidance and also reveal five actions that can help governments unlock the value that is on their doorsteps.

As societies take steps to enhance data management, questions on topics such as data ownership, privacy concerns, and appropriate measures against security breaches will need to be answered by each government. The purpose of this article is to outline the positive benefits of modern data management and provide a perspective on how to get there…(More)”.

The Battle for Digital Privacy Is Reshaping the Internet


Brian X. Chen at The New York Times: “Apple introduced a pop-up window for iPhones in April that asks people for their permission to be tracked by different apps.

Google recently outlined plans to disable a tracking technology in its Chrome web browser.

And Facebook said last month that hundreds of its engineers were working on a new method of showing ads without relying on people’s personal data.

The developments may seem like technical tinkering, but they were connected to something bigger: an intensifying battle over the future of the internet. The struggle has entangled tech titans, upended Madison Avenue and disrupted small businesses. And it heralds a profound shift in how people’s personal information may be used online, with sweeping implications for the ways that businesses make money digitally.

At the center of the tussle is what has been the internet’s lifeblood: advertising.

More than 20 years ago, the internet drove an upheaval in the advertising industry. It eviscerated newspapers and magazines that had relied on selling classified and print ads, and threatened to dethrone television advertising as the prime way for marketers to reach large audiences….

If personal information is no longer the currency that people give for online content and services, something else must take its place. Media publishers, app makers and e-commerce shops are now exploring different paths to surviving a privacy-conscious internet, in some cases overturning their business models. Many are choosing to make people pay for what they get online by levying subscription fees and other charges instead of using their personal data.

Jeff Green, the chief executive of the Trade Desk, an ad-technology company in Ventura, Calif., that works with major ad agencies, said the behind-the-scenes fight was fundamental to the nature of the web…(More)”

Harms of AI


Paper by Daron Acemoglu: “This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy’s most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI’s promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment – to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient….(More)”.

UN urges moratorium on use of AI that imperils human rights


Jamey Keaten and Matt O’Brien at the Washington Post: “The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.

Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.

Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior and certain AI-based tools that categorize people into clusters such as by ethnicity or gender.

AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.

Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”

Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards….(More)” (Report).

Introducing collective crisis intelligence


Blogpost by Annemarie Poorterman et al: “…It has been estimated that over 600,000 Syrians have been killed since the start of the civil war, including tens of thousands of civilians killed in airstrike attacks. Predicting where and when strikes will occur and issuing time-critical warnings enabling civilians to seek safety is an ongoing challenge. It was this problem that motivated the development of Sentry Syria, an early warning system that alerts citizens to a possible airstrike. Sentry uses acoustic sensor data, reports from on-the-ground volunteers, and open media ‘scraping’ to detect warplanes in flight. It uses historical data and AI to validate the information from these different data sources and then issues warnings to civilians 5-10 minutes in advance of a strike via social media, TV, radio and sirens. These extra minutes can be the difference between life and death.

Sentry Syria is just one example of an emerging approach in humanitarian response that we call collective crisis intelligence (CCI). CCI methods combine the collective intelligence (CI) of local community actors (e.g. volunteer plane spotters in the case of Sentry) with a wide range of additional data sources, artificial intelligence (AI) and predictive analytics to support crisis management and reduce the devastating impacts of humanitarian emergencies….(More)”
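
Sentry's internal design is not public, but the excerpt above sketches its basic architecture: fuse acoustic detections, volunteer spotter reports and scraped media mentions, validate them, and warn civilians when confidence is high enough. The snippet below is a minimal, hypothetical illustration of that fusion step; the source weights, threshold and field names are assumptions made for the sketch, not Sentry Syria's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of multi-source fusion for an airstrike early-warning
# system. Weights, thresholds and field names are illustrative assumptions,
# not the actual Sentry Syria implementation.

@dataclass
class Observation:
    source: str        # "acoustic", "volunteer" or "media"
    location: str      # place name or grid cell
    confidence: float  # 0.0-1.0, as reported by the source

# How much each source type is trusted (in practice this would be calibrated).
SOURCE_WEIGHTS = {"acoustic": 0.5, "volunteer": 0.35, "media": 0.15}
WARNING_THRESHOLD = 0.6

def fused_confidence(observations: list[Observation], location: str) -> float:
    """Combine all observations for one location into a single score."""
    relevant = [o for o in observations if o.location == location]
    if not relevant:
        return 0.0
    # Weighted sum of per-source confidences, capped at 1.0.
    score = sum(SOURCE_WEIGHTS[o.source] * o.confidence for o in relevant)
    return min(score, 1.0)

def should_warn(observations: list[Observation], location: str) -> bool:
    """Issue a warning only when the fused score clears the threshold."""
    return fused_confidence(observations, location) >= WARNING_THRESHOLD

if __name__ == "__main__":
    obs = [
        Observation("acoustic", "Idlib", 0.9),   # sensor detects a warplane
        Observation("volunteer", "Idlib", 0.8),  # spotter confirms heading
        Observation("media", "Aleppo", 0.4),     # unconfirmed media report
    ]
    for place in ("Idlib", "Aleppo"):
        print(place, should_warn(obs, place))
```

In a real deployment the weights and threshold would be calibrated against historical strike data (the role the excerpt assigns to AI and historical data) rather than fixed by hand.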

A Vulnerable System: The History of Information Security in the Computer Age


Book by Andrew J. Stewart: “As threats to the security of information pervade the fabric of everyday life, A Vulnerable System describes how, even as the demand for information security increases, the needs of society are not being met. The result is that the confidentiality of our personal data, the integrity of our elections, and the stability of foreign relations between countries are increasingly at risk.

Andrew J. Stewart convincingly shows that emergency software patches and new security products cannot provide the solution to threats such as computer hacking, viruses, software vulnerabilities, and electronic spying. Profound underlying structural problems must first be understood, confronted, and then addressed.

A Vulnerable System delivers a long view of the history of information security, beginning with the creation of the first digital computers during the Cold War. From the key institutions of the so-called military industrial complex in the 1950s to Silicon Valley start-ups in the 2020s, the relentless pursuit of new technologies has come at great cost. The absence of knowledge regarding the history of information security has caused the lessons of the past to be forsaken for the novelty of the present, and has led us to be collectively unable to meet the needs of the current day. From the very beginning of the information age, claims of secure systems have been crushed by practical reality.

The myriad risks to technology, Stewart reveals, cannot be addressed without first understanding how we arrived at this moment. A Vulnerable System is an enlightening and sobering history of a topic that affects crucial aspects of our lives….(More)”.

New report confirms positive momentum for EU open science


Press release: “The Commission released the results and datasets of a study monitoring the open access mandate in Horizon 2020. With a steady increase over the years and an average open access rate of 83% for scientific publications, the European Commission is at the forefront of research and innovation funders, concluded the consortium formed by the analysis company PPMI (Lithuania), research and innovation centre Athena (Greece) and Maastricht University (the Netherlands).

The Commission sought advice on a process and reliable metrics through which to monitor all aspects of the open access requirements in Horizon 2020, and to inform how best to do so for Horizon Europe – which has a more stringent and comprehensive set of rights and obligations for Open Science.

The key findings of the study indicate that the European Commission’s early leadership in Open Science policy has paid off. The Excellent Science pillar in Horizon 2020 has led the success story, with an open access rate of 86%. Among the leaders within this pillar are the European Research Council (ERC) and the Future and Emerging Technologies (FET) programme, with open access rates of over 88%.

Other interesting facts:

  • In terms of article processing charges (APCs), the study estimated the average cost of publishing an open access article in Horizon 2020 to be around EUR 2,200. APCs for articles published in ‘hybrid’ journals (a cost that will no longer be eligible under Horizon Europe) are higher, averaging EUR 2,600.
  • Compliance in terms of depositing open access publications in a repository (even when publishing open access through a journal) is relatively high (81.9%), indicating that the current policy of depositing is well understood and implemented by researchers.
  • Regarding licences, 49% of Horizon 2020 publications were published using Creative Commons (CC) licences, which permit reuse (with various levels of restrictions), while 33% used publisher-specific licences that place restrictions on text and data mining (TDM).
  • Institutional repositories have responded in a satisfactory manner to the challenge of providing FAIR access to their publications, amending internal processes and metadata to incorporate necessary changes: 95% of deposited publications include in their metadata some type of persistent identifier (PID).
  • Datasets in repositories present a low compliance level, as only approximately 39% of Horizon 2020 deposited datasets are findable (i.e., the metadata includes a PID and a URL to the data file), and only around 32% of deposited datasets are accessible (i.e., the data file can be fetched using the URL in the metadata). Horizon Europe will hopefully lead to better results.
  • The study also identified gaps in the existing Horizon 2020 open access monitoring data, which pose further difficulties in assessing compliance. Self-reporting by beneficiaries also highlighted a number of issues…(More)”
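
The study's definitions of “findable” (metadata includes a PID and a URL to the data file) and “accessible” (the data file can actually be fetched from that URL) lend themselves to automated checking. The sketch below illustrates, under assumed metadata field names and a simple HTTP check, how such a compliance test might look; it is not the monitoring consortium's actual methodology.

```python
import requests  # used only for the (network-dependent) accessibility check

# Hypothetical metadata records for deposited datasets. Field names are
# illustrative, not those of any particular repository or of the study.
records = [
    {"pid": "10.5281/zenodo.0000000", "data_url": "https://example.org/data.csv"},
    {"pid": None, "data_url": "https://example.org/other.csv"},
]

def is_findable(record: dict) -> bool:
    """Findable (per the study's definition): metadata carries a PID and a data-file URL."""
    return bool(record.get("pid")) and bool(record.get("data_url"))

def is_accessible(record: dict, timeout: float = 10.0) -> bool:
    """Accessible: the data file can actually be fetched from the URL in the metadata."""
    url = record.get("data_url")
    if not url:
        return False
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        return False

# The findability check runs offline; the accessibility check needs network access.
findable = sum(is_findable(r) for r in records)
print(f"{findable}/{len(records)} datasets findable")
```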

Using Satellite Imagery and Machine Learning to Estimate the Livelihood Impact of Electricity Access


Paper by Nathan Ratledge et al: “In many regions of the world, sparse data on key economic outcomes inhibits the development, targeting, and evaluation of public policy. We demonstrate how advancements in satellite imagery and machine learning can help ameliorate these data and inference challenges. In the context of an expansion of the electrical grid across Uganda, we show how a combination of satellite imagery and computer vision can be used to develop local-level livelihood measurements appropriate for inferring the causal impact of electricity access on livelihoods. We then show how ML-based inference techniques deliver more reliable estimates of the causal impact of electrification than traditional alternatives when applied to these data. We estimate that grid access improves village-level asset wealth in rural Uganda by 0.17 standard deviations, more than doubling the growth rate over our study period relative to untreated areas. Our results provide country-scale evidence on the impact of a key infrastructure investment, and provide a low-cost, generalizable approach to future policy evaluation in data sparse environments….(More)”.
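
The paper combines two steps: computer-vision models that turn satellite imagery into village-level asset-wealth estimates, and ML-based inference of the causal effect of electrification on those estimates. The toy sketch below conveys only the flavour of the second step, using a plain difference-in-differences on synthetic wealth indices; the numbers, variable names and estimator are assumptions made for illustration, not the authors' data or method.

```python
import numpy as np

# Toy difference-in-differences on hypothetical satellite-derived asset-wealth
# indices (standardised units). Illustrative only; not the authors' estimator.
rng = np.random.default_rng(0)

n = 500  # villages per group
# Baseline wealth, a common growth trend, and a treatment effect for grid access.
treated_pre  = rng.normal(0.0, 1.0, n)
treated_post = treated_pre + 0.15 + 0.17 + rng.normal(0, 0.1, n)  # trend + effect
control_pre  = rng.normal(0.0, 1.0, n)
control_post = control_pre + 0.15 + rng.normal(0, 0.1, n)         # trend only

# Difference-in-differences: (change in treated villages) minus (change in controls).
did = (treated_post.mean() - treated_pre.mean()) - (control_post.mean() - control_pre.mean())
print(f"Estimated effect of grid access: {did:.3f} SD of asset wealth")
```

In the paper itself the wealth measurements come from models applied to imagery rather than surveys, and the abstract notes that ML-based inference techniques deliver more reliable causal estimates than traditional alternatives when applied to such machine-derived data.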

Social welfare gains from innovation commons: Theory, evidence, and policy implications


Paper by Jason Potts, Andrew W. Torrance, Dietmar Harhoff and Eric A. von Hippel: “Innovation commons – which we define as repositories of freely-accessible, “open source” innovation-related information and data – are a very significant resource for innovating and innovation-adopting firms and individuals: Availability of free data and information reduces the innovation-specific private or open investment required to make the next innovative advance. Despite the clear social welfare value of innovation commons under many conditions, academic innovation research and innovation policymaking have to date focused almost entirely on enhancing private incentives to innovate by enabling innovators to keep some types of innovation-related information at least temporarily apart from the commons, via intellectual property rights.


In this paper, our focus is squarely on innovation commons theory, evidence, and policy implications. We first discuss the varying nature of and contents of innovation commons extant today. We summarize what is known about their functioning, their scale, the value they provide to innovators and to general social welfare, and the mechanisms by which this is accomplished. Perhaps somewhat counterintuitively, and with the important exception of major digital platform firms, we find that many who develop innovation-related information at private cost have private economic incentives to contribute their information to innovation commons for free access by free riders. We conclude with a discussion of the value of more general support for innovation commons, and how this could be provided by increased private and public investment in innovation commons “engineering”, and by specific forms of innovation policymaking to increase social welfare via enhancement of innovation commons….(More)”.