The Ethics and Laws of Medical Big Data


Chapter by Hrefna Gunnarsdottir et al: “The COVID-19 pandemic has highlighted that leveraging medical big data can help to better predict and control outbreaks from the outset. However, there are still challenges to overcome in the 21st century to efficiently use medical big data, promote innovation and public health activities and, at the same time, adequately protect individuals’ privacy. The metaphor that property is a “bundle of sticks”, each representing a different right, applies equally to medical big data. Understanding medical big data in this way raises a number of questions, including: Who has the right to make money off its buying and selling, or is it inalienable? When does medical big data become sufficiently stripped of identifiers that the rights of an individual concerning the data disappear? How have different regimes such as the General Data Protection Regulation in Europe and the Health Insurance Portability and Accountability Act in the US answered these questions differently? In this chapter, we will discuss three topics: (1) privacy and data sharing, (2) informed consent, and (3) ownership. We will identify and examine ethical and legal challenges and make suggestions on how to address them. In our discussion of each of the topics, we will also give examples related to the use of medical big data during the COVID-19 pandemic, though the issues we raise extend far beyond it….(More)”.

The Data Shake: Opportunities and Obstacles for Urban Policy Making


Book edited by Grazia Concilio, Paola Pucci, Lieven Raes and Geert Mareels: “This open access book represents one of the key milestones of PoliVisu, an H2020 research and innovation project funded by the European Commission under the call “Policy-development in the age of big data: data-driven policy-making, policy-modelling and policy-implementation”.

It investigates the operational and organizational implications of using the growing amount of available data in policy-making processes, highlighting how data makes the experimental dimension of policy making increasingly exploitable for more effective and sustainable decisions.

The first section of the book introduces the key questions highlighted by the PoliVisu project, which still represent operational and strategic challenges in the exploitation of data potentials in urban policy making. The second section explores how data and data visualisations can assume different roles in the different stages of a policy cycle and profoundly transform policy making….(More)”.

The Flip Side of Free: Understanding the Economics of the Internet


Book by Michael Kende: “The upside of the Internet is free Wi-Fi at Starbucks, Facetime over long distances, and nearly unlimited data for downloading or streaming. The downside is that our data goes to companies that use it to make money, our financial information is exposed to hackers, and the market power of technology companies continues to increase. In The Flip Side of Free, Michael Kende shows that free Internet comes at a price. We’re beginning to realize this. Our all-purpose techno-caveat is “I love my smart speaker,” but is it really tracking everything I do? Listening to everything I say?

Kende explains the unique economics of the Internet and the paradoxes that result. The most valuable companies in the world are now Internet companies, built on data often exchanged for free content and services. Many users know the impact of this trade-off on privacy but continue to use the services anyway. Moreover, although the Internet lowers barriers for companies to enter markets, it is hard to compete with the largest providers. We complain about companies having too much data, but developing countries without widespread Internet usage may suffer from the reverse: not enough data collection for the development of advanced services—which leads to a worsening data divide between developed and developing countries.

What’s the future of free? Data is the price of free service, and the new currency of the Internet age. There’s nothing necessarily wrong with free, Kende says, as long as we anticipate and try to mitigate what’s on the flip side…(More)”.

The Third Wave of Open Data Toolkit


The GovLab: “Today, as part of Open Data Week 2021, the Open Data Policy Lab is launching The Third Wave of Open Data Toolkit, which provides organizations with specific operational guidance on how to foster responsible, effective, and purpose-driven re-use. The toolkit—authored by Andrew Young, Andrew J. Zahuranec, Stefaan G. Verhulst, and Kateryna Gazaryan—supports the work of data stewards, responsible data leaders at public, private, and civil society organizations empowered to seek new ways to create public value through cross-sector data collaboration. The toolkit provides this support in a few different ways.

First, it offers a framework to make sense of the present and future open data ecosystem. Acknowledging that data re-use is the result of many stages, the toolkit separates each stage, identifying the ways the data lifecycle plays into data collaboration, the way data collaboration plays into the production of insights, the way insights play into conditions that enable further collaboration, and so on. By understanding the processes by which data is created and used, data stewards can promote better and more impactful data management.

[Figure: Third Wave Framework]

Second, the toolkit offers eight primers showing how data stewards can operationalize the actions previously identified as being part of the third wave. Each primer includes a brief explanation of what each action entails, offers some specific ways data stewards can implement these actions, and lists some supplementary pieces that might be useful in this work. The primers, which are available as part of the toolkit and as standalone two-pagers, are…(More)”.

Policy 2.0 in the Pandemic World: What Worked, What Didn’t, and Why


Blog by David Osimo: “…So how, then, did these new tools perform when confronted with the once-in-a-lifetime crisis of a vast global pandemic?

It turns out, some things worked. Others didn’t. And the question of how these new policymaking tools functioned in the heat of battle is already generating valuable ammunition for future crises.

So what worked?

Policy modelling – an analytical framework designed to anticipate the impact of decisions by simulating the interaction of multiple agents in a system rather than just the independent actions of atomised and rational humans – took centre stage in the pandemic and emerged with reinforced importance in policymaking. Notably, it helped governments predict how and when to introduce lockdowns or open up. But even there uptake was limited. A recent survey showed that most of the 28 models used in different countries to fight the pandemic were traditional, rather than the modern “agent-based models” or “system dynamics” approaches supposed to deal best with uncertainty. Meanwhile, the concepts of systems science were becoming prominent and widely communicated. It quickly became clear in the course of the crisis that social distancing was more a method to reduce the systemic pressure on health services than a way to avoid individual contagion (the so-called “flatten the curve” approach).
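
To make the contrast concrete, here is a minimal agent-based sketch of the “flatten the curve” logic described above. It is purely illustrative (toy parameters, random mixing), not one of the surveyed national models:

```python
import random

def simulate(population=1000, initial_infected=5, contacts_per_day=8,
             transmission_prob=0.05, recovery_days=10, days=120, seed=42):
    """Toy agent-based SIR model: each infectious agent meets random others daily."""
    rng = random.Random(seed)
    # 0 = susceptible, >0 = days of infectiousness remaining, -1 = recovered
    state = [0] * population
    for i in rng.sample(range(population), initial_infected):
        state[i] = recovery_days
    peak = 0
    for _ in range(days):
        infected = [i for i, s in enumerate(state) if s > 0]
        peak = max(peak, len(infected))
        for i in infected:
            for _ in range(contacts_per_day):
                j = rng.randrange(population)
                if state[j] == 0 and rng.random() < transmission_prob:
                    state[j] = recovery_days
        for i in infected:  # progress existing infections
            state[i] -= 1
            if state[i] == 0:
                state[i] = -1  # recovered and immune
    return peak

# Fewer daily contacts lowers the epidemic peak: less systemic pressure
# on health services, even if many people are still eventually infected.
print("peak infected, normal contact:  ", simulate(contacts_per_day=8))
print("peak infected, with distancing: ", simulate(contacts_per_day=3))
```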

Open government data has long promised to allow citizens and businesses to build new services at scale and to make government accountable. The pandemic largely confirmed how important this data could be in allowing citizens to analyse things independently. Hundreds of analysts from all walks of life and disciplines used social media to discuss their analysis and predictions, many becoming household names and go-to people in their countries and regions. Yes, this led to noise and a so-called “infodemic,” but overall it served as a fundamental tool to increase confidence and consensus behind the policy measures and to make governments accountable for their actions. For instance, one Catalan analyst demonstrated that vaccines were not being administered at weekends, forcing the government to change its stance. Yet it is also clear that not all went well, most notably on the supply side. Governments published low-quality data: locked in PDFs, released with delays, or riddled with gaps caused by spreadsheet mishandling.

In most cases, there was little demand for sophisticated data publishing solutions such as “linked” or “FAIR” data, although uptake of these kinds of solutions was particularly significant when it came time to share crucial research data. Experts argue that the trend towards open science has accelerated dramatically and irreversibly in the last year, as shown by the portal https://www.covid19dataportal.org/, which enabled the sharing of high-quality data for scientific research….

But other new policy tools proved less easy to use and ultimately ineffective. Collaborative governance, for one, promised to leverage the knowledge of thousands of citizens to improve public policies and services. In practice, methodologies aimed at involving citizens in decision making and service design were of little use. Decisions related to locking down and opening up were taken in closed committees, in top-down mode. Individual exceptions certainly exist: Milan, one of the cities worst hit by the pandemic, launched a co-created strategy for opening up after the lockdown, receiving almost 3,000 contributions to the consultation. But overall, such initiatives had limited impact and visibility. As for the co-design of public services, in times of emergency there was no time for prototyping or focus groups. Services such as emergency financial relief had to be launched in a hurry and “just work.”

Citizen science promised to make every citizen a consensual data source for monitoring complex phenomena in real time through apps and Internet-of-Things sensors. In the pandemic, there were initially great expectations for digital contact-tracing apps to enable real-time monitoring of contagion, most notably through Bluetooth connections on phones. However, they were mostly a disappointment. Citizens were reluctant to install them. And contact tracing soon proved to be much more complicated – and human-intensive – than originally thought. The huge debate pitting technology against privacy ultimately produced very limited impact. Much ado about nothing.

Behavioural economics (commonly known as nudge theory) is probably the most visible failure of the pandemic. It promised to move beyond traditional carrots (public funding) and sticks (regulation) in delivering policy objectives by adopting an experimental method to influence or “nudge” human behaviour towards desired outcomes. The reality is that soft nudges proved an ineffective alternative to hard lockdown choices. What makes this failure uniquely damaging is that such methods took centre stage in the initial phase of the pandemic and particularly informed the United Kingdom’s lax approach in the first months, on the basis of a hypothetical and unproven “behavioural fatigue.” This attracted heavy criticism of the United Kingdom government’s excessive reliance on nudges, a legacy of Prime Minister David Cameron’s administration. The origin of such criticism seems to lie not in the method’s shortcomings per se (it had previously enjoyed success in more narrowly targeted cases) but in the backlash from excessive expectations and promises, epitomised in the quote of a prominent behavioural economist: “It’s no longer a matter of supposition as it was in 2010 […] we can now say with a high degree of confidence these models give you best policy.”

Three factors emerge as the key determinants behind success and failure: maturity, institutions and leadership….(More)”.

2030 Compass CoLab


About: “2030 Compass CoLab invites a group of experts, using an online platform, to contribute their perspectives on potential interactions between the goals in the UN’s 2030 Agenda for Sustainable Development.

By combining the insight of participants who possess broad and diverse knowledge, we hope to develop a richer understanding of how the Sustainable Development Goals (SDGs) may be complementary or conflicting.

2030 Compass CoLab is part of a larger project, The Agenda 2030 Compass Methodology and toolbox for strategic decision making, funded by Vinnova, Sweden’s government agency for innovation.

Other elements of the larger project include:

  • Deliberations by a panel of experts who will convene in a series of live meetings to undertake in-depth analysis of interactions between the goals.
  • Quantitative analysis of SDG indicator time-series data, which will examine historical correlations between progress on the SDGs (for the flavour of such an analysis, see the sketch after this excerpt).
  • Development of a knowledge repository, residing in a new software tool under development as part of the project. This tool will be made available as a resource to guide the decisions of corporate executives, policy makers, and leaders of NGOs.

The overall project was inspired by the work of researchers at the Stockholm Environment Institute, described in Towards systemic and contextual priority setting for implementing the 2030 Agenda, a 2018 paper in Sustainability Science by Nina Weitz, Henrik Carlsen, Måns Nilsson, and Kristian Skånberg….(More)”.
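
The correlation analysis mentioned in the excerpt’s second bullet can be illustrated with a short sketch. The goal labels and numbers below are placeholders, not the project’s actual data or method:

```python
import pandas as pd

# Hypothetical annual progress indicators, one column per SDG
# (the real project draws on official SDG indicator time series).
data = pd.DataFrame({
    "SDG3_health":   [61, 63, 64, 66, 69, 71],
    "SDG7_energy":   [40, 44, 47, 52, 55, 60],
    "SDG13_climate": [55, 54, 52, 51, 49, 48],
}, index=range(2015, 2021))

# Correlate year-on-year changes rather than raw levels, so that shared
# long-run trends do not masquerade as interactions between goals.
changes = data.diff().dropna()
print(changes.corr().round(2))
```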

Radical Secrecy: The Ends of Transparency in Datafied America


Book by Clare Birchall: “When total data surveillance delimits agency and revelations of political wrongdoing fail to have consequences, is transparency the social panacea liberal democracies purport it to be? This book sets forth the provocative argument that progressive social goals would be better served by a radical form of secrecy, at least while state and corporate forces hold an asymmetrical advantage over the less powerful in data control. Clare Birchall asks: How might transparency actually serve agendas that are far from transparent? Can we imagine a secrecy that could act in the service of, rather than against, a progressive politics?

To move beyond atomizing calls for privacy and to interrupt the perennial tension between state security and the public’s right to know, Birchall adapts Édouard Glissant’s thinking to propose a digital “right to opacity.” As a crucial element of radical secrecy, she argues, this would eventually give rise to a “postsecret” society, offering an understanding and experience of the political that is free from the false choice between secrecy and transparency. She grounds her arresting story in case studies including the varied presidential styles of George W. Bush, Barack Obama, and Donald Trump; the Snowden revelations; conspiracy theories espoused or endorsed by Trump; WikiLeaks and guerrilla transparency; and the opening of the state through data portals.

Postsecrecy is the necessary condition for imagining, finally, an alternative vision of “the good,” of equality, as neither shaped by neoliberal incarnations of transparency nor undermined by secret state surveillance. Not least, postsecrecy reimagines collective resistance in the era of digital data….(More)”.

Intellectual Property and Artificial Intelligence


A literature review by the Joint Research Centre: “Artificial intelligence has entered into the sphere of creativity and ingenuity. Recent headlines refer to paintings produced by machines, music performed or composed by algorithms, or drugs discovered by computer programs. This paper discusses the possible implications of the development and adoption of this new technology for the intellectual property framework and presents the opinions expressed by practitioners and legal scholars in recent publications. The literature review, although not intended to be exhaustive, reveals a series of questions that call for further reflection. These concern the protection of artificial intelligence by intellectual property, the use of data to feed algorithms, the protection of the results generated by intelligent machines, as well as the relationship between the ethical requirements of transparency and explainability and the interests of rights holders….(More)”.

How Digital Trust Varies Around the World


Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi at Harvard Business Review: “As economies around the world digitalize rapidly in response to the pandemic, one component that can sometimes get left behind is user trust. What does it take to build out a digital ecosystem that users will feel comfortable actually using? To answer this question, the authors explored four components of digital trust: the security of an economy’s digital environment; the quality of the digital user experience; the extent to which users report trust in their digital environment; and the extent to which users actually use the digital tools available to them. They then used almost 200 indicators to rank 42 global economies on their performance in each of these four metrics, finding a number of interesting trends around how different economies have developed mechanisms for engendering trust, as well as how different types of trust do — or don’t — correspond to other digital development metrics…(More)”.
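
A rough sketch of how indicator-level data can be rolled up into pillar scores and ranks follows. The pillar names echo the four components described above, but the data, min-max normalization, and equal weighting are hypothetical, not the authors’ actual methodology:

```python
import pandas as pd

# Hypothetical raw scores for a few economies on the four digital trust
# pillars (the real study aggregates almost 200 underlying indicators).
raw = pd.DataFrame({
    "environment_security": [0.72, 0.55, 0.81],
    "user_experience":      [0.60, 0.70, 0.75],
    "reported_attitudes":   [0.58, 0.66, 0.52],
    "actual_behavior":      [0.80, 0.62, 0.77],
}, index=["Economy A", "Economy B", "Economy C"])

# Min-max normalize each pillar to [0, 1], then average into a composite.
normalized = (raw - raw.min()) / (raw.max() - raw.min())
composite = normalized.mean(axis=1)
print(composite.rank(ascending=False).astype(int).sort_values())
```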

Far-right news sources on Facebook more engaging


Study by Laura Edelson, Minh-Kha Nguyen, Ian Goldstein, Oana Goga, Tobias Lauinger, and Damon McCoy: “Facebook has become a major way people find news and information in an increasingly politically polarized nation. We analyzed how users interacted with different types of posts promoted as news in the lead-up to and aftermath of the U.S. 2020 elections. We found that politically extreme sources tend to generate more interactions from users. In particular, content from sources rated as far-right by independent news rating services consistently received the highest engagement per follower of any partisan group. Additionally, frequent purveyors of far-right misinformation had on average 65% more engagement per follower than other far-right pages. We found:

  • Sources of news and information rated as far right generate the highest average number of interactions per follower with their posts, followed by sources from the far left, and then news sources closer to the center of the political spectrum.
  • Looking at the far right, misinformation sources far outperform non-misinformation sources. Far-right sources designated as spreaders of misinformation had an average of 426 interactions per thousand followers per week, while non-misinformation sources had an average of 259 weekly interactions per thousand followers (a rate computed as in the sketch after this excerpt).
  • Engagement with posts from far-right and far-left news sources peaked around Election Day and again on January 6, the day of the certification of the electoral count and the U.S. Capitol riot. For posts from all other political leanings of news sources, the increase in engagement was much less intense.
  • Center and left partisan categories incur a misinformation penalty, while right-leaning sources do not. Center sources of misinformation, for example, performed about 70% worse than their non-misinformation counterparts. (Note: center sources of misinformation tend to be sites presenting as health news that have no obvious ideological orientation.)…(More)”.
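
The per-follower engagement rate cited in the study is straightforward to compute. Here is a minimal sketch; the example page is invented, while the averages in the comment come from the excerpt above:

```python
def weekly_engagement_rate(interactions: int, followers: int) -> float:
    """Interactions per thousand followers over a one-week window."""
    return interactions / followers * 1_000

# An invented page with 120,000 interactions in a week and 300,000
# followers yields 400 interactions per thousand followers, on the same
# scale as the study's averages (426 for far-right misinformation
# sources, 259 for other far-right sources).
print(weekly_engagement_rate(120_000, 300_000))  # 400.0
```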