Towards adaptive governance in big data health research: implementing regulatory principles


Chapter by Alessandro Blasimme and Effy Vayena: “While data-enabled health care systems are in their infancy, biomedical research is rapidly adopting the big data paradigm. Digital epidemiology, for example, already employs data generated outside the public health care system – that is, data generated without the intent of using them for epidemiological research – to understand and prevent patterns of disease in populations (Salathé 2018). Precision medicine – pooling together genomic, environmental and lifestyle data – also represents a prominent example of how data integration can drive both fundamental and translational research in important medical domains such as oncology (D. C. Collins et al. 2017). All of this requires the collection, storage, analysis and distribution of massive amounts of personal information as well as the use of state-of-the-art data analytics tools to uncover health- and disease-related patterns.


The realization of the potential of big data in health evokes a necessary commitment to a sense of “continuity” articulated in three distinct ways: a) from data generation to use (as in the data-enabled learning health care system); b) from research to clinical practice, e.g. the discovery of new mutations in the context of diagnostics; c) from data that are strictly speaking health data (Vayena and Gasser 2016), e.g. clinical records, to data that are less so, e.g. tweets used in digital epidemiology. These continuities run up against regulatory and governance approaches that were designed for clear data taxonomies, for a less blurred boundary between research and clinical practice, and for rules that focused mostly on data generation and less on their eventual and multiple uses.

The result is significant uncertainty about how responsible use of such large amounts of sensitive personal data could be fostered. In this chapter we focus on the uncertainties surrounding the use of biomedical big data in the context of health research. Are new criteria needed to review biomedical big data research projects? Do current mechanisms, such as informed consent, offer sufficient protection to research participants’ autonomy and privacy in this new context? Do existing oversight mechanisms ensure transparency and accountability in data access and sharing? What monitoring tools are available to assess how personal data are used over time? Is the equitable distribution of benefits accruing from such data uses being considered, and can it be ensured? How is the public being involved – if at all – in decisions about creating and using large data repositories for research purposes? What role do IT (information technology) players, especially the big ones, acquire in research? And what regulatory instruments do we have to ensure that such players do not undermine the independence of research?…(More)”.

Algorithmic Regulation and (Im)perfect Enforcement in the Personalized Economy


Chapter by Christoph Busch in “Data Economy and Algorithmic Regulation: A Handbook on Personalized Law”, C.H.Beck Nomos Hart, 2020: “Technological advances in data collection and information processing make it possible to tailor legal norms to specific individuals and to achieve an unprecedented degree of regulatory precision. However, the benefits of such a “personalized law” must not be conflated with the false promise of “perfect enforcement”. On the contrary, the enforcement of personalized law might be even more challenging and complex than the enforcement of impersonal and uniform rules. Starting from this premise, the first part of this Essay explores how algorithmic personalization of legal rules could be operationalized for tailoring disclosures on digital marketplaces, mitigating discrimination in the sharing economy and optimizing the flow of traffic in smart cities. The second part of the Essay looks into an aspect of personalized law that has so far been rather under-researched: a transition towards personalized law requires not only changes in the design of legal rules but also modifications to compliance monitoring and enforcement. It is argued that personalized law can be conceptualized as a form of algorithmic regulation or governance-by-data. Therefore, the implementation of personalized law requires setting up a regulatory framework for ensuring algorithmic accountability. In a broader perspective, this Essay aims to create a link between the scholarly debate on algorithmic decision-making and automated legal enforcement and the emerging debate on personalized law….(More)”.

Philosophy Is a Public Service


Jonathon Keats at Nautilus: “…One of my primary techniques, adapted from philosophy, is to undertake large-scale thought experiments. In these experiments, I create alternative realities that provide perspectives on our own society, and provoke dialogue about who and what we want to become. Another of my techniques is to create philosophical instruments: tools and devices with which people can collectively investigate the places they inhabit.

The former technique is exemplified by Centuries of the Bristlecone and other environmentally calibrated clocks I’m developing in other cities, such as a timepiece modulated by the flow of rivers in Alaska, currently in planning at the Anchorage Museum.

The latter is exemplified by a project I initiated in Berlin in 2014, which I’ve now instigated in cities around the world. It’s a new kind of camera that produces a single exposure over a span of 100 years. People hide these cameras throughout their city, providing a means for the next generation to observe the decisions that citizens make about their urban environment: decisions about development and gentrification and sustainability. In a sense, these devices are intergenerational surveillance cameras. They prompt people to consider the long-term impact of their actions. They encourage people to act in ways that will change the picture to reflect what they want the next generation to see.

But the truth is that most of my projects—perhaps even the two I’ve just mentioned—combine techniques from philosophy and many other disciplines. In order to map out possible futures for society, especially while navigating the shifting terrain of climate change, the philosopher-explorer needs to be adaptable. And most likely you won’t have all the skills and tools you need. I believe that anyone can become a philosopher-explorer. The practice benefits from more practitioners. No particular abilities are needed, except a capacity for collaboration.

A year ago, I was invited by the Fraunhofer Institute for Building Physics to envision the city of the future. Through Fraunhofer’s artist-in-lab program, I had the opportunity to work with leading scientists and engineers, and to run computer simulations and physical experiments on state-of-the-art equipment in Stuttgart and Holzkirchen, Germany.

My starting point was to consider one of the most serious problems faced by cities today: sea level rise. Global sea levels are expected to increase by two-and-a-half meters by the end of the century, and as much as 15 meters in the next 300 years. With 11 percent of the world population living less than 10 meters above the current sea level, many cities will probably be submerged in the future: mega-cities including New York and Shanghai. One likely response is that people will migrate inland, seeking ever higher elevations.

The question I asked myself was this: Would it make more sense to stay put?…(More)”.

Too much information? The new challenge for decision-makers


Daniel Winter at the Financial Times: “…Concern over technology’s capacity both to shrink the world and complicate it has grown steadily since the second world war — little wonder, perhaps, when the existential threats it throws up have expanded from nuclear weapons to encompass climate change (and any consequent geoengineering), gene editing and AI as well. The financial crisis of 2008, in which poorly understood investment instruments made economies totter, has added to the unease over our ability to make sense of things.

Attempts to codify best practice in sense-making first preoccupied cold war planners, went on to exercise (often profitably) business academics and management consultants, and now draw large audiences online.

Blogs, podcasts and YouTube channels such as Rebel Wisdom and Future Thinkers aim to arm their followers with the tools they need to understand the world, and make the right decisions. Daniel Schmachtenberger is one such voice, whose interviews on YouTube and his podcast Civilization Emerging have reached hundreds of thousands of people.

“Due to increasing technological capacity — increasing population multiplied by increasing impact per person — we’re making more and more consequential choices with worse and worse sense-making to inform those choices,” he says in one video. “Exponential tech is leading to exponential disinformation.”

Strengthening individuals’ ability to handle and filter information would go a long way towards improving the “information ecology”, Mr Schmachtenberger argues. People need to get used to handling complex information and should train themselves to be less distracted. “The impulse to say, ‘hey, make it really simple so everyone can get it’ and the impulse to say ‘[let’s] help people actually make sense of the world well’ are different things,” he says.

Of course, societies have long been accustomed to handling complexity. No one person can possibly memorise the entirety of US law or be an expert in every field of medicine. Libraries, databases, and professional and academic networks exist to aggregate expertise.

The increasing bombardment of data — the growing amount of evidence that can inform any course of action — pushes such systems to the limit, prompting people to offload the work to computers. Yet this only defers the problem. As AI becomes more sophisticated, its decision-making processes become more opaque. The choice as to whether to trust it — to let it run a self-driving car in a crowded town, say — still rests with us.

Far from being able to outsource all complex thinking to the cloud, Prof Guillén warns that leaders will need to be as skilled as ever at handling and critically evaluating information. It will be vital, he suggests, to build flexibility into the policymaking process.

“The feedback loop between the effects of the policy and how you need to recalibrate the policy in real time becomes so much faster and so much more unpredictable,” he says. “That’s the effect that complex policies produce.” A more piecemeal approach could better suit regulation in fast-moving fields, he argues, with shorter “bursts” of rulemaking, followed by analysis of the effects and then adjustments or additions where necessary.

Yet however adept policymakers become at dealing with a complex world, their task will at some point always resist simplification. That point is where the responsibility resides. Much as we may wish it otherwise, governance will always be as much an art as a science….(More)”.

Official Statistics 4.0: Verified Facts for People in the 21st Century


Book by Walter J. Radermacher: “This book explores official statistics and their social function in modern societies. Digitisation and globalisation are creating completely new opportunities and risks, a context in which facts (can) play an enormously important part if they are produced with a quality that makes them credible and purpose-specific. In order for this to actually happen, official statistics must continue to actively pursue the modernisation of their working methods.

This book is not about the technical and methodological challenges associated with digitisation and globalisation; rather, it focuses on statistical sociology, which scientifically deals with the peculiarities and pitfalls of governing-by-numbers, and assigns statistics a suitable position in the future informational ecosystem. Further, the book provides a comprehensive overview of modern issues in official statistics, embodied in a historical and conceptual framework that endows it with different and innovative perspectives. Central to this work is the quality of statistical information provided by official statistics. The implementation of the UN Sustainable Development Goals in the form of indicators is another driving force in the search for answers, and is addressed here….(More)”

Lack of guidance leaves public services in limbo on AI, says watchdog


Dan Sabbagh at the Guardian: “Police forces, hospitals and councils struggle to understand how to use artificial intelligence because of a lack of clear ethical guidance from the government, according to the country’s only surveillance regulator.

The surveillance camera commissioner, Tony Porter, said he received requests for guidance all the time from public bodies which do not know where the limits lie when it comes to the use of facial, biometric and lip-reading technology.

“Facial recognition technology is now being sold as standard in CCTV systems, for example, so hospitals are having to work out if they should use it,” Porter said. “Police are increasingly wearing body cameras. What are the appropriate limits for their use?

“The problem is that there is insufficient guidance for public bodies to know what is appropriate and what is not, and the public have no idea what is going on because there is no real transparency.”

The watchdog’s comments came as it emerged that Downing Street had commissioned a review led by the Committee on Standards in Public Life, whose chairman had called on public bodies to reveal when they use algorithms in decision making.

Lord Evans, a former MI5 chief, told the Sunday Telegraph that “it was very difficult to find out where AI is being used in the public sector” and that “at the very minimum, it should be visible, and declared, where it has the potential for impacting on civil liberties and human rights and freedoms”.

AI is increasingly deployed across the public sector in surveillance and elsewhere. The high court ruled in September that the police use of automatic facial recognition technology to scan people in crowds was lawful.

Its use by South Wales police was challenged by Ed Bridges, a former Lib Dem councillor, who noticed the cameras when he went out to buy a lunchtime sandwich, but the court held that the intrusion into privacy was proportionate….(More)”.

A Matter of Trust: Higher Education Institutions as Information Fiduciaries in an Age of Educational Data Mining and Learning Analytics


Paper by Kyle M. L. Jones, Alan Rubel and Ellen LeClere: “Higher education institutions are mining and analyzing student data to effect educational, political, and managerial outcomes. Done under the banner of “learning analytics,” this work can—and often does—surface sensitive data and information about, inter alia, a student’s demographics, academic performance, offline and online movements, physical fitness, mental wellbeing, and social network. With these data, institutions and third parties are able to describe student life, predict future behaviors, and intervene to address academic or other barriers to student success (however defined). Learning analytics, consequently, raise serious issues concerning student privacy, autonomy, and the appropriate flow of student data.

We argue that issues around privacy lead to valid questions about the degree to which students should trust their institution to use learning analytics data and other artifacts (algorithms, predictive scores) with their interests in mind. We argue that higher education institutions are paradigms of information fiduciaries. As such, colleges and universities have a special responsibility to their students. In this article, we use the information fiduciary concept to analyze cases when learning analytics violate an institution’s responsibility to its students….(More)”.

Practical Knowledge: Sustaining Massively-Multiplayer Innovation


Paper by Amar Bhide: “Governments and universities are pouring money into more ‘practical’ research – ‘translational’ medicine and ‘evidence-based’ policies in education, public health and economic development, for instance. But just translating or applying science rarely produces practical advances – and an inflexible adherence to the methods of natural or social scientists can do more harm than good. Instead, I propose a general approach – and specific research topics – to advance practical knowledge and study its distinctive contemporary nature….(More)”

Coping with societal challenges: Lessons for innovation policy governance


Paper by Jan Fagerberg & Gernot Hutschenreiter: “Grand societal challenges, such as global warming, can only be adequately dealt with through wide-ranging changes in technology, production and consumption, and ways of life, that is, through innovation. Furthermore, change will involve a variety of sectors or parts of the economy and society, and these change processes must be sufficiently consistent in order to achieve the desired results. This poses huge challenges for policy-making.

In this paper we focus on implications for the governance of innovation policy, i.e., policies influencing a country’s innovation performance. Based on a systemic understanding of innovation and the factors shaping it, the paper highlights the need for effective coordination of policies influencing innovation and the changes in innovation policy governance this may require. To throw further light on how this may be realised, the paper discusses evidence on national innovation policy practice from Finland, the Netherlands and Sweden, drawing on the country reviews of innovation policy conducted by the OECD as well as other sources. It is concluded that for innovation policy to tackle societal challenges effectively, clearer goals and stronger and better coordination among the various actors – both public and private – whose actions matter for innovation performance will be required. Based on the experiences of the three countries, the paper particularly considers the role that comprehensive and inclusive innovation policy councils, with the prime minister in a central role, may play in such a process….(More)”.

One Nation Tracked: An investigation into the smartphone tracking industry


Stuart A. Thompson and Charlie Warzel at the New York Times: “…For brands, following someone’s precise movements is key to understanding the “customer journey” — every step of the process from seeing an ad to buying a product. It’s the Holy Grail of advertising, one marketer said, the complete picture that connects all of our interests and online activity with our real-world actions.

Pointillist location data also has some clear benefits to society. Researchers can use the raw data to provide key insights for transportation studies and government planners. The City Council of Portland, Ore., unanimously approved a deal to study traffic and transit by monitoring millions of cellphones. Unicef announced a plan to use aggregated mobile location data to study epidemics, natural disasters and demographics.

For individual consumers, the value of constant tracking is less tangible. And the lack of transparency from the advertising and tech industries raises still more concerns.

Does a coupon app need to sell second-by-second location data to other companies to be profitable? Does that really justify allowing companies to track millions and potentially expose our private lives?

Data companies say users consent to tracking when they agree to share their location. But those consent screens rarely make clear how the data is being packaged and sold. If companies were clearer about what they were doing with the data, would anyone agree to share it?

What about data collected years ago, before hacks and leaks made privacy a forefront issue? Should it still be used, or should it be deleted for good?

If it’s possible that data stored securely today can easily be hacked, leaked or stolen, is this kind of data worth that risk?

Is all of this surveillance and risk worth it merely so that we can be served slightly more relevant ads? Or so that hedge fund managers can get richer?

The companies profiting from our every move can’t be expected to voluntarily limit their practices. Congress has to step in to protect Americans’ needs as consumers and rights as citizens.

Until then, one thing is certain: We are living in the world’s most advanced surveillance system. This system wasn’t created deliberately. It was built through the interplay of technological advance and the profit motive. It was built to make money. The greatest trick technology companies ever played was persuading society to surveil itself….(More)”.