Data as infrastructure? A study of data sharing legal regimes


Paper by Charlotte Ducuing: “The article discusses the concept of infrastructure in the digital environment, through a study of three data sharing legal regimes: the Public Sector Information Directive (PSI Directive), the discussions on in-vehicle data governance and the freshly adopted data sharing legal regime in the Electricity Directive.

While aiming to contribute to the scholarship on data governance, the article deliberately focuses on network industries. Characterised by the existence of physical infrastructure, these industries have a special relationship to digitisation and ‘platformisation’ and are exposed to specific risks. Adopting an explanatory methodology, the article shows that these regimes rest on two related but distinct sources of inspiration, which remain intertwined and largely unarticulated. By targeting entities deemed ‘monopolist’ with regard to the data they create and hold, data sharing obligations are inspired by competition law and especially the essential facility doctrine. On the other hand, beneficiaries appear to include both operators in related markets that need data to conduct their business (except in the PSI Directive) and third parties at large, with the aim of fostering innovation. The latter rationale illustrates what is called here a purposive view of data as infrastructure. The underlying understanding of ‘raw’ data (management) as infrastructure for all to use may run counter to the ability of the regulated entities to obtain a fair remuneration for ‘their’ data.

Finally, the article pleads for more granularity when mandating data sharing obligations depending upon the purpose. Shifting away from a ‘one-size-fits-all’ solution, the regulation of data could also extend to the ensuing context-specific data governance regime, subject to further research…(More)”.

Paging Dr. Google: How the Tech Giant Is Laying Claim to Health Data


Wall Street Journal: “Roughly a year ago, Google offered health-data company Cerner Corp. an unusually rich proposal.

Cerner was interviewing Silicon Valley giants to pick a storage provider for 250 million health records, one of the largest collections of U.S. patient data. Google dispatched former chief executive Eric Schmidt to personally pitch Cerner over several phone calls and offered around $250 million in discounts and incentives, people familiar with the matter say. 

Google had a bigger goal in pushing for the deal than dollars and cents: a way to expand its effort to collect, analyze and aggregate health data on millions of Americans. Google representatives were vague in answering questions about how Cerner’s data would be used, making the health-care company’s executives wary, the people say. Eventually, Cerner struck a storage deal with Amazon.com Inc. instead.

The failed Cerner deal reveals an emerging challenge to Google’s move into health care: gaining the trust of health care partners and the public. So far, that has hardly slowed the search giant.

Google has struck partnerships with some of the country’s largest hospital systems and most-renowned health-care providers, many of them vast in scope and few of their details previously reported. In just a few years, the company has achieved the ability to view or analyze tens of millions of patient health records in at least three-quarters of U.S. states, according to a Wall Street Journal analysis of contractual agreements. 

In certain instances, the deals allow Google to access personally identifiable health information without the knowledge of patients or doctors. The company can review complete health records, including names, dates of birth, medications and other ailments, according to people familiar with the deals.

The prospect of tech giants’ amassing huge troves of health records has raised concerns among lawmakers, patients and doctors, who fear such intimate data could be used without individuals’ knowledge or permission, or in ways they might not anticipate. 

Google is developing a search tool, similar to its flagship search engine, in which patient information is stored, collated and analyzed by the company’s engineers, on its own servers. The portal is designed for use by doctors and nurses, and eventually perhaps patients themselves, though some Google staffers would have access sooner. 

Google executives and some health systems say that detailed data sharing has the potential to improve health outcomes. Large troves of data help fuel algorithms Google is creating to detect lung cancer, eye disease and kidney injuries. Hospital executives have long sought better electronic record systems to reduce error rates and cut down on paperwork….

Legally, the information gathered by Google can be used for purposes beyond diagnosing illnesses, under laws enacted during the dial-up era. U.S. federal privacy laws make it possible for health-care providers, with little or no input from patients, to share data with certain outside companies. That applies to partners, like Google, with significant presences outside health care. The company says its intentions in health are unconnected with its advertising business, which depends largely on data it has collected on users of its many services, including email and maps.

Medical information is perhaps the last bounty of personal data yet to be scooped up by technology companies. The health data-gathering efforts of other tech giants such as Amazon and International Business Machines Corp. face skepticism from physician and patient advocates. But Google’s push in particular has set off alarm bells in the industry, including over privacy concerns. U.S. senators, as well as health-industry executives, are questioning Google’s expansion and its potential for commercializing personal data….(More)”.

On Digital Disinformation and Democratic Myths


 David Karpf at MediaWell: “…How many votes did Cambridge Analytica affect in the 2016 presidential election? How much of a difference did the company actually make?

Cambridge Analytica has become something of a Rorschach test among those who pay attention to digital disinformation and microtargeted propaganda. Some hail the company as a digital Svengali, harnessing the power of big data to reshape the behavior of the American electorate. Others suggest the company was peddling digital snake oil, with outlandish marketing claims that bore little resemblance to their mundane product.

One thing is certain: the company has become a household name, practically synonymous with disinformation and digital propaganda in the aftermath of the 2016 election. It has claimed credit for the surprising success of the Brexit referendum and for the Trump digital strategy. Journalists such as Carole Cadwalladr and Hannes Grassegger and Mikael Krogerus have published longform articles that dive into the “psychographic” breakthroughs that the company claims to have made. Cadwalladr also exposed the links between the company and a network of influential conservative donors and political operatives. Whistleblower Chris Wylie, who worked for a time as the company’s head of research, further detailed how it obtained a massive trove of Facebook data on tens of millions of American citizens, in violation of Facebook’s terms of service. The Cambridge Analytica scandal has been a driving force in the current “techlash,” and has been the topic of congressional hearings, documentaries, mass-market books, and scholarly articles.

The reasons for concern are numerous. The company’s own marketing materials boasted about radical breakthroughs in psychographic targeting—developing psychological profiles of every US voter so that political campaigns could tailor messages to exploit psychological vulnerabilities. Those marketing claims were paired with disturbing revelations about the company violating Facebook’s terms of service to scrape tens of millions of user profiles, which were then compiled into a broader database of US voters. Cambridge Analytica behaved unethically. It either broke a lot of laws or demonstrated that old laws needed updating. When the company shut down, no one seemed to shed a tear.

But what is less clear is just how different Cambridge Analytica’s product actually was from the type of microtargeted digital advertisements that every other US electoral campaign uses. Many of the most prominent researchers warning the public about how Cambridge Analytica uses our digital exhaust to “hack our brains” are marketing professors, more accustomed to studying the impact of advertising in commerce than in elections. The political science research community has been far more skeptical. An investigation from Nature magazine documented that the evidence of Cambridge Analytica’s independent impact on voter behavior is basically nonexistent (Gibney 2018). There is no evidence that psychographic targeting actually works at the scale of the American electorate, and there is also no evidence that Cambridge Analytica in fact deployed psychographic models while working for the Trump campaign. The company clearly broke Facebook’s terms of service in acquiring its massive Facebook dataset. But it is not clear that the massive dataset made much of a difference.

At issue in the Cambridge Analytica case are two baseline assumptions about political persuasion in elections. First, what should be our point of comparison for digital propaganda in elections? Second, how does political persuasion in elections compare to persuasion in commercial arenas and marketing in general?…(More)”.

Navigation Apps Changed the Politics of Traffic


Essay by Laura Bliss: “There might not be much “weather” to speak of in Los Angeles, but there is traffic. It’s the de facto small talk upon arrival at meetings or cocktail parties, comparing journeys through the proverbial storm. And in certain ways, traffic does resemble the daily expressions of climate. It follows diurnal and seasonal patterns; it shapes, and is shaped by, local conditions. There are unexpected downpours: accidents, parades, sports events, concerts.

Once upon a time, if you were really savvy, you could steer around the thunderheads—that is, evade congestion almost entirely.

Now, everyone can do that, thanks to navigation apps like Waze, which was launched in 2009 by a startup based in suburban Tel Aviv with the aspiration of saving drivers five minutes on every trip by outsmarting traffic jams. Ten years later, the navigation app’s current motto is to “eliminate traffic”—to untie the knots of urban congestion once and for all. Like Google Maps, Apple Maps, Inrix, and other smartphone-based navigation tools, its routing algorithm weaves user locations with other sources of traffic data, quickly identifying the fastest routes available at any given moment.
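The essay does not spell out the mechanics, but the routing idea it describes can be sketched simply: treat the road network as a graph whose edge weights are travel times inflated by live congestion reports, then run a standard shortest-path search. The sketch below is illustrative only; the road network, congestion multipliers, and function name are hypothetical assumptions, not drawn from Waze or the essay.

    import heapq

    def fastest_route(graph, congestion, origin, destination):
        # graph: {node: [(neighbor, free_flow_minutes), ...]}
        # congestion: {(node, neighbor): multiplier}, refreshed from live reports
        # in a real system; 1.0 means free-flowing, 3.0 means three times slower.
        queue = [(0.0, origin, [origin])]
        settled = {}
        while queue:
            minutes, node, path = heapq.heappop(queue)
            if node == destination:
                return minutes, path
            if node in settled and settled[node] <= minutes:
                continue
            settled[node] = minutes
            for neighbor, free_flow in graph.get(node, []):
                slowdown = congestion.get((node, neighbor), 1.0)
                heapq.heappush(queue, (minutes + free_flow * slowdown, neighbor, path + [neighbor]))
        return float("inf"), []

    # Hypothetical three-node network: the direct road is normally faster,
    # but a reported 3x slowdown makes the detour via C the quicker route.
    roads = {"A": [("B", 10), ("C", 4)], "C": [("B", 5)], "B": []}
    live_reports = {("A", "B"): 3.0}
    print(fastest_route(roads, live_reports, "A", "B"))  # (9.0, ['A', 'C', 'B'])

The “selfish” dynamic the essay goes on to criticize is visible even in this toy version: each query optimizes a single driver’s trip, with no term for the delay that rerouted traffic imposes on the streets it floods.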

Waze often describes itself in terms of the social goods it promotes. It likes to highlight the dedication of its active participants, who pay it forward to less-informed drivers behind them, as well as its willingness to share incident reports with city governments so that, for example, traffic engineers can rejigger stop lights or crack down on double parking. “Over the last 10 years, we’ve operated from a sense of civic responsibility within our means,” wrote Waze CEO Noam Bardin in April 2018.

But Waze is a business, not a government agency. The goal is to be an indispensable service for its customers, and to profit from that. And it isn’t clear that those objectives align with a solution for urban congestion as a whole. This gets to the heart of the problem with any navigation app—or, for that matter, any traffic fix that prioritizes the needs of independent drivers over what’s best for the broader system. Managing traffic requires us to work together. Apps tap into our selfish desires….(More)”.

This essay is adapted from SOM Thinkers: The Future of Transportation, published by Metropolis Books.

Copy, Paste, Legislate


The Center for Public Integrity: “Do you know if a bill introduced in your statehouse — it might govern who can fix your shattered iPhone screen or whether you can still sue a pedophile priest years later — was actually written by your elected lawmakers? Use this new tool to find out.

Spoiler alert: The answer may well be no.

Thousands of pieces of “model legislation” are drafted each year by business organizations and special interest groups and distributed to state lawmakers for introduction. These copycat bills influence policymaking across the nation, state by state, often with little scrutiny. This news application was developed by the Center for Public Integrity, part of a year-long collaboration with USA TODAY and the Arizona Republic to bring the practice into the light….(More)”.

Open Democracy and Digital Technologies


Paper by Hélène Landemore: “…looks at the connection between democratic theory and technological constraints, and argues for renovating our paradigm of democracy to make the most of the technological opportunities offered by the digital revolution. The most attractive normative theory of democracy currently available—Habermas’ model of a two-track deliberative sphere—is, for all its merits, a self-avowed rationalization of representative democracy, a system born in the 18th century under different epistemological, conceptual, and technological constraints. In this paper I show the limits of this model and defend instead an alternative paradigm of democracy I call “open democracy,” in which digital technologies are assumed to make it possible to transcend a number of dichotomies, including that between ordinary citizens and democratic representatives.

Rather than just imagining a digitized version or extension of existing institutions and practices—representative democracy as we know it—I thus take the opportunities offered by the digital revolution (its technological “affordances,” in the jargon) to envision new democratic institutions and means of democratic empowerment, some of which are illustrated in the vignette with which this paper started. In other words, rather than start from what is—our electoral democracies—I start from what democracy could mean, if we reinvented it more or less from scratch today with the help of digital technologies.

The first section lays out the problems with and limits of our current practice and theory of democracy.


The second section traces these problems to design flaws partially induced by 18th-century conceptual, epistemological, and technological constraints.


Section three lays out an alternative theory of democracy I call “open democracy,” which avoids some of these design flaws, and introduces the institutional features of this new paradigm that are specifically enabled by digital technologies: deliberation and democratic representation….(More)”.

Taming the Beast: Harnessing Blockchains in Developing Country Governments


Paper by Raúl Zambrano: “Amid pressing demands to achieve critical sustainable development goals, governments in developing countries face the additional complex task of embracing new digital technologies such as blockchains. This paper develops a framework interlinking development, technology, and government institutions that policymakers and development practitioners could use to address such a conundrum. State capacity and democratic governance are introduced as drivers in the overall analysis. With this in hand, blockchain technology is revisited from the perspective of governments in the Global South, identifying in the process key traits and proposing a new typology. An overview of the status of blockchain deployments in the Global South follows, complemented by a closer look at country examples to distill trends, patterns and risks. The paper closes with a discussion of the findings, highlighting both challenges and opportunities for governments. It also provides basic guidance to development practitioners interested in enhancing current programming using blockchains as an enabler….(More)”

Meaningful Inefficiencies: Civic Design in an Age of Digital Expediency


Book by Eric Gordon and Gabriel Mugar: “Public trust in the institutions that mediate civic life, from governing bodies to newsrooms, is low. In facing this challenge, many organizations assume that ensuring greater efficiency will build trust. As a result, these organizations are quick to adopt new technologies to enhance what they do, whether it’s a new app or dashboard. However, efficiency, or charting a path to a goal with the least amount of friction, is not itself always built on a foundation of trust.

Meaningful Inefficiencies is about the practices undertaken by civic designers that challenge the normative applications of “smart technologies” in order to build or repair trust with publics. Based on over sixty interviews with change makers in public-serving organizations throughout the United States, as well as detailed case studies, this book provides a practical and deeply philosophical picture of civic life in transition. The designers in this book are not professional designers, but practitioners embedded within organizations who have adopted an approach to public engagement Eric Gordon and Gabriel Mugar call “meaningful inefficiencies,” or the deliberate design of less efficient over more efficient means of achieving some ends. This book illustrates how civic designers are creating meaningful inefficiencies within public-serving organizations. It also encourages a rethinking of how innovation within these organizations is understood, applied, and sought after. Different from market innovation, civic innovation is not just about invention and novelty; it is concerned with building communities around novelty, and cultivating deep and persistent trust.

At its core, Meaningful Inefficiencies underlines that good civic innovation will never just involve one single public good, but must instead negotiate a plurality of publics. In doing so, it creates the conditions for those publics to play, resulting in people truly caring for the world. Meaningful Inefficiencies thus presents an emergent and vitally needed approach to creating civic life at a moment when smart and efficient are the dominant forces in social and organizational change….(More)”.

Dollars for Profs: How to Investigate Professors’ Conflicts of Interest


ProPublica: “When professors moonlight, the income may influence their research and policy views. Although most universities track this outside work, the records have rarely been accessible to the public, potentially obscuring conflicts of interest.

That changed last month when ProPublica launched Dollars for Profs, an interactive database that, for the first time ever, allows you to look up more than 37,000 faculty and staff disclosures from about 20 public universities and the National Institutes of Health.

We believe there are hundreds of stories in this database, and we hope to tell as many as possible. Already, we’ve revealed how the University of California’s weak monitoring of conflicts has allowed faculty members to underreport their outside income, potentially depriving the university of millions of dollars. In addition, using a database of NIH records, we found that health researchers have acknowledged a total of at least $188 million in financial conflicts of interest since 2012.

We hope journalists all over the country will look into the database and find more. Here are tips for local education reporters, college newspaper journalists and anyone else who wants to hold academia accountable on how to dig into the disclosures….(More)”.

The Case for an Institutionally Owned Knowledge Infrastructure


Article by James W. Weis, Amy Brand and Joi Ito: “Science and technology are propelled forward by the sharing of knowledge. Yet despite their vital importance in today’s innovation-driven economy, our knowledge infrastructures have failed to scale with today’s rapid pace of research and discovery.

For example, academic journals, the dominant dissemination platforms of scientific knowledge, have not been able to take advantage of the linking, transparency, dynamic communication and decentralized authority and review that the internet enables. Many other knowledge-driven sectors, from journalism to law, suffer from a similar bottleneck — caused not by a lack of technological capacity, but rather by an inability to design and implement efficient, open and trustworthy mechanisms of information dissemination.

Fortunately, growing dissatisfaction with current knowledge-sharing infrastructures has led to a more nuanced understanding of the requisite features that such platforms must provide. With such an understanding, higher education institutions around the world can begin to recapture the control and increase the utility of the knowledge they produce.

When the World Wide Web emerged in the 1990s, an era of robust scholarship based on open sharing of scientific advancements appeared inevitable. The internet — initially a research network — promised a democratization of science, universal access to the academic literature and a new form of open publishing that supported the discovery and reuse of knowledge artifacts on a global scale. Unfortunately, however, that promise was never realized. Universities, researchers and funding agencies, for the most part, failed to organize and secure the investment needed to build scalable knowledge infrastructures, and publishing corporations moved in to solidify their position as the purveyors of knowledge.

In the subsequent decade, such publishers have consolidated their hold. By controlling the most prestigious journals, they have been able to charge for access — extracting billions of dollars in subscription fees while barring much of the world from the academic literature. Indeed, some of the world’s wealthiest academic institutions are no longer able or willing to pay the subscription costs required.

Further, by controlling many of the most prestigious journals, publishers have also been able to position themselves between the creation and consumption of research, and so wield enormous power over peer review and metrics of scientific impact. Thus, they are able to significantly influence academic reputation, hirings, promotions, career progressions and, ultimately, the direction of science itself.

But signs suggest that the bright future envisioned in the early days of the internet is still within reach. Increasing awareness of, and dissatisfaction with, the many bottlenecks that the commercial monopoly on research information has imposed are stimulating new strategies for developing the future’s knowledge infrastructures. One of the most promising is the shift toward infrastructures created and supported by academic institutions, the original creators of the information being shared, and nonprofit consortia like the Collaborative Knowledge Foundation and the Center for Open Science.

Those infrastructures should fully exploit the technological capabilities of the World Wide Web to accelerate discovery, encourage more research support, and better structure and transmit knowledge. By aligning academic incentives with socially beneficial outcomes, such a system could enrich the public while also amplifying the technological and societal impact of investment in research and innovation.

We’ve outlined below the three areas in which a shift to academically owned platforms would yield the highest impact.

  • Truly Open Access
  • Meaningful Impact Metrics
  • Trustworthy Peer Review….(More)”.