No One Owns Data


Paper by Lothar Determann: “Businesses, policy makers, and scholars are calling for property rights in data. They currently focus particularly on the vast amounts of data generated by connected cars, industrial machines, artificial intelligence, toys and other devices on the Internet of Things (IoT). This data is personal to numerous parties who are associated with a connected device, for example, the driver of a connected car, its owner and passengers, as well as other traffic participants. Manufacturers, dealers, independent providers of auto parts and services, insurance companies, law enforcement agencies and many others are also interested in this data. Various parties are actively staking their claims to data on the Internet of Things, as they are mining data, the fuel of the digital economy.

Stakeholders in digital markets often frame claims, negotiations and controversies regarding data access as one of ownership. Businesses regularly assert and demand that they own data. Individual data subjects also assume that they own data about themselves. Policy makers and scholars focus on how to redistribute ownership rights to data. Yet, upon closer review, it is very questionable whether data is—or should be—subject to any property rights. This article unambiguously answers the question in the negative, both with respect to existing law and future lawmaking, in the United States as in the European Union, jurisdictions with notably divergent attitudes to privacy, property and individual freedoms….

The article begins with a brief review of the current landscape of the Internet of Things, noting the explosive growth of data pools generated by connected devices, artificial intelligence, big data analytics tools and other information technologies. Part 1 lays the foundation for examining concrete current legal and policy challenges in the remainder of the article. Part 2 supplies conceptual differentiation and definitions with respect to “data” and “information” as the subject of rights and interests. Distinctions and definitional clarity serve as the basis for examining the purposes and reach of existing property laws in Part 3, including real property, personal property and intellectual property laws. Part 4 analyzes the effect of data-related laws that do not grant property rights. Part 5 examines how the interests of the various stakeholders are protected or impaired by the current framework of data-related laws to identify potential gaps that could warrant additional property rights. Part 6 examines policy considerations for and against property rights in data. Part 7 concludes that no one owns data and no one should own data….(More)”.

Do Academic Journals Favor Researchers from Their Own Institutions?


Yaniv Reingewertz and Carmela Lutmar at Harvard Business Review: “Are academic journals impartial? While many would suggest that academic journals work for the advancement of knowledge and science, we show this is not always the case. In a recent study, we find that two international relations (IR) journals favor articles written by authors who share the journal’s institutional affiliation. We term this phenomenon “academic in-group bias.”

In-group bias is a well-known phenomenon that is widely documented in the psychological literature. People tend to favor their group, whether it is their close family, their hometown, their ethnic group, or any other group affiliation. Before our study, the evidence regarding academic in-group bias was scarce, with only one study finding academic in-group bias in law journals. Studies from economics found mixed results. Our paper provides evidence of academic in-group bias in IR journals, showing that this phenomenon is not specific to law. We also provide tentative evidence which could potentially resolve the conflict in economics, suggesting that these journals might also exhibit in-group bias. In short, we show that academic in-group bias is general in nature, even if not necessarily large in scope….(More)”.

Online Political Microtargeting: Promises and Threats for Democracy


Frederik Zuiderveen Borgesius et al. in Utrecht Law Review: “Online political microtargeting involves monitoring people’s online behaviour, and using the collected data, sometimes enriched with other data, to show people targeted political advertisements. Online political microtargeting is widely used in the US; Europe may not be far behind.

This paper maps microtargeting’s promises and threats to democracy. For example, microtargeting promises to optimise the match between the electorate’s concerns and political campaigns, and to boost campaign engagement and political participation. But online microtargeting could also threaten democracy. For instance, a political party could, misleadingly, present itself as a different one-issue party to different individuals. And data collection for microtargeting raises privacy concerns. We sketch possibilities for policymakers if they seek to regulate online political microtargeting. We discuss which measures would be possible, while complying with the right to freedom of expression under the European Convention on Human Rights….(More)”.

Data journalism and the ethics of publishing Twitter data


Matthew L. Williams at Data Driven Journalism: “Collecting and publishing data collected from social media sites such as Twitter are everyday practices for the data journalist. Recent findings from Cardiff University’s Social Data Science Lab question the practice of publishing Twitter content without seeking some form of informed consent from users beforehand. Researchers found that tweets collected around certain topics, such as those related to terrorism, political votes, changes in the law and health problems, create datasets that might contain sensitive content, such as extreme political opinion, grossly offensive comments, overly personal revelations and threats to life (both to oneself and to others). Handling these data in the process of analysis (such as classifying content as hateful and potentially illegal) and reporting has brought the ethics of using social media in social research and journalism into sharp focus.

Ethics is an issue that is becoming increasingly salient in research and journalism using social media data. The digital revolution has outpaced parallel developments in research governance and agreed good practice. Codes of ethical conduct that were written in the mid-twentieth century are being relied upon to guide the collection, analysis and representation of digital data in the twenty-first century. Social media is particularly ethically challenging because of the open availability of the data (particularly from Twitter). Many platforms’ terms of service specifically state that users’ public data will be made available to third parties, and by accepting these terms users legally consent to this. However, researchers and data journalists must interpret and engage with these commercially motivated terms of service through a more reflexive lens, which implies a context-sensitive approach rather than a focus on the legally permissible uses of these data.

Social media researchers and data journalists have experimented with data from a range of sources, including Facebook, YouTube, Flickr, Tumblr and Twitter to name a few. Twitter is by far the most studied of all these networks. This is because Twitter differs from other networks, such as Facebook, that are organised around groups of ‘friends’, in that it is more ‘open’ and the data (in part) are freely available to researchers. This makes Twitter a more public digital space that promotes the free exchange of opinions and ideas. Twitter has become the primary space for online citizens to publicly express their reaction to events of national significance, and also the primary source of data for social science research into digital publics.

The Twitter streaming API provides three levels of data access: the free random 1% that provides ~5M tweets daily and the random 10% and 100% (chargeable or free to academic researchers upon request). Datasets on social interactions of this scale, speed and ease of access have been hitherto unrealisable in the social sciences and journalism, and have led to a flood of journal articles and news pieces, many of which include tweets with full text content and author identity without informed consent. This is presumably because of Twitter’s ‘open’ nature, which leads to the assumption that ‘these are public data’ and that using them does not require the rigor and scrutiny of ethical oversight. Even when these data are scrutinised, journalists don’t need to be convinced by the ‘public data’ argument, due to the lack of a framework to evaluate the potential harms to users. The Social Data Science Lab takes a more ethically reflexive approach to the use of social media data in social research, and carefully considers users’ perceptions, online context and the role of algorithms in estimating potentially sensitive user characteristics.
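As a concrete, deliberately cautious illustration of what this access looks like in practice, the sketch below consumes the free ~1% sample stream and replaces author identity with a salted hash before anything is stored. It assumes the pre-v4 tweepy StreamListener interface and uses placeholder credentials; it is not the Lab’s actual pipeline, just a minimal, more careful handling pattern.

```python
import hashlib
import tweepy  # assumes tweepy < 4.0, where StreamListener is available

# Hypothetical placeholder credentials for a registered Twitter app.
CONSUMER_KEY, CONSUMER_SECRET = "your-key", "your-secret"
ACCESS_TOKEN, ACCESS_SECRET = "your-token", "your-token-secret"

SALT = "project-specific-secret"  # keeps hashes unlinkable across projects


class AnonymisingSampleListener(tweepy.StreamListener):
    """Receives tweets from the ~1% sample stream and pseudonymises
    the author before the record is kept or displayed."""

    def on_status(self, status):
        record = {
            # One-way salted hash instead of the author's id or handle.
            "author": hashlib.sha256(
                (SALT + status.user.id_str).encode("utf-8")
            ).hexdigest(),
            "created_at": status.created_at.isoformat(),
            "text": status.text,
        }
        print(record)  # in practice: write to access-controlled storage

    def on_error(self, status_code):
        return False  # disconnect on errors such as rate limiting (420)


auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
stream = tweepy.Stream(auth=auth, listener=AnonymisingSampleListener())
stream.sample()  # the free random ~1% of public tweets
```

Hashing identifiers is only pseudonymisation: verbatim tweet text can still be searched back to its author, so the survey findings below argue for consent, paraphrasing or aggregation before publication, not hashing alone.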

A recent Lab survey conducted into users’ perceptions of the use of their social media posts found the following:

  • 94% were aware that social media companies had Terms of Service
  • 65% had read the Terms of Service in whole or in part
  • 76% knew that when accepting Terms of Service they were giving permission for some of their information to be accessed by third parties
  • 80% agreed that if their social media information is used in a publication they would expect to be asked for consent
  • 90% agreed that if their tweets were used without their consent they should be anonymized…(More)”.

Just and Unjust Leaks: When to Spill Secrets


Michael Walzer at Foreign Affairs: “All governments, all political parties, and all politicians keep secrets and tell lies. Some lie more than others, and those differences are important, but the practice is general. And some lies and secrets may be justified, whereas others may not. Citizens, therefore, need to know the difference between just and unjust secrets and between just and unjust deception before they can decide when it may be justifiable for someone to reveal the secrets or expose the lies—when leaking confidential information, releasing classified documents, or blowing the whistle on misconduct may be in the public interest or, better, in the interest of democratic government.

Revealing official secrets and lies involves a form of moral risk-taking: whistleblowers may act out of a sense of duty or conscience, but the morality of their actions can be judged only by their fellow citizens, and only after the fact. This is often a difficult judgment to make—and has probably become more difficult in the Trump era.

LIES AND DAMNED LIES

A quick word about language: “leaker” and “whistleblower” are overlapping terms, but they aren’t synonyms. A leaker, in this context, anonymously reveals information that might embarrass officials or open up the government’s internal workings to unwanted public scrutiny. In Washington, good reporters cultivate sources inside every presidential administration and every Congress and hope for leaks. A whistleblower reveals what she believes to be immoral or illegal official conduct to her bureaucratic superiors or to the public. Certain sorts of whistle-blowing, relating chiefly to mismanagement and corruption, are protected by law; leakers are not protected, nor are whistleblowers who reveal state secrets…(More)”.

Free Speech in the Filter Age


Alexandra Borchardt at Project Syndicate: “In a democracy, the rights of the many cannot come at the expense of the rights of the few. In the age of algorithms, government must, more than ever, ensure the protection of vulnerable voices, even erring on victims’ side at times.

Germany’s Network Enforcement Act – under which social-media platforms like Facebook and YouTube can be fined up to €50 million ($63 million) if they fail to remove “obviously illegal” posts within 24 hours of receiving a notification – has been controversial from the start. After it entered fully into effect in January, there was a tremendous outcry, with critics from all over the political map arguing that it was an enticement to censorship. Government was relinquishing its powers to private interests, they protested.

So, is this the beginning of the end of free speech in Germany?

Of course not. To be sure, Germany’s Netzwerkdurchsetzungsgesetz (or NetzDG) is the strictest regulation of its kind in a Europe that is growing increasingly annoyed with America’s powerful social-media companies. And critics do have some valid points about the law’s weaknesses. But the possibilities for free expression will remain abundant, even if some posts are deleted mistakenly.

The truth is that the law sends an important message: democracies won’t stay silent while their citizens are exposed to hateful and violent speech and images – content that, as we know, can spur real-life hate and violence. Refusing to protect the public, especially the most vulnerable, from dangerous content in the name of “free speech” actually serves the interests of those who are already privileged, beginning with the powerful companies that drive the dissemination of information.

Speech has always been filtered. In democratic societies, everyone has the right to express themselves within the boundaries of the law, but no one has ever been guaranteed an audience. To have an impact, citizens have always needed to appeal to – or bypass – the “gatekeepers” who decide which causes and ideas are relevant and worth amplifying, whether through the media, political institutions, or protest.

The same is true today, except that the gatekeepers are the algorithms that automatically filter and rank all contributions. Of course, algorithms can be programmed any way companies like, meaning that they may place a premium on qualities shared by professional journalists: credibility, intelligence, and coherence.

But today’s social-media platforms are far more likely to prioritize potential for advertising revenue above all else. So the noisiest are often rewarded with a megaphone, while less polarizing, less privileged voices are drowned out, even if they are providing the smart and nuanced perspectives that can truly enrich public discussions….(More)”.

An AI That Reads Privacy Policies So That You Don’t Have To


Andy Greenberg at Wired: “…Today, researchers at Switzerland’s Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin and the University of Michigan announced the release of Polisis—short for “privacy policy analysis”—a new website and browser extension that uses their machine-learning-trained app to automatically read and make sense of any online service’s privacy policy, so you don’t have to.

In about 30 seconds, Polisis can read a privacy policy it’s never seen before and extract a readable summary, displayed in a graphic flow chart, of what kind of data a service collects, where that data could be sent, and whether a user can opt out of that collection or sharing. Polisis’ creators have also built a chat interface they call Pribot that’s designed to answer questions about any privacy policy, intended as a sort of privacy-focused paralegal advisor. Together, the researchers hope those tools can unlock the secrets of how tech firms use your data that have long been hidden in plain sight….
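By way of a rough, hypothetical illustration only (the category labels, example segments and model below are assumptions for exposition, not Polisis’ actual architecture or training data), the core task can be framed as classifying privacy-policy segments into data-practice categories:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, hand-written example segments with made-up labels.
SEGMENTS = [
    ("We collect your email address and device identifiers when you sign up.",
     "first-party-collection"),
    ("Your location history is stored to personalise recommendations.",
     "first-party-collection"),
    ("We may share aggregated usage statistics with advertising partners.",
     "third-party-sharing"),
    ("Data may be disclosed to analytics providers and affiliates.",
     "third-party-sharing"),
    ("You can opt out of marketing emails at any time in your settings.",
     "user-choice"),
    ("Users may request deletion of their account data via the dashboard.",
     "user-choice"),
]
texts, labels = zip(*SEGMENTS)

# Baseline pipeline: TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

new_segment = "We pass your contact details to selected business partners."
print(model.predict([new_segment])[0])  # with real training data: third-party-sharing
```

A production system such as Polisis goes well beyond this baseline, segmenting whole policies, working at a much finer granularity, and layering the flow-chart summary and Pribot chat interface described above on top of the classifier’s output, but segment-level classification is the common core.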

Polisis isn’t actually the first attempt to use machine learning to pull human-readable information out of privacy policies. Both Carnegie Mellon University and Columbia have made their own attempts at similar projects in recent years, points out NYU Law Professor Florencia Marotta-Wurgler, who has focused her own research on user interactions with terms of service contracts online. (One of her own studies showed that only .07 percent of users actually click on a terms of service link before clicking “agree.”) The Usable Privacy Policy Project, a collaboration that includes both Columbia and CMU, released its own automated tool to annotate privacy policies just last month. But Marotta-Wurgler notes that Polisis’ visual and chat-bot interfaces haven’t been tried before, and says the latest project is also more detailed in how it defines different kinds of data. “The granularity is really nice,” Marotta-Wurgler says. “It’s a way of communicating this information that’s more interactive.”…(More)”.

The People’s Right to Know and State Secrecy


Dorota Mokrosinska at the Canadian Journal of Law and Jurisprudence: “Among the classic arguments which advocates of open government use to fight government secrecy is the appeal to a “people’s right to know.” I argue that the employment of this idea as a conceptual weapon against state secrecy misfires. I consider two prominent arguments commonly invoked to support the people’s right to know government-held information: an appeal to human rights and an appeal to democratic citizenship. While I concede that both arguments ground the people’s right to access government information, I argue that they also limit this right and, in limiting it, they establish a domain of state secrecy. The argument developed in the essay provides a novel interpretation of a claim made by Dennis Thompson, who, in his seminal work on the place of secrecy in democratic governance, argued that some of the best reasons for secrecy are the same reasons that argue for openness and against secrecy….(More)”.

Is full transparency good for democracy?


Austin Sarat at The Conversation: “Public knowledge about what government officials do is essential in a representative democracy. Without such knowledge, citizens cannot make informed choices about who they want to represent them or hold public officials accountable.

Political theorists have traced arguments about publicity and democracy back to ancient Greece and Rome. Those arguments subsequently flowered in the middle of the 19th century.

For example, writing about British parliamentary democracy, the famous philosopher Jeremy Bentham urged that legislative deliberation be carried out in public. Public deliberation, in his view, would be an important factor in “constraining the members of the assembly to perform their duty” and in securing “the confidence of the people.”

Moreover, Bentham noted that “suspicion always attaches to mystery.”

Even so, Bentham did not think the public had an unqualified “right to know.” As he put it, “It is not proper to make the law of publicity absolute.” Bentham acknowledged that publicity “ought to be suspended” when informing the public would “favor the projects of an enemy.”

Well into the 20th century, the U.S. and other democracies existed with far less public transparency than Bentham advocated.

Push for transparency

The authors of a 2016 U.S. Congressional report on access to government information observed that, “Throughout the first 150 years of the federal government, access to government information does not appear to have been a major issue for the federal branches or the public.” In short, the public generally did not demand more information than the government provided….

For at least the last 50 years, American legal and political institutions have tried to find a balance between publicity and secrecy. The courts have identified limits to claims of executive privilege like those made by President Nixon during Watergate. Watergate also led Congress in 1978 to pass the Foreign Intelligence Surveillance Act, or FISA. That act created a special court, whose procedures were highlighted in the Nunes memo. The FISA court authorizes the collection of intelligence information on foreign powers and “agents of foreign powers.”

Finding the proper balance between making information public in order to foster accountability and the government’s concern for national security is not easy. Just look to the heated debates that accompanied passage of the Patriot Act and what WikiLeaks did in 2010 when it published more than 300,000 classified U.S. Army field reports.

Americans can make little progress in resolving such debates until they can get beyond the cynical, partisan use of slogans like “the public’s right to know” and “full transparency” by President Trump’s loyalists. Now more than ever, Americans must understand how and when transparency contributes to the strength and vitality of our democratic institutions and how and when the invocation of the public’s right to know is being used to erode them….(More)”.

Behavioral Analysis of International Law: On Lawmaking and Nudging


Article by Doron Teichman and Eyal Zamir: “… examines the application of insights from behavioral economics to the area of international law. It reviews the unique challenges facing such application and demonstrates the contribution of behavioral findings to the understanding of lawmaking, the use of nudges, and states’ practices in the international arena.

In the sphere of lawmaking, the article first highlights the contribution of experimental game theory to understanding international customary law. It then analyzes the psychological mechanisms underpinning the advancement of treaty law through the use of deadlines, grandfather provisions, deferred implementation, and temporary arrangements. More generally, it provides insight into the processes through which international soft law evolves into hard law.

The article then argues that in the absence of a central legislative body or strong enforcement mechanisms, nudges (that is, low-cost, choice-preserving, behaviorally informed regulatory tools) can play a particularly important role in influencing the behavior of states and other entities. The article describes the current use of nudges, such as opt-in and opt-out arrangements in multilateral treaties, goal setting, and international rankings—and calls for further employment of such means.

Finally, the article suggests that the extent to which states comply with international norms may be explained by phenomena such as loss aversion and the identifiability effect; and that further insight into states’ (non)compliance may be gained from the emerging research in behavioral ethics…(More)”