An AI That Reads Privacy Policies So That You Don’t Have To


Andy Greenberg at Wired: “…Today, researchers at Switzerland’s Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin and the University of Michigan announced the release of Polisis—short for “privacy policy analysis”—a new website and browser extension that uses their machine-learning-trained app to automatically read and make sense of any online service’s privacy policy, so you don’t have to.

In about 30 seconds, Polisis can read a privacy policy it’s never seen before and extract a readable summary, displayed in a graphic flow chart, of what kind of data a service collects, where that data could be sent, and whether a user can opt out of that collection or sharing. Polisis’ creators have also built a chat interface they call Pribot that’s designed to answer questions about any privacy policy, intended as a sort of privacy-focused paralegal advisor. Together, the researchers hope those tools can unlock the secrets of how tech firms use your data that have long been hidden in plain sight….

Polisis isn’t actually the first attempt to use machine learning to pull human-readable information out of privacy policies. Both Carnegie Mellon University and Columbia have made their own attempts at similar projects in recent years, points out NYU Law Professor Florencia Marotta-Wurgler, who has focused her own research on user interactions with terms of service contracts online. (One of her own studies showed that only .07 percent of users actually click on a terms of service link before clicking “agree.”) The Usable Privacy Policy Project, a collaboration that includes both Columbia and CMU, released its own automated tool to annotate privacy policies just last month. But Marotta-Wurgler notes that Polisis’ visual and chat-bot interfaces haven’t been tried before, and says the latest project is also more detailed in how it defines different kinds of data. “The granularity is really nice,” Marotta-Wurgler says. “It’s a way of communicating this information that’s more interactive.”…(More)”.
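At its core, a tool like Polisis is doing segment classification: it breaks a policy into passages and labels each one with the data practice it describes, then builds the flow chart and chatbot answers on top of those labels. The researchers’ own system relies on machine-learning models trained on a large corpus of annotated privacy policies; the snippet below is only a minimal, hypothetical sketch of that general idea in Python with scikit-learn. The category names and training segments are invented for illustration and are not Polisis’ actual models or taxonomy.

```python
# Toy illustration (not Polisis itself): label privacy-policy segments with a
# coarse data-practice category, then group the segments into a rough summary.
# All training snippets and category names are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-labeled example segments (a real system would train on
# thousands of annotated passages from actual privacy policies).
TRAIN_SEGMENTS = [
    ("We collect your email address and phone number when you register.", "first-party-collection"),
    ("We may share your information with advertising partners.", "third-party-sharing"),
    ("You can opt out of marketing emails at any time.", "user-choice"),
    ("Location data is gathered to personalize your experience.", "first-party-collection"),
    ("Our partners may receive aggregated usage statistics.", "third-party-sharing"),
    ("To delete your account, visit the settings page.", "user-choice"),
]

texts, labels = zip(*TRAIN_SEGMENTS)
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

def summarize(policy_text: str) -> dict:
    """Split a policy into sentences, label each one, and group them by category."""
    segments = [s.strip() for s in policy_text.split(".") if s.strip()]
    summary = {}
    for seg, category in zip(segments, model.predict(segments)):
        summary.setdefault(category, []).append(seg)
    return summary

if __name__ == "__main__":
    sample = ("We collect your contact details when you sign up. "
              "Your data may be shared with analytics providers. "
              "You may opt out of data sharing in your account settings.")
    for category, segs in summarize(sample).items():
        print(category)
        for s in segs:
            print("  -", s)
```

A production system would of course need far richer training data, multi-label outputs (a single passage often describes several practices), and evaluation against human annotations before its summaries could be trusted.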

The People’s Right to Know and State Secrecy


Dorota Mokrosinska at the Canadian Journal of Law and Jurisprudence: “Among the classic arguments which advocates of open government use to fight government secrecy is the appeal to a “people’s right to know.” I argue that the employment of this idea as a conceptual weapon against state secrecy misfires. I consider two prominent arguments commonly invoked to support the people’s right to know government-held information: an appeal to human rights and an appeal to democratic citizenship. While I concede that both arguments ground the people’s right to access government information, I argue that they also limit this right and in limiting it, they establish a domain of state secrecy. The argument developed in the essay provides a novel interpretation of a claim by Dennis Thompson, who, in his seminal work on the place of secrecy in democratic governance, argued that some of the best reasons for secrecy are the same reasons that argue for openness and against secrecy….(More)”.

Is full transparency good for democracy?


Austin Sarat at The Conversation: “Public knowledge about what government officials do is essential in a representative democracy. Without such knowledge, citizens cannot make informed choices about who they want to represent them or hold public officials accountable.

Political theorists have traced arguments about publicity and democracy back to ancient Greece and Rome. Those arguments subsequently flowered in the middle of the 19th century.

For example, writing about British parliamentary democracy, the famous philosopher Jeremy Bentham urged that legislative deliberation be carried out in public. Public deliberation, in his view, would be an important factor in “constraining the members of the assembly to perform their duty” and in securing “the confidence of the people.”

Moreover, Bentham noted that “suspicion always attaches to mystery.”

Even so, Bentham did not think the public had an unqualified “right to know.” As he put it, “It is not proper to make the law of publicity absolute.” Bentham acknowledged that publicity “ought to be suspended” when informing the public would “favor the projects of an enemy.”

Well into the 20th century, the U.S. and other democracies existed with far less public transparency than Bentham advocated.

Push for transparency

The authors of a 2016 U.S. Congressional report on access to government information observed that “Throughout the first 150 years of the federal government, access to government information does not appear to have been a major issue for the federal branches or the public.” In short, the public generally did not demand more information than the government provided….

For at least the last 50 years, American legal and political institutions have tried to find a balance between publicity and secrecy. The courts have identified limits to claims of executive privilege like those made by President Nixon during Watergate. Watergate also led Congress in 1978 to pass the Foreign Intelligence Surveillance Act, or FISA. That act created a special court, whose procedures were highlighted in the Nunes memo. The FISA court authorizes the collection of intelligence information passing between foreign powers and “agents of foreign powers.”

Finding the proper balance between making information public in order to foster accountability and the government’s concern for national security is not easy. Just look to the heated debates that accompanied passage of the Patriot Act and what WikiLeaks did in 2010 when it published more than 300,000 classified U.S. Army field reports.

Americans can make little progress in resolving such debates until they can get beyond the cynical, partisan use of slogans like “the public’s right to know” and “full transparency” by President Trump’s loyalists. Now more than ever, Americans must understand how and when transparency contributes to the strength and vitality of our democratic institutions and how and when the invocation of the public’s right to know is being used to erode them….(More)”.

Behavioral Analysis of International Law: On Lawmaking and Nudging


Article by Doron Teichman and Eyal Zamir: “…[The article] examines the application of insights from behavioral economics to the area of international law. It reviews the unique challenges facing such application and demonstrates the contribution of behavioral findings to the understanding of lawmaking, the use of nudges, and states’ practices in the international arena.

In the sphere of lawmaking, the article first highlights the contribution of experimental game theory to understanding international customary law. It then analyzes the psychological mechanisms underpinning the advancement of treaty law through the use of deadlines, grandfather provisions, deferred implementation, and temporary arrangements. More generally, it provides insight into the processes through which international soft law evolves into hard law.

The article then argues that in the absence of a central legislative body or strong enforcement mechanisms, nudges (that is, low-cost, choice-preserving, behaviorally informed regulatory tools) can play a particularly important role in influencing the behavior of states and other entities. The article describes the current use of nudges, such as opt-in and opt-out arrangements in multilateral treaties, goal setting, and international rankings, and calls for further employment of such means.

Finally, the article suggests that the extent to which states comply with international norms may be explained by phenomena such as loss aversion and the identifiability effect; and that further insight into states’ (non)compliance may be gained from the emerging research in behavioral ethics…(More)”

Upholding Democracy Amid the Challenges of New Technology


Paper by Eyal Benvenisti at the European Journal of International Law: “The law on global governance that emerged after the Second World War was grounded in irrefutable trust in international organizations and an assumption that their subjection to legal discipline and judicial review would be unnecessary and, in fact, detrimental to their success. The law that evolved systematically insulated international organizations from internal and external scrutiny and absolved them of any inherent legal obligations – and, to a degree, continues to do so.

Indeed, it was only well after the end of the Cold War that mistrust in global governance began to trickle through into the legal discourse and the realization gradually took hold that the operation of international organizations needed to be subject to the disciplining power of the law. Since the mid-1990s, scholars have sought to identify the conditions under which trust in global bodies can be regained, mainly by borrowing and adapting domestic public law precepts that emphasize accountability through communications with those affected.

Today, although a ‘culture of accountability’ may have taken root, its legal tools are still shaping up and are often contested. More importantly, these communicative tools are ill-equipped to address the new modalities of governance that are based on decision-making by machines using raw data (rather than two-way exchange with stakeholders) as their input.

The new information and communication technologies challenge the foundational premise of the accountability school – that ‘the more communication, the better’ – as voters-turned-users obtain their information from increasingly fragmented and privatized marketplaces of ideas that are manipulated for economic and political gain.

In this article, I describe and analyse how the law has evolved to acknowledge the need for accountability, how it has designed norms for this purpose and continues in this endeavour – yet how the challenges it faces today are leaving its most fundamental assumptions open to question. I argue that, given the growing influence of public and private global governance bodies on our daily lives and the shape of our political communities, the task of the law of global governance is no longer limited to ensuring the accountability of global bodies, but is also to protect human dignity and the very viability of the democratic state….(More)”.

Forcing People to Choose is Paternalistic


Cass R. Sunstein in the Missouri Law Review’s special issue on evaluating nudging: “It can be paternalistic to force people to choose. Often people do not wish to choose, but both private and public institutions ask or force them to do so, thus overriding their wishes. As a result, people’s autonomy may be badly compromised and their welfare may be greatly reduced. These points have implications for a range of issues in law and policy, suggesting that those who favor active choosing, and insist on it, may well be overriding people’s preferences and values, and thus running afoul of John Stuart Mill’s Harm Principle (for better or for worse). People have limited mental bandwidth, and forcing choices can impose a hedonic or cognitive tax. Sometimes that tax is high….(More)”.

Congress Is Broken. CrowdLaw Could Help Fix It.


Beth Noveck in Forbes: “The way Congress makes law is simply no longer viable. In David Schoenbrod’s recent book DC Confidential, he outlines “five tricks” politicians use to take credit in front of television cameras in order to further political party agendas while passing the blame and the buck to future generations for bad legislation. Although Congress makes the laws that govern all Americans, people feel disenfranchised. One study concludes that “the preferences of the average American appear to have only a minuscule, near-zero, statistically non-significant impact upon public policy.” But technology offers the promise of improving both the quality and accountability of lawmaking by opening up the process to more, and more diverse, expertise and input from the public at every stage of the legislative process. We call such open and participatory lawmaking “CrowdLaw.”

Moving Beyond the Ballot Box

Around the world, there are already over two dozen examples of local legislatures and national parliaments turning to the internet to improve the legitimacy and effectiveness of the laws they make; we need to do the same here if we are to begin to fix congressional dysfunction.

For example, Finland’s Citizens’ Initiative Act at the national level, like Madrid’s Decide initiative at the local level, allows any member of the public with the requisite signatures to propose new legislation, meaning that not only interest groups and politicians get to set the agenda for lawmaking.

In France, the Parlement & Citoyens platform allows the public to respond to a problem posed by a representative by contributing information about both causes and solutions. Relevant citizen input is then synthesized, debated, and incorporated into the resulting draft legislation. This brings greater empiricism into the legislative process through public contribution of expertise….(More)”.

They Are Watching You—and Everything Else on the Planet


Cover article by Robert Draper for Special Issue of the National Geographic: “Technology and our increasing demand for security have put us all under surveillance. Is privacy becoming just a memory?…

In 1949, amid the specter of European authoritarianism, the British novelist George Orwell published his dystopian masterpiece 1984, with its grim admonition: “Big Brother is watching you.” As unsettling as this notion may have been, “watching” was a quaintly circumscribed undertaking back then. That very year, 1949, an American company released the first commercially available CCTV system. Two years later, in 1951, Kodak introduced its Brownie portable movie camera to an awestruck public.

Today more than 2.5 trillion images are shared or stored on the Internet annually—to say nothing of the billions more photographs and videos people keep to themselves. By 2020, one telecommunications company estimates, 6.1 billion people will have phones with picture-taking capabilities. Meanwhile, in a single year an estimated 106 million new surveillance cameras are sold. More than three million ATMs around the planet stare back at their customers. Tens of thousands of cameras known as automatic number plate recognition devices, or ANPRs, hover over roadways—to catch speeding motorists or parking violators but also, in the case of the United Kingdom, to track the comings and goings of suspected criminals. The untallied but growing number of people wearing body cameras now includes not just police but also hospital workers and others who aren’t law enforcement officers. Proliferating as well are personal monitoring devices—dash cams, cyclist helmet cameras to record collisions, doorbells equipped with lenses to catch package thieves—that are fast becoming a part of many a city dweller’s everyday arsenal. Even less quantifiable, but far more vexing, are the billions of images of unsuspecting citizens captured by facial-recognition technology and stored in law enforcement and private-sector databases over which our control is practically nonexistent.

Those are merely the “watching” devices that we’re capable of seeing. Presently the skies are cluttered with drones—2.5 million of which were purchased in 2016 by American hobbyists and businesses. That figure doesn’t include the fleet of unmanned aerial vehicles used by the U.S. government not only to bomb terrorists in Yemen but also to help stop illegal immigrants entering from Mexico, monitor hurricane flooding in Texas, and catch cattle thieves in North Dakota. Nor does it include the many thousands of airborne spying devices employed by other countries—among them Russia, China, Iran, and North Korea.

We’re being watched from the heavens as well. More than 1,700 satellites monitor our planet. From a distance of about 300 miles, some of them can discern a herd of buffalo or the stages of a forest fire. From outer space, a camera clicks and a detailed image of the block where we work can be acquired by a total stranger….

This is—to lift the title from another British futurist, Aldous Huxley—our brave new world. That we can see it coming is cold comfort since, as Carnegie Mellon University professor of information technology Alessandro Acquisti says, “in the cat-and-mouse game of privacy protection, the data subject is always the weaker side of the game.” Simply submitting to the game is a dispiriting proposition. But to actively seek to protect one’s privacy can be even more demoralizing. University of Texas American studies professor Randolph Lewis writes in his new book, Under Surveillance: Being Watched in Modern America, “Surveillance is often exhausting to those who really feel its undertow: it overwhelms with its constant badgering, its omnipresent mysteries, its endless tabulations of movements, purchases, potentialities.”

The desire for privacy, Acquisti says, “is a universal trait among humans, across cultures and across time. You find evidence of it in ancient Rome, ancient Greece, in the Bible, in the Quran. What’s worrisome is that if all of us at an individual level suffer from the loss of privacy, society as a whole may realize its value only after we’ve lost it for good.”…(More)”.

‘Politics done like science’: Critical perspectives on psychological governance and the experimental state


Paper: “There has been a growing academic recognition of the increasing significance of psychologically- and behaviourally-informed modes of governance in recent years in a variety of different states. We contend that this academic research has neglected one important theme, namely the growing use of experiments as a way of developing and testing novel policies. Drawing on extensive qualitative and documentary research, this paper develops critical perspectives on the impacts of the psychological sciences on public policy, and considers more broadly the changing experimental form of modern states. The tendency for emerging forms of experimental governance to be predicated on very narrow, socially disempowering visions of experimental knowledge production is critiqued. We delineate how psychological governance and emerging forms of experimental subjectivity have the potential to enable more empowering and progressive state forms and subjectivities to emerge through more open and collective forms of experimentation…(More)”.

The World’s Biggest Biometric Database Keeps Leaking People’s Data


Rohith Jyothish at FastCompany: “India’s national scheme holds the personal data of more than 1.13 billion citizens and residents of India within a unique ID system branded as Aadhaar, which means “foundation” in Hindi. But as more and more evidence reveals that the government is not keeping this information private, the actual foundation of the system appears shaky at best.

On January 4, 2018, The Tribune of India, a news outlet based out of Chandigarh, created a firestorm when it reported that people were selling access to Aadhaar data on WhatsApp, for alarmingly low prices….

The Aadhaar unique identification number ties together several pieces of a person’s demographic and biometric information, including their photograph, fingerprints, home address, and other personal information. This information is all stored in a centralized database, which is then made accessible to a long list of government agencies that use it to administer public services.

Although centralizing this information could increase efficiency, it also creates a highly vulnerable situation in which a single breach could expose the data of millions of India’s residents.

The Annual Report 2015-16 of the Ministry of Electronics and Information Technology speaks of a facility called DBT Seeding Data Viewer (DSDV) that “permits the departments/agencies to view the demographic details of Aadhaar holder.”

According to @databaazi, DSDV logins allowed third parties to access Aadhaar data (without the UID holder’s consent) from a white-listed IP address. This meant that anyone with the right IP address could access the system.
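The reported weakness is easier to see in code. The sketch below is a hypothetical reconstruction of the pattern described above, not the actual DSDV software: a lookup service whose only gate is whether the request arrives from a white-listed network address. Everything here (the Flask endpoint, the addresses, the records) is invented for illustration.

```python
# Illustrative sketch of an IP-allowlist-only lookup service (hypothetical,
# not the real DSDV code): the *only* access control is the caller's source IP.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Addresses of "white-listed" partner departments (invented for the example).
ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}

# Stand-in for the centralized demographic database, keyed by a fake ID number.
RECORDS = {
    "1234-5678-9012": {"name": "A. Example", "address": "42 Example Road", "phone": "555-0100"},
}

@app.route("/lookup/<uid>")
def lookup(uid):
    # The sole check: does the request come from an approved network address?
    # There is no user login, no consent from the ID holder, no per-record audit.
    if request.remote_addr not in ALLOWED_IPS:
        abort(403)
    record = RECORDS.get(uid)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```

Because the control is tied to a network location rather than to an accountable person, there is nothing to revoke per user, no meaningful per-request audit trail, and no point at which the Aadhaar holder’s consent enters the picture; that is the gap the reporting highlights.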

This design flaw puts personal details of millions of Aadhaar holders at risk of broad exposure, in clear violation of the Aadhaar Act…(More)”.