Upholding Democracy Amid the Challenges of New Technology


Paper by Eyal Benvenisti in the European Journal of International Law: “The law on global governance that emerged after the Second World War was grounded in irrefutable trust in international organizations and an assumption that their subjection to legal discipline and judicial review would be unnecessary and, in fact, detrimental to their success. The law that evolved systematically insulated international organizations from internal and external scrutiny and absolved them of any inherent legal obligations – and, to a degree, continues to do so.

Indeed, it was only well after the end of the Cold War that mistrust in global governance began to trickle through into the legal discourse and the realization gradually took hold that the operation of international organizations needed to be subject to the disciplining power of the law. Since the mid-1990s, scholars have sought to identify the conditions under which trust in global bodies can be regained, mainly by borrowing and adapting domestic public law precepts that emphasize accountability through communications with those affected.

Today, although a ‘culture of accountability’ may have taken root, its legal tools are still shaping up and are often contested. More importantly, these communicative tools are ill-equipped to address the new modalities of governance that are based on decision-making by machines using raw data (rather than two-way exchange with stakeholders) as their input.

The new information and communication technologies challenge the foundational premise of the accountability school – that ‘the more communication, the better’ – as voters-turned-users obtain their information from increasingly fragmented and privatized marketplaces of ideas that are manipulated for economic and political gain.

In this article, I describe and analyse how the law has evolved to acknowledge the need for accountability, how it has designed norms for this purpose and continues in this endeavour – yet how the challenges it faces today are leaving its most fundamental assumptions open to question. I argue that, given the growing influence of public and private global governance bodies on our daily lives and the shape of our political communities, the task of the law of global governance is no longer limited to ensuring the accountability of global bodies, but is also to protect human dignity and the very viability of the democratic state….(More)”.

Forcing People to Choose is Paternalistic


Cass R. Sunstein in the Special Issue on Evaluating Nudging of the Missouri Law Journal: “It can be paternalistic to force people to choose. Often people do not wish to choose, but both private and public institutions ask or force them to do so, thus overriding their wishes. As a result, people’s autonomy may be badly compromised and their welfare may be greatly reduced. These points have implications for a range of issues in law and policy, suggesting that those who favor active choosing, and insist on it, may well be overriding people’s preferences and values, and thus running afoul of John Stuart Mill’s Harm Principle (for better or for worse). People have limited mental bandwidth, and forcing choices can impose a hedonic or cognitive tax. Sometimes that tax is high….(More)”.

Congress Is Broken. CrowdLaw Could Help Fix It.


Beth Noveck in Forbes: “The way Congress makes law is simply no longer viable. In David Schoenbrod’s recent book DC Confidential, he outlines “five tricks” politicians use to take credit in front of television cameras in order to further political party agendas while passing the blame and the buck to future generations for bad legislation. Although Congress makes the laws that govern all Americans, people feel disenfranchised. One study concludes that “the preferences of the average American appear to have only a minuscule, near-zero, statistically non-significant impact upon public policy.” But technology offers the promise of improving both the quality and accountability of lawmaking by opening up the process to more, and more diverse, expertise and input from the public at every stage of the legislative process. We call such open and participatory lawmaking “CrowdLaw.”

Moving Beyond the Ballot Box

Around the world, there are already over two dozen examples of local legislatures and national parliaments turning to the internet to improve the legitimacy and effectiveness of the laws they make; we need to do the same here if we are to begin to fix congressional dysfunction.

For example, Finland’s Citizen’s Initiative Act at the national level, like Madrid’s Decide initiative at the local level, allows any member of the public with the requisite signatures to propose new legislation, meaning that not only interest groups and politicians get to set the agenda for lawmaking.

In France, the Parlement & Citoyens platform allows the public to respond to a problem posed by a representative by contributing information about both causes and solutions. Relevant citizen input is then synthesized, debated, and incorporated into the resulting draft legislation. This brings greater empiricism into the legislative process through public contribution of expertise….(More)”.

They Are Watching You—and Everything Else on the Planet


Cover article by Robert Draper for a Special Issue of National Geographic: “Technology and our increasing demand for security have put us all under surveillance. Is privacy becoming just a memory?…

In 1949, amid the specter of European authoritarianism, the British novelist George Orwell published his dystopian masterpiece 1984, with its grim admonition: “Big Brother is watching you.” As unsettling as this notion may have been, “watching” was a quaintly circumscribed undertaking back then. That very year, 1949, an American company released the first commercially available CCTV system. Two years later, in 1951, Kodak introduced its Brownie portable movie camera to an awestruck public.

Today more than 2.5 trillion images are shared or stored on the Internet annually—to say nothing of the billions more photographs and videos people keep to themselves. By 2020, one telecommunications company estimates, 6.1 billion people will have phones with picture-taking capabilities. Meanwhile, in a single year an estimated 106 million new surveillance cameras are sold. More than three million ATMs around the planet stare back at their customers. Tens of thousands of cameras known as automatic number plate recognition devices, or ANPRs, hover over roadways—to catch speeding motorists or parking violators but also, in the case of the United Kingdom, to track the comings and goings of suspected criminals. The untallied but growing number of people wearing body cameras now includes not just police but also hospital workers and others who aren’t law enforcement officers. Proliferating as well are personal monitoring devices—dash cams, cyclist helmet cameras to record collisions, doorbells equipped with lenses to catch package thieves—that are fast becoming a part of many a city dweller’s everyday arsenal. Even less quantifiable, but far more vexing, are the billions of images of unsuspecting citizens captured by facial-recognition technology and stored in law enforcement and private-sector databases over which our control is practically nonexistent.

Those are merely the “watching” devices that we’re capable of seeing. Presently the skies are cluttered with drones—2.5 million of which were purchased in 2016 by American hobbyists and businesses. That figure doesn’t include the fleet of unmanned aerial vehicles used by the U.S. government not only to bomb terrorists in Yemen but also to help stop illegal immigrants entering from Mexico, monitor hurricane flooding in Texas, and catch cattle thieves in North Dakota. Nor does it include the many thousands of airborne spying devices employed by other countries—among them Russia, China, Iran, and North Korea.

We’re being watched from the heavens as well. More than 1,700 satellites monitor our planet. From a distance of about 300 miles, some of them can discern a herd of buffalo or the stages of a forest fire. From outer space, a camera clicks and a detailed image of the block where we work can be acquired by a total stranger….

This is—to lift the title from another British futurist, Aldous Huxley—our brave new world. That we can see it coming is cold comfort since, as Carnegie Mellon University professor of information technology Alessandro Acquisti says, “in the cat-and-mouse game of privacy protection, the data subject is always the weaker side of the game.” Simply submitting to the game is a dispiriting proposition. But to actively seek to protect one’s privacy can be even more demoralizing. University of Texas American studies professor Randolph Lewis writes in his new book, Under Surveillance: Being Watched in Modern America, “Surveillance is often exhausting to those who really feel its undertow: it overwhelms with its constant badgering, its omnipresent mysteries, its endless tabulations of movements, purchases, potentialities.”

The desire for privacy, Acquisti says, “is a universal trait among humans, across cultures and across time. You find evidence of it in ancient Rome, ancient Greece, in the Bible, in the Quran. What’s worrisome is that if all of us at an individual level suffer from the loss of privacy, society as a whole may realize its value only after we’ve lost it for good.”…(More)”.

‘Politics done like science’: Critical perspectives on psychological governance and the experimental state


Paper: “There has been a growing academic recognition of the increasing significance of psychologically- and behaviourally-informed modes of governance in recent years in a variety of different states. We contend that this academic research has neglected one important theme, namely the growing use of experiments as a way of developing and testing novel policies. Drawing on extensive qualitative and documentary research, this paper develops critical perspectives on the impacts of the psychological sciences on public policy, and considers more broadly the changing experimental form of modern states. The tendency for emerging forms of experimental governance to be predicated on very narrow, socially disempowering, visions of experimental knowledge production is critiqued. We delineate how psychological governance and emerging forms of experimental subjectivity have the potential to enable more empowering and progressive state forms and subjectivities to emerge through more open and collective forms of experimentation…(More)”.

The World’s Biggest Biometric Database Keeps Leaking People’s Data


Rohith Jyothish at FastCompany: “India’s national scheme holds the personal data of more than 1.13 billion citizens and residents of India within a unique ID system branded as Aadhaar, which means “foundation” in Hindi. But as more and more evidence reveals that the government is not keeping this information private, the actual foundation of the system appears shaky at best.

On January 4, 2018, The Tribune of India, a news outlet based out of Chandigarh, created a firestorm when it reported that people were selling access to Aadhaar data on WhatsApp, for alarmingly low prices….

The Aadhaar unique identification number ties together several pieces of a person’s demographic and biometric information, including their photograph, fingerprints, home address, and other personal information. This information is all stored in a centralized database, which is then made accessible to a long list of government agencies that can use that information in administering public services.

Although centralizing this information could increase efficiency, it also creates a highly vulnerable situation in which one simple breach could result in millions of India’s residents’ data becoming exposed.

The Annual Report 2015-16 of the Ministry of Electronics and Information Technology speaks of a facility called DBT Seeding Data Viewer (DSDV) that “permits the departments/agencies to view the demographic details of Aadhaar holder.”

According to @databaazi, DSDV logins allowed third parties to access Aadhaar data (without the UID holder’s consent) from a white-listed IP address. This meant that anyone with the right IP address could access the system.

This design flaw puts personal details of millions of Aadhaar holders at risk of broad exposure, in clear violation of the Aadhaar Act.…(More)”.
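
The design flaw described above can be illustrated with a hypothetical sketch. The actual DSDV implementation is not public, so every IP address, record, function, and credential check below is an assumption for illustration only: the point is that when access is gated on nothing more than a white-listed source IP, any request that appears to come from that address is served, with no check of who is asking or whether the data subject has consented.

```python
# Hypothetical sketch only: not the actual DSDV system. All IPs, UIDs, records,
# and credential checks are illustrative assumptions.

APPROVED_IPS = {"203.0.113.10"}               # an assumed white-listed address

RECORDS = {                                   # stand-in demographic records
    "9999-0001": {"name": "A. Kumar", "address": "…"},
}

def ip_only_lookup(source_ip: str, uid: str):
    """Grants access purely on the basis of the caller's source IP."""
    if source_ip in APPROVED_IPS:
        return RECORDS.get(uid)               # no authentication, no consent check
    return None

def layered_lookup(source_ip: str, uid: str, api_key: str, consent_token: str):
    """Sketch of layered checks: network origin, caller identity, subject consent."""
    if source_ip not in APPROVED_IPS:
        return None
    if api_key != "per-agency-secret":        # placeholder for a real credential check
        return None
    if consent_token != f"consent-{uid}":     # placeholder for a subject-issued token
        return None
    return RECORDS.get(uid)

# Anyone who can originate (or relay) traffic from the approved address obtains
# the record under the IP-only scheme:
print(ip_only_lookup("203.0.113.10", "9999-0001"))
```

Layering caller authentication and consent checks on top of the network restriction, as in the second function, is the kind of control the critique implies was missing.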

Artificial intelligence and smart cities


Essay by Michael Batty at Urban Analytics and City Science: “…The notion of the smart city of course conjures up these images of such an automated future. Much of our thinking about this future, certainly in the more popular press, is about everything ranging from the latest app on our smartphones to driverless cars, while somewhat deeper concerns are about efficiency gains due to the automation of services ranging from transit to the delivery of energy. There is no doubt that routine and repetitive processes – algorithms if you like – are improving at an exponential rate in terms of the data they can process and the speed of execution, faithfully following Moore’s Law.

Pattern recognition techniques that lie at the basis of machine learning are highly routinized iterative schemes where the pattern in question – be it a signature, a face, the environment around a driverless car and so on – is computed as an elaborate averaging procedure which takes a series of elements of the pattern and weights them in such a way that the pattern can be reproduced by the combinations of elements of the original pattern and the weights. This is in essence the way neural networks work. When one says that they ‘learn’ and that the current focus is on ‘deep learning’, all that is meant is that with complex patterns and environments, many layers of neurons (elements of the pattern) are defined and the iterative procedures are run until there is a convergence with the pattern that is to be explained. Such processes are iterative, additive and not much more than sophisticated averaging, but they use machines that can operate virtually at the speed of light and thus process vast volumes of big data. When these kinds of algorithms can be run in real time, and many already can be, there is the prospect of many kinds of routine behaviour being displaced. It is in this sense that AI might herald an era of truly disruptive processes. This, according to Brynjolfsson and McAfee, is beginning to happen as we reach the second half of the chessboard.
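
A toy example may make this "sophisticated averaging" concrete. The sketch below is purely illustrative and not drawn from Batty's essay: it shows the single-layer case, in which one set of weights is adjusted iteratively (here by gradient descent on randomly generated data, all of it assumed for the example) until the weighted combination of the pattern's elements reproduces the target pattern. Deep learning stacks many such layers and runs the same kind of loop.

```python
# Illustrative sketch only (not from Batty's essay): a single layer of weights is
# adjusted iteratively until its output converges on the pattern it must reproduce.
# The data, learning rate, and iteration count are assumptions for the toy example.
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(100, 8))                        # 100 examples of an 8-element "pattern"
true_weights = rng.normal(size=8)
target = 1.0 / (1.0 + np.exp(-X @ true_weights))     # the pattern to be reproduced

weights = np.zeros(8)                                # start with no knowledge of the pattern
learning_rate = 0.5

for step in range(2000):                             # the iterative procedure
    output = 1.0 / (1.0 + np.exp(-X @ weights))
    error = output - target
    gradient = X.T @ error / len(X)                  # a weighted average of the errors
    weights -= learning_rate * gradient              # nudge the weights toward the pattern

print("max deviation from target:",
      float(np.abs(1.0 / (1.0 + np.exp(-X @ weights)) - target).max()))
```

Each pass is just an additive update built from averages over the pattern's elements, which is why the procedure can be described as sophisticated averaging run at machine speed.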

The real issue in terms of AI involves problems that are peculiarly human. Much of our work is highly routinized and many of our daily actions and decisions are based on relatively straightforward patterns of stimulus and response. The big questions involve the extent to which those of our behaviours which are not straightforward can be automated. In fact, although machines are able to beat human players in many board games and there is now the prospect of machines beating the very machines that were originally designed to play against humans, the real power of AI may well come from collaboratives of man and machine, working together, rather than from ever more powerful machines working by themselves. In the last 10 years, some of my editorials have tracked what is happening in the real-time city – the smart city as it is popularly called – which has become key to many new initiatives in cities. In fact, cities – particularly big cities, world cities – have become the flavour of the month, but the focus has been not on their long-term evolution but on how we use them on a minute-by-minute to week-by-week basis.

Many of the patterns that define the smart city on these short-term cycles can be predicted using AI, largely because they are highly routinized, but even for highly routine patterns there are limits on the extent to which we can explain them and reproduce them. Much advancement in AI within the smart city will come from automation of the routine, such as the use of energy, the delivery of location-based services, transit managed using information fed to operators and travellers in real time, and so on. I think we will see some quite impressive advances in these areas in the next decade and beyond. But the key issue in urban planning is not just this short term but the long term, and it is here that the prospects for AI are more problematic….(More)”.

Can Big Data Revolutionize International Human Rights Law?


Galit A. Sarfaty in the Journal of International Law: “International human rights efforts have been overly reliant on reactive tools and focused on treaty compliance, while often underemphasizing the prevention of human rights violations. I argue that data analytics can play an important role in refocusing the international human rights regime on its original goal of preventing human rights abuses, but it comes at a cost.

There are risks in advancing a data-driven approach to human rights, including the privileging of certain rights subject to quantitative measurement and the precipitation of further human rights abuses in the process of preventing other violations. Moreover, the increasing use of big data can ultimately privatize the international human rights regime by transforming the corporation into a primary gatekeeper of rights protection. Such unintended consequences need to be addressed in order to maximize the benefits and minimize the risks of using big data in this field….(More)”.

Data-Intensive Approaches To Creating Innovation For Sustainable Smart Cities


Science Trends: “Located at the complex intersection of economic development and environmental change, cities play a central role in our efforts to move towards sustainability. Reducing air and water pollution, improving energy efficiency while securing energy supply, and minimizing vulnerabilities to disruptions and disturbances are interconnected and pose a formidable challenge, with their dynamic interactions changing in highly complex and unpredictable manners….

The Beijing City Lab demonstrates the usefulness of open urban data in mapping urbanization with a fine spatiotemporal scale and reflecting social and environmental dimensions of urbanization through visualization at multiple scales.

The basic principle of open data will generate significant opportunities for promoting inter-disciplinary and inter-organizational research, producing new data sets through the integration of different sources, avoiding duplication of research, facilitating the verification of previous results, and encouraging citizen scientists and crowdsourcing approaches. Open data also is expected to help governments promote transparency, citizen participation, and access to information in policy-making processes.

Despite a significant potential, however, there still remain numerous challenges in facilitating innovation for urban sustainability through open data. The scope and amount of data collected and shared are still limited, and the quality control, error monitoring, and cleaning of open data are also indispensable in securing the reliability of the analysis. Also, the organizational and legal frameworks of data-sharing platforms are often not well defined or established, and it is critical to address the interoperability between various data standards, the balance between open and proprietary data, and normative and legal issues such as data ownership, personal privacy, confidentiality, law enforcement, and the maintenance of public safety and national security….

These findings are described in the article entitled Facilitating data-intensive approaches to innovation for sustainability: opportunities and challenges in building smart cities, published in the journal Sustainability Science. This work was led by Masaru Yarime from the City University of Hong Kong….(More)”.

Government data: How open is too open?


Sharon Fisher at HPE: “The notion of “open government” appeals to both citizens and IT professionals seeking access to freely available government data. But is there such a thing as data access being too open? Governments may want to be transparent, yet they need to avoid releasing personally identifiable information.

There’s no question that open government data offers many benefits. It gives citizens access to the data their taxes paid for, enables government oversight, and powers the applications developed by government, vendors, and citizens that improve people’s lives.

However, data breaches and concerns about the amount of data that government is collecting make some people wonder: When is it too much?

“As we think through the big questions about what kind of data a state should collect, how it should use it, and how to disclose it, these nuances become not some theoretical issue but a matter of life and death to some people,” says Alexander Howard, deputy director of the Sunlight Foundation, a Washington nonprofit that advocates for open government. “There are people in government databases where the disclosure of their [physical] location is the difference between a life-changing day and Wednesday.”

Open data supporters point out that much of this data has been considered a public record all along and tout the value of its use in analytics. But having personal data aggregated in a single place that is accessible online—as opposed to, say, having to go to an office and physically look up each record—makes some people uneasy.

Privacy breaches, wholesale

“We’ve seen a real change in how people perceive privacy,” says Michael Morisy, executive director at MuckRock, a Cambridge, Massachusetts, nonprofit that helps media and citizens file public records requests. “It’s been driven by a long-standing concept in transparency: practical obscurity.” Even if something was technically a public record, effort needed to be expended to get one’s hands on it. That amount of work might be worth it for, say, someone running for office, but on the whole, private citizens didn’t have to worry. Things are different now, says Morisy. “With Google, and so much data being available at the click of a mouse or the tap of a phone, what was once practically obscure is now instantly available.”

People are sometimes also surprised to find out that public records can contain their personally identifiable information (PII), such as addresses, phone numbers, and even Social Security numbers. That may be on purpose or because someone failed to redact the data properly.

That’s had consequences. Over the years, there have been a number of incidents in which PII from public records, including addresses, was used to harass and sometimes even kill people. For example, in 1989, Rebecca Schaeffer was murdered by a stalker who learned her address from the Department of Motor Vehicles. Other examples of harassment via driver’s license records include thieves who tracked down the addresses of owners of expensive cars and activists who sent anti-abortion literature to women who had visited health clinics that performed abortions.

In response, in 1994, Congress enacted the Driver’s Privacy Protection Act to restrict the sale of such data. More recently, the state of Idaho passed a law protecting the identity of hunters who shot wolves, because the hunters were being harassed by wolf supporters. Similarly, the state of New York allowed concealed pistol permit holders to make their name and address private after a newspaper published an online interactive map showing the names and addresses of all handgun permit holders in Westchester and Rockland counties….(More)”.