AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment


Alessandro Mantelero in Computer Law & Security Review: “The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values.

Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights (the Human Rights, Ethical and Social Impact Assessment, or HRESIA). This self-assessment model intends to overcome the limitations of existing assessment models, which are either too closely focused on data processing or so extensive and granular that they become too complicated for evaluating the consequences of a given use of data. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee. As a blueprint, this contribution focuses mainly on the nature of the proposed model, its architecture and its challenges; a more detailed description of the model and the content of the questionnaire will be discussed in a future publication drawing on the ongoing research….(More)”.

Long Term Info-structure


Long Now Foundation Seminar by Juan Benet: “We live in a spectacular time,”…“We’re a century into our computing phase transition. The latest stages have created astonishing powers for individuals, groups, and our species as a whole. We are also faced with accumulating dangers — the capabilities to end the whole human experiment are growing and are ever more accessible. In light of the Promethean fire that is computing, we must prevent bad outcomes and lock in good ones to build robust foundations for our knowledge, and a safe future. There is much we can do in the short term to secure the long term.”

“I come from the front lines of computing platform design to share a number of new super-powers at our disposal, some old challenges that are now soluble, and some new open problems. In this next decade, we’ll need to leverage peer-to-peer networks, crypto-economics, blockchains, Open Source, Open Services, decentralization, incentive-structure engineering, and so much more to ensure short-term safety and the long-term flourishing of humanity.”

Juan Benet is the inventor of the InterPlanetary File System (IPFS)—a new protocol which uses content-addressing to make the web faster, safer, and more open—and the creator of Filecoin, a cryptocurrency-incentivized storage market….(More + Video)”
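The core idea behind content addressing can be sketched in a few lines of Python. This is an illustrative simplification, not IPFS’s actual implementation (which uses multihash-encoded CIDs over a Merkle DAG): a piece of content is addressed by a cryptographic hash of its own bytes, so any peer that serves the bytes can be verified against the address that was requested.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself (illustrative only;
    IPFS actually uses multihash-encoded CIDs over a Merkle DAG)."""
    return hashlib.sha256(data).hexdigest()

# A toy content-addressed store: keys are hashes of the stored values.
store: dict = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    # Any peer can check that the bytes match the address it asked for,
    # so it does not have to trust whoever served them.
    assert content_address(data) == addr, "content does not match address"
    return data

addr = put(b"hello, permanent web")
print(addr, get(addr))
```

Because the address is derived from the content rather than from a location, identical content always resolves to the same address, which is what lets a network like IPFS deduplicate data and fetch it from any peer that holds it.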

The Blockchain and the New Architecture of Trust


Book by Kevin Werbach: “The blockchain entered the world on January 3, 2009, introducing an innovative new trust architecture: an environment in which users trust a system—for example, a shared ledger of information—without necessarily trusting any of its components. The cryptocurrency Bitcoin is the most famous implementation of the blockchain, but hundreds of other companies have been founded and billions of dollars invested in similar applications since Bitcoin’s launch. Some see the blockchain as offering more opportunities for criminal behavior than benefits to society. In this book, Kevin Werbach shows how a technology resting on foundations of mutual mistrust can become trustworthy.

The blockchain, built on open software and decentralized foundations that allow anyone to participate, seems like a threat to any form of regulation. In fact, Werbach argues, law and the blockchain need each other. Blockchain systems that ignore law and governance are likely to fail, or to become outlaw technologies irrelevant to the mainstream economy. That, Werbach cautions, would be a tragic waste of potential. If, however, we recognize the blockchain as a kind of legal technology, which shapes behavior in new ways, it can be harnessed to create tremendous business and social value….(More)”.
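The trust architecture Werbach describes — trusting a shared ledger without trusting any single participant — rests on hash-chaining. Below is a minimal sketch in Python (heavily simplified, not any production design: real blockchains add digital signatures, peer-to-peer replication, and a consensus mechanism such as proof-of-work or proof-of-stake). Each entry commits to the hash of the previous one, so tampering with any record is detectable by anyone holding a copy of the chain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, data: str) -> None:
    # Each new block commits to the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    # The ledger is checked as a whole: altering any earlier block
    # breaks every subsequent "prev" link.
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append(ledger, "Alice pays Bob 5")
append(ledger, "Bob pays Carol 2")
print(verify(ledger))            # True
ledger[0]["data"] = "Alice pays Bob 500"
print(verify(ledger))            # False: tampering is detectable
```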

Remembering and Forgetting in the Digital Age


Book by Florent Thouvenin et al.: “… examines the fundamental question of how legislators and other rule-makers should handle remembering and forgetting information (especially personally identifiable information) in the digital age. It encompasses such topics as privacy, data protection, individual and collective memory, and the right to be forgotten when considering data storage, processing and deletion. The authors argue in support of maintaining the new digital default, that (personally identifiable) information should be remembered rather than forgotten.

The book offers guidelines for legislators as well as private and public organizations on how to make decisions on remembering and forgetting personally identifiable information in the digital age. It draws on three main perspectives: law, based on a comprehensive analysis of Swiss law that serves as an example; technology, specifically search engines, internet archives, social media and the mobile internet; and an interdisciplinary perspective with contributions from various disciplines such as philosophy, anthropology, sociology, psychology, and economics, amongst others. Thanks to this multifaceted approach, readers will benefit from a holistic view of the informational phenomenon of “remembering and forgetting”.

This book will appeal to lawyers, philosophers, sociologists, historians, economists, anthropologists, and psychologists among many others. Such wide appeal is due to its rich and interdisciplinary approach to the challenges for individuals and society at large with regard to remembering and forgetting in the digital age…(More)”

The Smart Transition: An Opportunity for a Sensor-Based Public-Health Risk Governance?


Anna Berti Suman in the International Review of Law, Computers & Technology: “This contribution analyses the promises and challenges of using bottom-up produced sensor data to manage public-health risks in the (smart) city. The article criticizes traditional ways of governing public-health risks with the aim of examining the contribution that a sensor-based risk governance may bring to the fore. The failures of the top-down model serve to illustrate that the smart transformation of the city’s living environments may stimulate a better public-health risk governance and a new city’s utopia.

The central question this contribution addresses is: How could the potential of a city’s network of sensors and of data infrastructures contribute to smartly realizing healthier cities, free from environmental risk? The central aim of the article is to reflect on the opportunity to combine top-down and bottom-up sensing approaches. In view of this aim, the complementary potential of top-down and bottom-up sensing is inspected. Citizen sensing practices are discussed as a manifestation of the new public sphere and a taxonomy for a sensor-based risk governance is developed. The challenges hidden behind this arguably inclusive transition are dismantled….(More)”.

When Westlaw Fuels ICE Surveillance: Ethics in the Big Data Policing Era


Sarah Lamdan at New York University Review of Law & Social Change: “Legal research companies are selling surveillance data and services to U.S. Immigration and Customs Enforcement (ICE) and other law enforcement agencies.

This article discusses ethical issues that arise when lawyers buy and use legal research services sold by the vendors that build ICE’s surveillance systems. As the legal profession collectively pays millions of dollars for computer-assisted legal research services, lawyers should consider whether doing so in the era of big data policing compromises their confidentiality requirements and their obligation to supervise third-party vendors….(More)”

What is mechanistic evidence, and why do we need it for evidence-based policy?


Paper by Caterina Marchionni and Samuli Reijula: “It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so. Although agreeing with the spirit of this proposal, we argue that the underlying conception of mechanistic evidence as evidence that is different in kind from correlational, difference-making or statistical evidence, does not correctly capture the role that information about mechanisms should play in evidence-based policy. We offer an alternative account of mechanistic evidence as information concerning the causal pathway connecting the policy intervention to its outcome. Not only can this be analyzed as evidence of difference-making, it is also to be found at any level and is obtainable by a broad range of methods, both experimental and observational. Using behavioral policy as an illustration, we draw the implications of this revised understanding of mechanistic evidence for debates concerning policy extrapolation, evidence hierarchies, and evidence integration…(More)”.
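To make the revised notion concrete, here is a hedged toy model (an invented illustration, not an example from the paper): a behavioral policy — say, sending reminder letters — whose effect on uptake runs through a mediator. Statistical evidence concerns the overall policy–outcome difference; mechanistic evidence, on the authors’ account, is information about the intermediate links of that pathway, each of which can itself be read as a difference-making claim.

```python
import random

def simulate(policy_on: bool, n: int = 10_000) -> float:
    """Toy structural model: policy -> reminder letter (mediator) -> uptake (outcome).
    All numbers are made up for illustration."""
    uptake = 0
    for _ in range(n):
        reminder = policy_on and random.random() < 0.8   # policy raises P(mediator)
        p_uptake = 0.5 if reminder else 0.2              # mediator raises P(outcome)
        uptake += random.random() < p_uptake
    return uptake / n

# Statistical evidence: the policy makes a difference to the outcome overall.
effect = simulate(True) - simulate(False)
print(f"estimated effect of the policy on uptake: {effect:.2f}")

# Mechanistic evidence concerns the pathway itself: the same difference-making
# logic applied link by link (policy -> reminder, reminder -> uptake).
```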

Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable


Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.

Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.

Data-Driven Law: Data Analytics and the New Legal Services


Book by Edward J. Walters: “For increasingly data-savvy clients, lawyers can no longer give “it depends” answers rooted in anecdata. Clients insist that their lawyers justify their reasoning, and with more than a limited set of war stories. The considered judgment of an experienced lawyer is unquestionably valuable. However, on balance, clients would rather have the considered judgment of an experienced lawyer informed by the most relevant information required to answer their questions.

Data-Driven Law: Data Analytics and the New Legal Services helps legal professionals meet the challenges posed by a data-driven approach to delivering legal services. Its chapters are written by leading experts who cover such topics as:

  • Mining legal data
  • Computational law
  • Uncovering bias through the use of Big Data
  • Quantifying the quality of legal services
  • Data mining and decision-making
  • Contract analytics and contract standards

In addition to providing clients with data-based insight, legal firms can track a matter with data from beginning to end, from the marketing spend through to the type of matter, hours spent, billed, and collected, including metrics on profitability and success. Firms can organize and collect documents after a matter and even automate them for reuse. Data on marketing related to a matter can be an amazing source of insight about which practice areas are most profitable….(More)”.
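As a rough sketch of the matter-level tracking described above — the field names and metric definitions are assumptions made for illustration, not taken from the book — a firm might compute utilization, realization, and profitability per matter like this:

```python
from dataclasses import dataclass

@dataclass
class Matter:
    matter_type: str
    marketing_spend: float
    hours_worked: float
    hours_billed: float
    amount_billed: float
    amount_collected: float
    cost_per_hour: float

def metrics(m: Matter) -> dict:
    """End-to-end metrics for one matter (illustrative definitions only)."""
    cost = m.hours_worked * m.cost_per_hour + m.marketing_spend
    return {
        "utilization": m.hours_billed / m.hours_worked,       # billed vs. worked hours
        "realization": m.amount_collected / m.amount_billed,  # collected vs. billed fees
        "profit": m.amount_collected - cost,
        "margin": (m.amount_collected - cost) / m.amount_collected,
    }

example = Matter("contract dispute", 2_000, 120, 100, 45_000, 40_500, 180)
print(metrics(example))
```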

How Taiwan’s online democracy may show future of humans and machines


Shuyang Lin at the Sydney Morning Herald: “Taiwanese citizens have spent the past 30 years prototyping future democracy since the lifting of martial law in 1987. Public participation in Taiwan has been developed in several formats, from face-to-face deliberation to deliberation over the internet. This trajectory coincides with the advancement of technology, and as new tools arrived, democracy evolved.

The launch of vTaiwan (v for virtual, vote, voice and verb), an experiment that prototypes an open consultation process for civil society, showed that by using technology creatively, humanity can facilitate deep and fair conversations, form collective consensus, and deliver solutions we can all live with.

It is a prototype that helps us envision what future democracy could look like….

Decision-making is not an easy task, especially when it involves a larger group of people. Group decision-making can follow several protocols: mandate, where a designated person decides and takes questions; advise, where the decider listens to advice before deciding; consent, where a proposal passes if no one objects; and consensus, where a proposal passes only if everyone agrees. So there is a pressing need for us to be able to collaborate in large-scale decision-making processes to update outdated standards and regulations.
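A minimal sketch of how those four protocols differ in what they require from the group — the encoding below is an illustrative assumption, not vTaiwan’s actual mechanism:

```python
from enum import Enum

class Vote(Enum):
    AGREE = "agree"
    ABSTAIN = "abstain"
    OBJECT = "object"

def decide(protocol: str, votes: list, mandate_holder_agrees: bool = False) -> bool:
    """Toy decision rules for the four protocols named above."""
    if protocol == "mandate":    # one designated person decides (after taking questions)
        return mandate_holder_agrees
    if protocol == "advise":     # the decider listens first, but advice is non-binding
        return mandate_holder_agrees
    if protocol == "consent":    # passes unless someone objects
        return not any(v is Vote.OBJECT for v in votes)
    if protocol == "consensus":  # passes only if everyone actively agrees
        return all(v is Vote.AGREE for v in votes)
    raise ValueError(f"unknown protocol: {protocol}")

votes = [Vote.AGREE, Vote.AGREE, Vote.ABSTAIN]
print(decide("consent", votes))    # True: no objections
print(decide("consensus", votes))  # False: not everyone actively agrees
```

Consent and consensus differ only in how abstentions and silence are treated — a difference that matters more and more as the group grows.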

The future of human knowledge is on the web. Technology can help us learn, communicate, and make better decisions faster and at larger scale. The internet could be the facilitator and AI could be the catalyst. It is extremely important to be aware that decision-making is not a one-off interaction. The most important direction of decision-making technology development is to allow humans to be engaged in the process at any time, and to invite them to request and submit changes.

Humans have started working with computers, and we will continue to work with them. They will help us in the decision-making process and some will even make decisions for us; the actors in collaboration don’t necessarily need to be just humans. While it is up to us to decide what and when to opt in or opt out, we should work together with computers in a transparent, collaborative and inclusive space.

Where shall we go as a society? What do we want from technology? As Audrey Tang, Taiwan’s Digital Minister without Portfolio, puts it: “Deliberation — listening to each other deeply, thinking together and working out something that we can all live with — is magical.”…(More)”.