Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence


Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car were being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….
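In machine-learning terms, the task is a supervised prediction problem: dilemma features in, majority preference out. Below is a minimal, hypothetical sketch of that general idea; the features, toy data, and off-the-shelf classifier are illustrative only, not the model the researchers actually built.

```python
# Hypothetical sketch: predicting crowd moral preferences from labeled
# dilemmas. Toy data and features for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Each row encodes a dilemma: [people_spared, people_hit,
#                              children_among_spared, children_among_hit]
# Label: 1 if most respondents chose to swerve, 0 if they chose to stay.
X = [
    [3, 2, 0, 2],
    [1, 1, 1, 0],
    [2, 3, 2, 0],
    [1, 4, 0, 1],
    [4, 1, 1, 0],
    [2, 2, 0, 2],
]
y = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Query the trained model about a previously untested scenario.
unseen = [[2, 2, 1, 1]]
print(model.predict(unseen))        # the majority choice the model expects
print(model.predict_proba(unseen))  # its confidence in each choice
```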

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the doctrine of double effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google parent Alphabet, now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.

Where’s the evidence? Obstacles to impact-gathering and how researchers might be better supported in future


Clare Wilkinson at the LSE Impact Blog: “…In a recent case study I explore how researchers from a broad range of research areas think about evidencing impact, what obstacles to impact-gathering might stand in their way, and how they might be further supported in future.

Unsurprisingly the research found myriad potential barriers to gathering research impact, such as uncertainty over how impact is defined, captured, judged, and weighted, or the challenges for researchers in tracing impact back to a specific time-period or individual piece of research. Many of these constraints have been recognised in previous research in this area – or were anticipated when impact was first discussed – but talking to researchers in 2015 about their impact experiences of the REF 2014 data-gathering period revealed a number of lingering concerns.

A further hazard identified by the case study is the inequality in knowledge around research impact, and the way this knowledge often exists in silos. Those researchers most likely to have obvious impact-generating activities were developing quite detailed and extensive experience of impact-capturing, while other researchers (including those at early-career stages) were less clear on the impact agenda’s relevance to them, or even whether their research had featured in an impact case study. Encouragingly, some researchers did seem to grow in confidence once they had authored an impact case study, but sharing skills and confidence with the “next generation” of researchers likely to have impact remains a live issue for those supporting impact evidence-gathering.

So, how can researchers, across the board, be supported to effectively evidence their impact? Most popular amongst the options given to the 70 or so researchers that participated in this case study were: 1) approaches that offered them more time or funding to gather evidence; 2) opportunities to see best-practice examples; 3) opportunities to learn more about what “impact” means; and 4) the sharing of information on the types of evidence that could be collected….(More)”.

Decoding the Social World: Data Science and the Unintended Consequences of Communication


Book by Sandra González-Bailón: “Social life is full of paradoxes. Our intentional actions often trigger outcomes that we did not intend or even envision. How do we explain those unintended effects and what can we do to regulate them? In Decoding the Social World, Sandra González-Bailón explains how data science and digital traces help us solve the puzzle of unintended consequences—offering the solution to a social paradox that has intrigued thinkers for centuries. Communication has always been the force that makes a collection of people more than the sum of individuals, but only now can we explain why: digital technologies have made it possible to parse the information we generate by being social in new, imaginative ways. And yet we must look at that data, González-Bailón argues, through the lens of theories that capture the nature of social life. The technologies we use, in the end, are also a manifestation of the social world we inhabit.

González-Bailón discusses how the unpredictability of social life relates to communication networks, social influence, and the unintended effects that derive from individual decisions. She describes how communication generates social dynamics in aggregate (leading to episodes of “collective effervescence”) and discusses the mechanisms that underlie large-scale diffusion, when information and behavior spread “like wildfire.” She applies the theory of networks to illuminate why collective outcomes can differ drastically even when they arise from the same individual actions. By opening the black box of unintended effects, González-Bailón identifies strategies for social intervention and discusses the policy implications—and how data science and evidence-based research embolden critical thinking in a world that is constantly changing….(More)”.

How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem


Paper by Amanda Levendowski: “As the use of artificial intelligence (AI) continues to spread, we have seen an increase in examples of AI systems reflecting or exacerbating societal bias, from racist facial recognition to sexist natural language processing. These biases threaten to overshadow AI’s technological gains and potential benefits. While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often-homogenous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright.

Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, it examines how a different part of copyright law — the fair use doctrine — has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use ultimately align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems….(More)”.

On the cultural ideology of Big Data


Nathan Jurgenson in The New Inquiry: “Modernity has long been obsessed with, perhaps even defined by, its epistemic insecurity, its grasping toward big truths that ultimately disappoint as our world grows only less knowable. New knowledge and new ways of understanding simultaneously produce new forms of nonknowledge, new uncertainties and mysteries. The scientific method, based in deduction and falsifiability, is better at proliferating questions than it is at answering them. For instance, Einstein’s theories about the curvature of space and motion at the quantum level provide new knowledge and generate new unknowns that previously could not be pondered.

Since every theory destabilizes as much as it solidifies in our view of the world, the collective frenzy to generate knowledge creates at the same time a mounting sense of futility, a tension looking for catharsis — a moment in which we could feel, if only for an instant, that we know something for sure. In contemporary culture, Big Data promises this relief.

As the name suggests, Big Data is about size. Many proponents of Big Data claim that massive databases can reveal a whole new set of truths because of the unprecedented quantity of information they contain. But the big in Big Data is also used to denote a qualitative difference — that aggregating a certain amount of information makes data pass over into Big Data, a “revolution in knowledge,” to use a phrase thrown around by startups and mass-market social-science books. Operating beyond normal science’s simple accumulation of more information, Big Data is touted as a different sort of knowledge altogether, an Enlightenment for social life reckoned at the scale of masses.

As with the similarly inferential sciences like evolutionary psychology and pop-neuroscience, Big Data can be used to give any chosen hypothesis a veneer of science and the unearned authority of numbers. The data is big enough to entertain any story. Big Data has thus spawned an entire industry (“predictive analytics”) as well as reams of academic, corporate, and governmental research; it has also sparked the rise of “data journalism” like that of FiveThirtyEight, Vox, and the other multiplying explainer sites. It has shifted the center of gravity in these fields not merely because of its grand epistemological claims but also because it’s well-financed. Twitter, for example, recently announced that it is putting $10 million into a “social machines” Big Data laboratory.

The rationalist fantasy that enough data can be collected with the “right” methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular. The term comes from Positive Philosophy (1830-1842), by Auguste Comte, who also coined the term sociology in this image. As Western sociology began to congeal as a discipline (departments, paid jobs, journals, conferences), Émile Durkheim, another of the field’s founders, believed it could function as a “social physics” capable of outlining “social facts” akin to the measurable facts that could be recorded about the physical properties of objects. It’s an arrogant view, in retrospect — one that aims for a grand, general theory that can explain social life, a view that became increasingly rooted as sociology became focused on empirical data collection.

A century later, that unwieldy aspiration has been largely abandoned by sociologists in favor of reorienting the discipline toward recognizing complexities rather than pursuing universal explanations for human sociality. But the advent of Big Data has resurrected the fantasy of a social physics, promising a new data-driven technique for ratifying social facts with sheer algorithmic processing power…(More)”

Policy Analytics, Modelling, and Informatics


Book edited by J. Ramon Gil-Garcia, Theresa A. Pardo and Luis F. Luna-Reyes: “This book provides a comprehensive approach to the study of policy analytics, modelling and informatics. It includes theories and concepts for understanding tools and techniques used by governments seeking to improve decision making through the use of technology, data, modelling, and other analytics, and provides relevant case studies and practical recommendations. Governments around the world face policy issues that require strategies and solutions using new technologies, new access to data and new analytical tools and techniques such as computer simulation, geographic information systems, and social network analysis for the successful implementation of public policy and government programs. Chapters include cases, concepts, methodologies, theories, experiences, and practical recommendations on data analytics and modelling for public policy and practice, and address a diversity of data tools, applied to different policy stages in several contexts, and levels and branches of government. This book will be of interest to researchers, students, and practitioners in e-government, public policy, public administration, policy analytics and policy informatics….(More)”.

Handbook on Political Trust


Book edited by Sonja Zmerli and Tom W.G. van der Meer: “Political trust – of citizens in government, parliament or political parties – has been centre stage in political science for more than half a century, reflecting ongoing concerns about the legitimacy of representative democracy. This Handbook offers the first truly global perspective on political trust and integrates the conceptual, theoretical, methodological, and empirical state of the art.

An impressive, international body of expert scholars explores established and new avenues of research, taking stock of levels, trends, explanations and implications of political trust, and relating them to regional particularities across the globe. Along with a wealth of genuine empirical analyses, this Handbook also features the latest developments in personality, cognitive and emotional research and discusses not only the relevance but also the ‘dark side’ of political trust….(More)”.

Selected Readings on Blockchain and Identity


By Hannah Pierce and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of blockchain and identity was originally published in 2017.

The potential of blockchain and other distributed ledger technologies to create positive social change has inspired enthusiasm, broad experimentation, and some skepticism. In this edition of the Selected Readings series, we explore and curate the literature on blockchain and how it impacts identity as a means to access services and rights. (In a previous edition we considered the Potential of Blockchain for Transforming Governance).

Introduction

In 2008, an unknown source calling itself Satoshi Nakamoto released a paper titled “Bitcoin: A Peer-to-Peer Electronic Cash System,” which introduced the blockchain. Blockchain is a novel technology that uses a distributed ledger to record transactions and ensure compliance. Blockchain and other distributed ledger technologies (DLTs) rely on an ability to act as a vast, transparent, and secure public database.
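To make concrete why such a ledger is tamper-evident, here is a minimal single-node sketch in Python; real systems add a consensus mechanism (such as proof of work) across many independent nodes, which is what makes the ledger distributed.

```python
# Toy sketch of the core blockchain mechanism: every block embeds the
# hash of its predecessor, so altering any past record is detectable.
import hashlib
import json

def block_hash(block):
    content = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(
        json.dumps(content, sort_keys=True).encode()
    ).hexdigest()

def make_block(transactions, prev_hash):
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents were altered after the fact
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["alice pays bob 5"], chain[-1]["hash"]))
print(chain_is_valid(chain))              # True
chain[0]["transactions"] = ["tampered!"]  # try to rewrite history
print(chain_is_valid(chain))              # False: tampering is detected
```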

DLTs have disruptive potential beyond innovation in products, services, revenue streams and operating systems within industry. By providing transparency and accountability in new and distributed ways, DLTs have the potential to positively empower underserved populations in myriad ways, including providing a means for establishing a trusted digital identity.

Consider the potential of DLTs for the 2.4 billion people worldwide, about 1.5 billion of whom are over the age of 14, who are unable to prove identity to the satisfaction of authorities and other organizations, and who are often excluded from property ownership, free movement, and social protection as a result. At the same time, the transition to a DLT-led system of ID management involves various risks that, if not understood and mitigated properly, could harm potential beneficiaries.

Annotated Selected Reading List

Governance

Cuomo, Jerry, Richard Nash, Veena Pureswaran, Alan Thurlow, Dave Zaharchuk. “Building trust in government: Exploring the potential of blockchains.” IBM Institute for Business Value. January 2017.

This paper from the IBM Institute for Business Value culls findings from surveys conducted with over 200 government leaders in 16 countries regarding their experiences and expectations for blockchain technology. The report also identifies “Trailblazers”, or governments that expect to have blockchain technology in place by the end of the year, and details the views and approaches that these early adopters are taking to ensure the success of blockchain in governance. These Trailblazers also believe that there will be high yields from utilizing blockchain in identity management and that citizen services, such as voting, tax collection and land registration, will become increasingly dependent upon decentralized and secure identity management systems. Additionally, some of the Trailblazers are exploring blockchain application in borderless services, like cross-province or state tax collection, because the technology removes the need for intermediaries like notaries or lawyers to verify identities and the authenticity of transactions.

Mattila, Juri. “The Blockchain Phenomenon: The Disruptive Potential of Distributed Consensus Architectures.” Berkeley Roundtable on the International Economy. May 2016.

This working paper gives a clear introduction to blockchain terminology, architecture, challenges, applications (including use cases), and implications for digital trust, disintermediation, democratizing the supply chain, an automated economy, and the reconfiguration of regulatory capacity. As far as identification management is concerned, Mattila argues that blockchain can remove the need to go through a trusted third party (such as a bank) to verify identity online. This could strengthen the security of personal data, as the move from a centralized intermediary to a decentralized network lowers the risk of a mass data security breach. In addition, using blockchain technology for identity verification allows for a more standardized documentation of identity which can be used across platforms and services. In light of these potential capabilities, Mattila addresses the disruptive power of blockchain technology on intermediary businesses and regulating bodies.

Identity Management Applications

Allen, Christopher. “The Path to Self-Sovereign Identity.” Coindesk. April 27, 2016.

In this Coindesk article, author Christopher Allen lays out the history of digital identities, then explains the concept of a “self-sovereign” identity, where trust is enabled without compromising individual privacy. His ten principles for self-sovereign identity (Existence, Control, Access, Transparency, Persistence, Portability, Interoperability, Consent, Minimization, and Protection) lend themselves to blockchain technology for administration. Although there are actors making moves toward the establishment of self-sovereign identity, a few challenges face its widespread implementation, including legal risks, confidentiality issues, immature technology, and a reluctance to change established processes.

Jacobovitz, Ori. “Blockchain for Identity Management.” Department of Computer Science, Ben-Gurion University. December 11, 2016.

This technical report discusses the advantages of blockchain technology in managing and authenticating identities online, such as the ability for individuals to create and manage their own online identities, which offers greater control over access to personal data. Using blockchain for identity verification could also afford the potential of “digital watermarks” assigned to each of an individual’s transactions, as well as eliminating the need for unique usernames and passwords online. After arguing that this decentralized model will allow individuals to manage data on their own terms, Jacobovitz provides a list of companies, projects, and movements that are using blockchain for identity management.
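One way to read the “digital watermark” idea is as a per-transaction cryptographic signature under the individual’s own key. Here is a hypothetical sketch using the third-party cryptography package; the scheme below is our illustration, not a construction from the report.

```python
# Hypothetical sketch: each transaction carries a "digital watermark",
# modeled here as an Ed25519 signature from the individual's own key.
# Requires the third-party package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The individual creates and controls their own identity key pair.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

transaction = b"alice grants clinic read-access to record 42"
watermark = private_key.sign(transaction)  # only the key holder can make this

# Anyone holding the public key can verify the transaction's origin.
try:
    public_key.verify(watermark, transaction)
    print("watermark valid: transaction attributable to the key holder")
except InvalidSignature:
    print("watermark invalid: transaction not from this identity")
```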

Mainelli, Michael. “Blockchain Will Help Us Prove Our Identities in a Digital World.” Harvard Business Review. March 16, 2017.

In this Harvard Business Review article, author Michael Mainelli highlights a solution to identity problems for rich and poor alike: mutual distributed ledgers (MDLs), or blockchain technology. These multi-organizational databases with unalterable ledgers and a “super audit trail” involve three parties in digital document exchanges: subjects, the individuals or assets concerned; certifiers, the organizations that verify identity; and inquisitors, the entities that conduct know-your-customer (KYC) checks on the subject. This system would allow for a low-cost, secure, and global method of proving identity. After outlining some of the other benefits this technology may have in creating secure and easily auditable digital documents, such as the greater tolerance that comes from viewing widely public ledgers, Mainelli asks whether these capabilities will turn out to be a boon or a burden to bureaucracy and societal behavior.
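For a rough sense of how those three roles could fit together, here is a hypothetical Python simplification (the role names follow the article; the ledger structure and functions are our own). Note that only document fingerprints need to go on the shared ledger, so an inquisitor can check a document without the ledger exposing its contents.

```python
# Illustrative sketch of the three MDL roles; hypothetical, not a real
# mutual-distributed-ledger implementation.
import hashlib

ledger = []  # append-only list standing in for the shared ledger

def certify(subject_id, document, certifier):
    """A certifier vouches for a subject's document by recording its hash."""
    entry = {
        "subject": subject_id,
        "certifier": certifier,
        "doc_hash": hashlib.sha256(document.encode()).hexdigest(),
    }
    ledger.append(entry)
    return entry

def kyc_check(subject_id, document):
    """An inquisitor runs a KYC check: is there a matching certified record?"""
    doc_hash = hashlib.sha256(document.encode()).hexdigest()
    return [e for e in ledger
            if e["subject"] == subject_id and e["doc_hash"] == doc_hash]

certify("alice", "passport-no-123", certifier="registry-office")
print(kyc_check("alice", "passport-no-123"))  # match: identity certified
print(kyc_check("alice", "passport-no-999"))  # no match: check fails
```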

Personal Data Security Applications

Banafa, Ahmed. “How to Secure the Internet of Things (IoT) with Blockchain.” Datafloq. August 15, 2016.

This article details the data security risks emerging as the Internet of Things continues to expand, and how blockchain technology can protect the personal data and identity information exchanged between devices. Banafa argues that, as the creation and collection of data is central to the functions of Internet of Things devices, there is an increasing need to better secure data that is largely confidential and often personally identifiable. Decentralizing IoT networks, then securing their communications with blockchain, can allow them to remain scalable, private, and reliable. Blockchain’s peer-to-peer, trustless communication may also enable smart devices to initiate personal data exchanges like financial transactions, as centralized authorities or intermediaries will not be necessary.

Shrier, David, Weige Wu and Alex Pentland. “Blockchain & Infrastructure (Identity, Data Security).” Massachusetts Institute of Technology. May 17, 2016.

This paper, the third of a four-part series on potential blockchain applications, covers the potential of blockchains to change the status quo of identity authentication systems, privacy protection, transaction monitoring, ownership rights, and data security. The paper also posits that, as personal data becomes more and more valuable, we should move towards a “New Deal on Data,” which provides individuals with data protection (through blockchain technology) and the option to contribute their data to aggregates that work towards the common good. In order to achieve this New Deal on Data, robust regulatory standards and financial incentives must be provided to entice individuals to share their data to benefit society.

Cross-sector Collaboration in Data Science for Social Good: Opportunities, Challenges, and Open Questions Raised by Working with Academic Researchers


Paper presented by Anissa Tanweer and Brittany Fiore-Gartland at the Data Science for Social Good Conference: “Recent years have seen growing support for attempts to solve complex social problems through the use of increasingly available, increasingly combinable, and increasingly computable digital data. Sometimes referred to as “data science for social good” (DSSG), these efforts are not concentrated in the hands of any one sector of society. Rather, we see DSSG emerging as an inherently multi-sector and collaborative phenomenon, with key participants hailing from governments, nonprofit organizations, technology companies, and institutions of higher education. Based on three years of participant observation in a university-hosted DSSG program, in this paper we highlight academic contributions to multi-sector DSSG collaborations, including expertise, labor, ethics, experimentation, and neutrality. After articulating both the opportunities and challenges that accompany those contributions, we pose some key open questions that demand attention from participants in DSSG programs and projects. Given the emergent nature of the DSSG phenomenon, it is our contention that how these questions come to be answered will have profound implications for the way society is organized and governed….(More)”.

Let’s create a nation of social scientists


Geoff Mulgan in Times Higher Education: “How might social science become more influential, more relevant and more useful in the years to come?

Recent debates about impact have largely assumed a model of social science in which a cadre of specialists, based in universities, analyse and interpret the world and then feed conclusions into an essentially passive society. But a very different view sees specialists in the academy working much more in partnership with a society that is itself skilled in social science, able to generate hypotheses, gather data, experiment and draw conclusions that might help to answer the big questions of our time, from the sources of inequality to social trust, identity to violence.

There are some powerful trends to suggest that this second view is gaining traction. The first of these is the extraordinary explosion of new ways to observe social phenomena. Every day each of us leaves behind a data trail of who we talk to, what we eat and where we go. It’s easier than ever to survey people, to spot patterns, to scrape the web or to pick up data from sensors. It’s easier than ever to gather perceptions and emotions as well as material facts and easier than ever for organisations to practice social science – whether investment organisations analysing market patterns, human resources departments using behavioural science, or local authorities using ethnography.

That deluge of data is a big enough shift on its own. However, it is also now being used to feed interpretive and predictive tools that use artificial intelligence to predict who is most likely to go to hospital or end up in prison, and which relationships are most likely to end in divorce.

Governments are developing their own predictive tools, and have also become much more interested in systematic experimentation, with Finland and Canada in the lead, moving us closer to Karl Popper’s vision of “methods of trial and error, of inventing hypotheses which can be practically tested…”…

The second revolution is less visible but could be no less profound. This is the hunger of many people to be creators of knowledge, not just users; to be part of a truly collective intelligence. At the moment this shift towards mass engagement in knowledge is most visible in neighbouring fields. Digital humanities mobilise many volunteers to input data and interpret texts – for example making ancient Arabic texts machine-readable. Even more striking is the growth of citizen science – eBird had 1.5 million reports last January; some 1.5 million people in the US monitor river streams and lakes, and SETI@home has 5 million volunteers. Thousands of patients also take part in funding and shaping research on their own conditions….

We’re all familiar with the old idea that it’s better to teach a man to fish than just to give him fish. In essence these trends ask us a simple question: why not apply the same logic to social science, and why not reorient social sciences to enhance the capacity of society itself to observe, analyse and interpret?…(More)”.