What Would More Democratic A.I. Look Like?


Blog post by Andrew Burgess: “Something curious is happening in Finland. Though much of the global debate around artificial intelligence (A.I.) has become concerned with unaccountable, proprietary systems that could control our lives, the Finnish government has instead decided to embrace the opportunity by rolling out a nationwide educational campaign.

Conceived in 2017, shortly after Finland’s A.I. strategy was announced, the program is part of the government’s plan to rebuild the country’s economy around the high-end opportunities of artificial intelligence, and aims to train 1 percent of the population — that’s 55,000 people — in the basics of A.I. “We’ll never have so much money that we will be the leader of artificial intelligence,” said economic affairs minister Mika Lintilä at the launch. “But how we use it — that’s something different.”

Artificial intelligence can have many positive applications, from identifying cancerous cells in biopsy screenings, to predicting weather patterns that can help farmers increase their crop yields, to improving traffic efficiency.

But some believe that A.I. expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial. Finland’s approach of making A.I. accessible and understandable to its citizens is part of a broader movement of people who want to democratize the technology, putting utility and opportunity ahead of profit.

This shift toward “democratic A.I.” has three main principles: that all of society will be impacted by A.I. and therefore its creators have a responsibility to build open, fair, and explainable A.I. services; that A.I. should be used for social benefit and not just for private profit; and that because A.I. learns from vast quantities of data, the citizens who create that data — about their shopping habits, health records, or transport needs — have a right to a say in, and an understanding of, how it is used.

A growing movement across industry and academia believes that A.I. needs to be treated like any other “public awareness” program — just like the scheme rolled out in Finland….(More)”.

Data Trusts as an AI Governance Mechanism


Paper by Chris Reed and Irene YH Ng: “This paper is a response to the Singapore Personal Data Protection Commission consultation on a draft AI Governance Framework. It analyses the five data trust models proposed by the UK Open Data Institute and identifies that only the contractual and corporate models are likely to be legally suitable for achieving the aims of a data trust.

The paper further explains how data trusts might be used in the governance of AI, and investigates the barriers which Singapore’s data protection law presents to the use of data trusts and how those barriers might be overcome. Its conclusion is that a mixed contractual/corporate model, with an element of regulatory oversight and audit to ensure consumer confidence that data is being used appropriately, could produce a useful AI governance tool…(More)”.

Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again


Book by Eric Topol: “Medicine has become inhuman, to disastrous effect. The doctor-patient relationship–the heart of medicine–is broken: doctors are too distracted and overwhelmed to truly connect with their patients, and medical errors and misdiagnoses abound. In Deep Medicine, leading physician Eric Topol reveals how artificial intelligence can help. AI has the potential to transform everything doctors do, from notetaking and medical scans to diagnosis and treatment, greatly cutting down the cost of medicine and reducing human mortality. By freeing physicians from the tasks that interfere with human connection, AI will create space for the real healing that takes place between a doctor who can listen and a patient who needs to be heard. Innovative, provocative, and hopeful, Deep Medicine shows us how the awesome power of AI can make medicine better, for all the humans involved….(More)”.

Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems


Introduction by A.F. Winfield, K. Michael, J. Pitt, and V. Evers to a Special Issue of the Proceedings of the IEEE: “…The primary focus of this special issue is machine ethics — that is, the question of how autonomous systems can be imbued with ethical values. Ethical autonomous systems are needed because, inevitably, near-future systems are moral agents; consider driverless cars, or medical diagnosis AIs, both of which will need to make choices with ethical consequences. This special issue includes papers that describe both implicit ethical agents, that is machines designed to avoid unethical outcomes, and explicit ethical agents: machines which either encode or learn ethics and determine actions based on those ethics. Of course, ethical machines are socio-technical systems; thus, as a secondary focus, this issue includes papers that explore the societal and regulatory implications of machine ethics, including the question of ethical governance. Ethical governance is needed in order to develop standards and processes that allow us to transparently and robustly assure the safety of ethical autonomous systems and hence build public trust and confidence…(More)”.
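As a deliberately simplified illustration of what an “explicit ethical agent” in the sense above might look like — the driving scenario, the single encoded rule, and the utility numbers are invented for this note, and real systems are far more involved:

```python
# Simplified sketch of an "explicit ethical agent": candidate actions are
# scored for task utility, but an encoded ethical rule vetoes some of them
# before selection. Scenario, rule, and numbers are invented for illustration.
candidate_actions = [
    {"name": "swerve_onto_sidewalk", "utility": 0.9, "harms_human": True},
    {"name": "brake_hard",           "utility": 0.7, "harms_human": False},
    {"name": "maintain_course",      "utility": 0.4, "harms_human": False},
]

def ethically_permitted(action):
    """Encoded ethical rule: never choose an action expected to harm a human."""
    return not action["harms_human"]

# Filter out impermissible actions first, then maximise utility over the rest.
permitted = [a for a in candidate_actions if ethically_permitted(a)]
chosen = max(permitted, key=lambda a: a["utility"])
print(chosen["name"])  # -> brake_hard
```

An implicit ethical agent, by contrast, would simply never have the harmful option in its action space in the first place; the explicit agent represents the ethical constraint and reasons over it.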

Regulating disinformation with artificial intelligence


Paper for the European Parliamentary Research Service: “This study examines the consequences of the increasingly prevalent use of artificial intelligence (AI) in counter-disinformation initiatives upon freedom of expression, pluralism and the functioning of a democratic polity. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.

Chapter 1 introduces the background to the study and presents the definitions used. Chapter 2 scopes the policy boundaries of disinformation from economic, societal and technological perspectives, focusing on the media context, behavioural economics and technological regulation. Chapter 3 maps and evaluates existing regulatory and technological responses to disinformation. In Chapter 4, policy options are presented, paying particular attention to interactions between technological solutions, freedom of expression and media pluralism….(More)”.
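To make the term “automated content recognition” concrete, here is a very rough illustration of one simple form of it — matching uploads against already-identified items, with a human in the loop. The example is an assumption of this note, not any platform’s actual system; real ACR pipelines also use fuzzy/perceptual matching and learned classifiers.

```python
# Minimal sketch of hash-based content recognition with human review.
# The hash list and routing policy are illustrative assumptions only.
import hashlib

known_disinfo_hashes = {
    hashlib.sha256(b"example of an already-debunked claim").hexdigest(),
}

def review_upload(text: str) -> str:
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest in known_disinfo_hashes:
        return "flag_for_human_review"  # automation assists, a person decides
    return "publish"

print(review_upload("example of an already-debunked claim"))  # flagged
print(review_upload("an unrelated post"))                      # published
```

The regulatory questions the study raises — accountability, transparency, appeal — attach precisely to design choices like the one in the last branch: whether a match triggers automatic removal or human review.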

Is Ethical A.I. Even Possible?


Cade Metz at The New York Times: “When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation....

As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.

All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.

As some Microsoft employees protested the company’s military contracts, Brad Smith, Microsoft’s president, said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told the conference. “We have to stand by the people who are risking their lives.”

Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons will ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.

Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.

After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests….

In their open letter, the Clarifai employees said they were unsure whether regulation was the answer to the many ethical questions swirling around A.I. technology, arguing that the immediate responsibility rested with the company itself….(More)”.

Algorithmic fairness: A code-based primer for public-sector data scientists


Paper by Ken Steif and Sydney Goldstein: “As the number of government algorithms grows, so does the need to evaluate algorithmic fairness. This paper has three goals. First, we ground the notion of algorithmic fairness in the context of disparate impact, arguing that for an algorithm to be fair, its predictions must generalize across different protected groups. Next, two algorithmic use cases are presented with code examples for how to evaluate fairness. Finally, we promote the concept of an open source repository of government algorithmic “scorecards,” allowing stakeholders to compare across algorithms and use cases….(More)”.
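The paper’s use cases come with their own code; as a rough illustration of the kind of check it describes — whether a model’s errors generalize across protected groups — here is a minimal sketch in Python. The column names and toy data are assumptions of this note, not the authors’ code.

```python
# Minimal "scorecard" sketch: per-group accuracy and error rates for a
# binary classifier. Column names ("group", "label", "pred") are illustrative.
import pandas as pd

def fairness_scorecard(df, group_col="group", label_col="label", pred_col="pred"):
    """Return per-group accuracy, false-positive and false-negative rates."""
    rows = []
    for group, g in df.groupby(group_col):
        tp = ((g[pred_col] == 1) & (g[label_col] == 1)).sum()
        tn = ((g[pred_col] == 0) & (g[label_col] == 0)).sum()
        fp = ((g[pred_col] == 1) & (g[label_col] == 0)).sum()
        fn = ((g[pred_col] == 0) & (g[label_col] == 1)).sum()
        rows.append({
            group_col: group,
            "n": len(g),
            "accuracy": (tp + tn) / len(g),
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
            "false_negative_rate": fn / (fn + tp) if (fn + tp) else float("nan"),
        })
    return pd.DataFrame(rows)

# A model whose errors do not generalize across groups shows markedly
# different rates between the rows of this scorecard.
toy = pd.DataFrame({
    "group": ["a"] * 6 + ["b"] * 6,
    "label": [1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0],
    "pred":  [1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0],
})
print(fairness_scorecard(toy))
```

A public repository of such scorecards, as the authors propose, would let stakeholders compare these disparities across algorithms and jurisdictions.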

Governance of artificial intelligence and personal health information


Jenifer Sunrise Winter in Digital Policy, Regulation and Governance: “This paper aims to assess the increasing challenges to governing the personal health information (PHI) essential for advancing artificial intelligence (AI) machine learning innovations in health care. Risks to privacy and justice/equity are discussed, along with potential solutions….

This paper argues that these characteristics of machine learning will overwhelm existing data governance approaches such as privacy regulation and informed consent. Enhanced governance techniques and tools will be required to help preserve the autonomy and rights of individuals to control their PHI. Debate among all stakeholders and informed critique of how, and for whom, PHI-fueled health AI systems are developed and deployed are needed to channel these innovations in societally beneficial directions.

Health data may be used to address pressing societal concerns, such as operational and system-level improvement, and innovations such as personalized medicine. This paper informs work seeking to harness these resources for societal good amidst many competing value claims and substantial risks for privacy and security….(More)”.

Claudette: an automated detector of potentially unfair clauses in online terms of service


Marco Lippi et al. in Artificial Intelligence and Law: “Terms of service of on-line platforms too often contain clauses that are potentially unfair to the consumer. We present an experimental study where machine learning is employed to automatically detect such potentially unfair clauses. Results show that the proposed system could provide a valuable tool for lawyers and consumers alike….(More)”.
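As a rough sketch of the general approach — clause-level text classification — rather than the authors’ actual system: the pipeline choice, toy clauses, and labels below are illustrative assumptions only.

```python
# Minimal sketch of clause-level classification for potentially unfair terms.
# Toy clauses and labels are invented for illustration; real systems train on
# a corpus of annotated terms-of-service documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

clauses = [
    "The provider may terminate your account at any time without notice.",
    "You can delete your account whenever you wish.",
    "Any dispute will be resolved exclusively by arbitration chosen by the provider.",
    "We will notify you thirty days before any change to these terms.",
]
labels = [1, 0, 1, 0]  # 1 = potentially unfair, 0 = not flagged

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(clauses, labels)

# Flag a new clause for a lawyer or consumer to review.
print(model.predict(["The company may change these terms at its sole discretion."]))
```

The value of such a tool lies less in the final verdict than in surfacing candidate clauses for human review, which is how the paper frames its usefulness for lawyers and consumers.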

Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice


Paper by Rashida Richardson, Jason Schultz, and Kate Crawford: “Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used, and occasionally even access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.

In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data used to train or implement these systems across thirteen case studies. We highlight three of these: (1) Chicago, an example where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices suggests an extremely high risk that dirty data was or will be used in any predictive policing application; and (3) Maricopa County, where, despite extensive evidence of dirty policing practices, a lack of transparency and public accountability surrounding predictive policing inhibits the public from assessing the risks of dirty data within such systems. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. Thus, for any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with skepticism and mechanisms for the public to examine and reject such systems are imperative….(More)”.
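To see why the feedback-loop concern matters, here is a toy simulation — an assumption of this summary, not taken from the paper — showing how a skew in “dirty” historical records can persist indefinitely when patrols are allocated according to those records, even though the underlying incident rates in the two areas are identical.

```python
# Toy feedback-loop simulation (illustrative numbers only): two areas with
# identical underlying incident levels, but historical records skewed by past
# over-policing. Patrols follow the recorded data; detection follows the
# patrols; the skew never corrects itself.
true_incidents = [500, 500]   # same real level of activity in both areas
recorded = [120.0, 80.0]      # "dirty" historical data over-counts area 0

for year in range(1, 6):
    total = sum(recorded)
    patrol_share = [r / total for r in recorded]            # allocate by the data
    detected = [true_incidents[i] * 2 * patrol_share[i]     # detection scales with presence
                for i in range(2)]
    recorded = [recorded[i] + detected[i] for i in range(2)]
    print(f"year {year}: recorded share for area 0 = {recorded[0] / sum(recorded):.2f}")
```

In this sketch the over-policed area keeps roughly 60 percent of recorded incidents every year, even though the true rates are equal; more aggressive allocation rules would amplify rather than merely preserve the skew, which is the dynamic the authors describe.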