Book by Eric Topol: “Medicine has become inhuman, to disastrous effect. The doctor-patient relationship, the heart of medicine, is broken: doctors are too distracted and overwhelmed to truly connect with their patients, and medical errors and misdiagnoses abound. In Deep Medicine, leading physician Eric Topol reveals how artificial intelligence can help. AI has the potential to transform everything doctors do, from notetaking and medical scans to diagnosis and treatment, greatly cutting down the cost of medicine and reducing human mortality. By freeing physicians from the tasks that interfere with human connection, AI will create space for the real healing that takes place between a doctor who can listen and a patient who needs to be heard. Innovative, provocative, and hopeful, Deep Medicine shows us how the awesome power of AI can make medicine better, for all the humans involved…(More)”.
Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems
Introduction by A.F. Winfield, K. Michael, J. Pitt, V. Evers of Special Issue of Proceedings of the IEEE: “…The primary focus of this special issue is machine ethics, that is, the question of how autonomous systems can be imbued with ethical values. Ethical autonomous systems are needed because, inevitably, near-future systems are moral agents; consider driverless cars, or medical diagnosis AIs, both of which will need to make choices with ethical consequences. This special issue includes papers that describe both implicit ethical agents, that is, machines designed to avoid unethical outcomes, and explicit ethical agents: machines which either encode or learn ethics and determine actions based on those ethics…(More)”.
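To make the implicit/explicit distinction concrete, here is a minimal, hypothetical sketch of an explicit ethical agent: candidate actions are checked against an encoded ethical rule before one is selected. The class, threshold, and rule below are illustrative assumptions, not drawn from the special issue.

```python
# Minimal sketch of an "explicit ethical agent": candidate actions are
# checked against an encoded ethical rule before one is chosen.
# All names, numbers, and rules here are illustrative, not from the issue.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_benefit: float   # task utility, e.g. time saved
    harm_risk: float          # estimated probability of harming a human

HARM_THRESHOLD = 0.01  # hypothetical rule: reject actions risking harm above 1%

def ethically_permissible(action: Action) -> bool:
    """Encoded ethical rule: never accept an action whose estimated
    risk of human harm exceeds the threshold."""
    return action.harm_risk <= HARM_THRESHOLD

def choose_action(candidates: list[Action]) -> Action | None:
    """Filter out impermissible actions first, then maximize task utility."""
    permitted = [a for a in candidates if ethically_permissible(a)]
    if not permitted:
        return None  # fail safe: do nothing rather than act unethically
    return max(permitted, key=lambda a: a.expected_benefit)

if __name__ == "__main__":
    options = [
        Action("swerve_onto_sidewalk", expected_benefit=0.9, harm_risk=0.30),
        Action("brake_hard", expected_benefit=0.6, harm_risk=0.005),
    ]
    chosen = choose_action(options)
    print("chosen:", chosen.name if chosen else "no permissible action")
```

An implicit ethical agent, by contrast, would build the same constraint into the system’s design so that unethical outcomes are never reachable, rather than reasoning about them at run time.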
Regulating disinformation with artificial intelligence
Paper for the European Parliamentary Research Service: “This study examines the consequences of the increasingly prevalent use of artificial intelligence (AI) in disinformation initiatives upon freedom of expression, pluralism and the functioning of a democratic polity. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.
Chapter 1 introduces the background to the study and presents the definitions used. Chapter 2 scopes the policy boundaries of disinformation from economic, societal and technological perspectives, focusing on the media context…(More)”.
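One way to picture those trade-offs is a hypothetical ACR triage step that acts automatically only at high confidence, routes borderline items to human review, and logs every decision for later audit. The scoring function, thresholds, and field names below are invented for illustration; the study prescribes no particular implementation.

```python
# Hypothetical sketch of an automated content recognition (ACR) triage step.
# The point is the governance structure: high-confidence automation only,
# human review for borderline cases, and an audit log for transparency.
# The score function and thresholds are stand-ins, not a real model.

import json
import time

AUTO_FLAG_THRESHOLD = 0.95   # act automatically only when very confident
REVIEW_THRESHOLD = 0.60      # send uncertain cases to a human reviewer

def disinformation_score(text: str) -> float:
    """Stand-in for a trained classifier; returns P(disinformation)."""
    suspicious = ("miracle cure", "they don't want you to know")
    return 0.97 if any(s in text.lower() for s in suspicious) else 0.2

def triage(item_id: str, text: str, audit_log: list) -> str:
    score = disinformation_score(text)
    if score >= AUTO_FLAG_THRESHOLD:
        decision = "auto-flagged"
    elif score >= REVIEW_THRESHOLD:
        decision = "human-review"
    else:
        decision = "published"
    # Every decision is logged so it can be audited and contested later.
    audit_log.append({"item": item_id, "score": round(score, 3),
                      "decision": decision, "timestamp": time.time()})
    return decision

if __name__ == "__main__":
    log: list = []
    print(triage("post-1", "Miracle cure THEY don't want you to know about!", log))
    print(triage("post-2", "City council meets on Tuesday.", log))
    print(json.dumps(log, indent=2))
```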
Is Ethical A.I. Even Possible?
Cade Metz at The New York Times: “When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building…
“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.
As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.
But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.
As companies and governments deploy these A.I. technologies, researchers are also realizing that some systems are woefully biased. Facial recognition services, for instance, can be significantly less accurate when trying to identify women or someone with darker skin. Other systems may include security holes unlike any seen in the past. Researchers have shown that driverless cars can be fooled into seeing things that are not really there.
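The facial recognition finding is straightforward to check in code: measure accuracy per demographic group rather than only in aggregate. The figures below are fabricated purely to illustrate how a respectable overall number can mask a large subgroup gap.

```python
# Toy illustration of why aggregate accuracy can hide subgroup bias:
# compute accuracy per demographic group, not just overall.
# The match results below are fabricated for illustration only.

from collections import defaultdict

# (demographic_group, was_match_correct) for a hypothetical face matcher
results = (
    [("lighter-skinned men", True)] * 95 + [("lighter-skinned men", False)] * 5
    + [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")  # 80% -- looks respectable
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%}")  # 95% vs. 65%
```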
All this means that building ethical artificial intelligence is an enormously complex task. It gets even harder when stakeholders realize that ethics are in the eye of the beholder.
As some Microsoft employees protest the company’s military contracts, the company’s president, Brad Smith, said that American tech companies had long supported the military and that they must continue to do so. “The U.S. military is charged with protecting the freedoms of this country,” he told a conference. “We have to stand by the people who are risking their lives.”
Though some Clarifai employees draw an ethical line at autonomous weapons, others do not. Mr. Zeiler argued that autonomous weapons would ultimately save lives because they would be more accurate than weapons controlled by human operators. “A.I. is an essential tool in helping weapons become more accurate, reducing collateral damage, minimizing civilian casualties and friendly fire incidents,” he said in a statement.
Google worked on the same Pentagon project as Clarifai, and after a protest from company employees, the tech giant ultimately ended its involvement. But like Clarifai, as many as 20 other companies have worked on the project without bowing to ethical concerns.
After the controversy over its Pentagon work, Google laid down a set of “A.I. principles” meant as a guide for future projects. But even with the corporate rules in place, some employees left the company in protest. The new principles are open to interpretation. And they are overseen by executives who must also protect the company’s financial interests….
In their open letter, the Clarifai employees said they were unsure whether…(More)”.
Algorithmic fairness: A code-based primer for public-sector data scientists
Paper by Ken Steif and Sydney Goldstein: “As the number of government algorithms grows, so does the need to evaluate them for fairness…(More)”.
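The paper itself is code-based; as a flavor of the kind of diagnostic such a primer covers, here is a sketch, on invented data rather than the authors’ code, of one widely used fairness check: comparing false positive rates across groups.

```python
# Sketch of a common algorithmic-fairness diagnostic: compare error rates
# (here, false positive rates) across groups. The labels and predictions
# are invented; this shows the shape of the check, not the paper's code.

def false_positive_rate(labels, preds) -> float:
    """FPR = false positives / actual negatives."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

# Hypothetical risk-score outputs for two groups (1 = flagged as high risk)
group_a = {"labels": [0, 0, 0, 0, 1, 1, 0, 0], "preds": [1, 0, 1, 0, 1, 1, 0, 1]}
group_b = {"labels": [0, 0, 0, 0, 1, 1, 0, 0], "preds": [0, 0, 0, 1, 1, 1, 0, 0]}

fpr_a = false_positive_rate(group_a["labels"], group_a["preds"])
fpr_b = false_positive_rate(group_b["labels"], group_b["preds"])
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
if fpr_b:
    # A disparity ratio far from 1.0 signals the algorithm errs unevenly.
    print(f"disparity ratio: {fpr_a / fpr_b:.1f}x")
```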
Governance of artificial intelligence and personal health information
Jenifer Sunrise Winter in Digital Policy, Regulation and Governance: “…This paper argues that these characteristics of machine learning will overwhelm existing data governance approaches such as privacy regulation and informed consent. Enhanced governance techniques and tools will be required to help preserve the autonomy and rights of individuals to control their personal health information (PHI). Debate among all stakeholders and informed critique of how, and for whom, PHI-fueled health AI systems are developed and deployed are needed to channel these innovations in societally beneficial directions.
Health data may be used to address pressing societal concerns, such as operational and system-level improvement, and innovations such as personalized medicine. This paper informs work seeking to harness these resources for societal good amidst many competing value claims and substantial risks for privacy and security…(More)”.
Claudette: an automated detector of potentially unfair clauses in online terms of service
Marco Lippi et al. in Artificial Intelligence and Law: “Terms of service of online platforms too often contain clauses that are potentially unfair to the consumer…(More)”.
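The general approach behind a system like Claudette, training a supervised classifier on clauses annotated as potentially unfair, can be sketched as follows. The toy training set and simple pipeline are stand-ins; the authors use expert-annotated terms of service and evaluate considerably richer models.

```python
# Minimal sketch of the general approach behind a Claudette-style detector:
# train a supervised classifier on clauses labeled potentially unfair vs.
# fair, then score new clauses. The tiny dataset is invented; the paper
# relies on expert-annotated terms of service and richer models.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_clauses = [
    "We may terminate your account at any time without notice.",
    "The provider is not liable for any damages whatsoever.",
    "We may change these terms at our sole discretion without notifying you.",
    "Any dispute shall be resolved exclusively by arbitration.",
    "You may cancel your subscription at any time.",
    "We will notify you by email before changes take effect.",
    "You retain ownership of the content you create.",
    "Refunds are available within 30 days of purchase.",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = potentially unfair

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_clauses, labels)

new_clause = "We reserve the right to suspend the service without prior notice."
prob_unfair = model.predict_proba([new_clause])[0][1]
print(f"potentially unfair with probability {prob_unfair:.2f}")
```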
Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
Paper by Rashida Richardson, Jason Schultz, and Kate Crawford: “In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court-monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices…(More)”.
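The feedback mechanism at issue can be simulated in a few lines: if a hotspot predictor is trained on arrest records that reflect over-policing rather than underlying crime, its predictions send patrols back to the same places, which generates still more records there. The numbers below are invented to illustrate that loop and are not from the paper.

```python
# Toy simulation of the "dirty data" feedback loop: two neighborhoods with
# identical true crime rates, but neighborhood A is historically over-policed
# and so has more *recorded* incidents. A predictor trained on records keeps
# sending patrols to A, producing more records there. All numbers invented.

import random

random.seed(0)

TRUE_CRIME_RATE = 0.1            # identical in both neighborhoods
records = {"A": 60, "B": 20}     # A's surplus reflects past over-policing

for day in range(200):
    # "Predictive policing": patrol wherever recorded incidents are highest.
    patrolled = max(records, key=records.get)
    # Incidents are only recorded where police are present to observe them.
    if random.random() < TRUE_CRIME_RATE:
        records[patrolled] += 1

print(records)  # the initial skew is amplified, never corrected
```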
7 things we’ve learned about computer algorithms
Aaron Smith at Pew Research Center: “Algorithms are all around us, using massive stores of data and complex analytics to make decisions with often significant impacts on humans, from choosing the content people see… Among the findings:
- Algorithmically generated content platforms play a prominent role in Americans’ information diets. Sizable shares of U.S. adults now get news on sites like Facebook or YouTube that use algorithms to curate the content they show to their users. A study by the Center found that 81% of YouTube users say they at least occasionally watch the videos suggested by the platform’s recommendation algorithm, and that these recommendations encourage users to watch progressively longer content as they click through the videos suggested by the site.
- The inner workings of even the most common algorithms can be confusing to users. Facebook is among the most popular social media platforms, but roughly half of Facebook users, including six-in-ten users ages 50 and older, say they do not understand how the site’s algorithmically generated news feed selects which posts to show them. And around three-quarters of Facebook users are not aware that the site automatically estimates their interests and preferences based on their online behaviors in order to deliver them targeted advertisements and other content.
- The public is wary of computer algorithms being used to make decisions with real-world consequences. The public expresses widespread concern about companies and other institutions using computer algorithms in situations with potential impacts on people’s lives. More than half (56%) of U.S. adults think it is unacceptable to use automated criminal risk scores when evaluating people who are up for parole. And 68% think it is unacceptable for companies to collect large quantities of data about individuals for the purposes of offering them deals or other financial incentives. When asked to elaborate on their worries, many feel that these programs violate people’s privacy, are unfair, or simply will not work as well as decisions made by humans…(More)”.
Decoding Algorithms
Macalester College: “Ada Lovelace probably didn’t foresee the impact of the mathematical formula she published in 1843, now considered the first computer algorithm.
Nor could she have anticipated today’s widespread use of algorithms, in applications as different as the 2016 U.S. presidential campaign and Mac’s first-year seminar registration. “Over the last decade algorithms have become embedded in every aspect of our lives,” says Shilad Sen, professor in Macalester’s Math, Statistics, and Computer Science (MSCS) Department.
How do algorithms shape our society? Why is it important to be aware of them? And for readers who don’t know, what is an algorithm, anyway?…(More)”.
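For readers asking that last question: an algorithm is a precise, finite recipe a machine can follow. As a nod to Lovelace’s 1843 program, here is a short sketch that computes the Bernoulli numbers her Note G targeted, using a standard modern recurrence rather than her original method.

```python
# A concrete answer to "what is an algorithm, anyway?": a precise, finite
# recipe a machine can follow. As a nod to Lovelace's Note G, this one
# computes Bernoulli numbers via a standard recurrence (not her method).

from fractions import Fraction
from math import comb

def bernoulli(n: int) -> list:
    """Return exact Bernoulli numbers B_0 .. B_n."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        # From the identity sum_{j=0}^{m} C(m+1, j) * B_j = 0, solve for B_m.
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(8))  # B_1 = -1/2; odd-indexed terms vanish after that
```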