A literature review by the Joint Research Centre: “Artificial intelligence has entered into the sphere of creativity and ingenuity. Recent headlines refer to paintings produced by machines, music performed or composed by algorithms or drugs discovered by computer programs. This paper discusses the possible implications of the development and adoption of this new technology in the intellectual property framework and presents the opinions expressed by practitioners and legal scholars in recent publications. The literature review, although not intended to be exhaustive, reveals a series of questions that call for further reflection. These concern the protection of artificial intelligence by intellectual property, the use of data to feed algorithms, the protection of the results generated by intelligent machines as well as the relationship between ethical requirements of transparency and explainability and the interests of rights holders….(More)”.
Machine Learning Shows Social Media Greatly Affects COVID-19 Beliefs
Jessica Kent at HealthITAnalytics: “Using machine learning, researchers found that people’s biases about COVID-19 and its treatments are exacerbated when they read tweets from other users, a study published in JMIR showed.
The analysis also revealed that scientific events, like scientific publications, and non-scientific events, like speeches from politicians, equally influence health belief trends on social media.
The rapid spread of COVID-19 has resulted in an explosion of accurate and inaccurate information related to the pandemic – mainly across social media platforms, researchers noted.
“In the pandemic, social media has contributed to much of the information and misinformation and bias of the public’s attitude toward the disease, treatment and policy,” said corresponding study author Yuan Luo, chief Artificial Intelligence officer at the Institute for Augmented Intelligence in Medicine at Northwestern University Feinberg School of Medicine.
“Our study helps people to realize and re-think the personal decisions that they make when facing the pandemic. The study sends an ‘alert’ to the audience that the information they encounter daily might be right or wrong, and guide them to pick the information endorsed by solid scientific evidence. We also wanted to provide useful insight for scientists or healthcare providers, so that they can more effectively broadcast their voice to targeted audiences.”…(More)”.
Leveraging artificial intelligence to analyze citizens’ opinions on urban green space
Paper by Mohammadhossein Ghahramani, Nadina J. Galle, Fábio Duarte, Carlo Ratti, Francesco Pilla: “Continued population growth and urbanization are shifting research to consider the quality of urban green space (UGS) over the quantity of these parks, woods, and wetlands. The quality of urban green space has been hitherto measured by expert assessments, including in-situ observations, surveys, and remote sensing analyses. Location data platforms, such as TripAdvisor, can provide people’s opinion on many destinations and experiences, including UGS. This paper leverages Artificial Intelligence techniques for opinion mining and text classification using such platforms’ reviews as a novel approach to urban green space quality assessments. Natural Language Processing is used to analyze contextual information given supervised scores of words by implementing computational analysis. Such an application can support local authorities and stakeholders in their understanding of–and justification for–future investments in urban green space….(More)”.
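As a rough illustration of the kind of supervised opinion mining the paper describes (the study's actual pipeline, features, and scoring are not detailed in the excerpt), a minimal Naive Bayes-style classifier over invented park reviews might look like this; all review texts and labels below are made up for the example:

```python
import math
from collections import Counter

def train(labeled_reviews):
    """Accumulate per-class word counts from (text, label) pairs."""
    counts = {0: Counter(), 1: Counter()}  # 0 = negative opinion, 1 = positive
    for text, label in labeled_reviews:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the class with the higher smoothed log-likelihood (Naive Bayes)."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)  # add-one smoothing
        scores[label] = sum(math.log((c[w] + 1) / total)
                            for w in text.lower().split())
    return max(scores, key=scores.get)

# Tiny hypothetical corpus of urban-green-space reviews
reviews = [
    ("beautiful green space with clean paths and lovely trees", 1),
    ("quiet pond and well kept lawns great for walking", 1),
    ("litter everywhere broken benches and felt unsafe", 0),
    ("overgrown dirty and poorly maintained", 0),
]
model = train(reviews)
```

With this toy model, `classify(model, "clean and lovely park")` scores the positive class higher. A production system would replace the word counts with richer NLP features, but the supervised-scoring idea is the same.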
Ethical Machines: The Human-centric Use of Artificial Intelligence
Paper by B. Lepri, N. Oliver, and A. Pentland: “Today’s increased availability of large amounts of human behavioral data and advances in Artificial Intelligence are contributing to a growing reliance on algorithms to make consequential decisions for humans, including those related to access to credit or medical treatments, hiring, etc. Algorithmic decision-making processes might lead to more objective decisions than those made by humans who may be influenced by prejudice, conflicts of interest, or fatigue. However, algorithmic decision-making has been criticized for its potential to lead to privacy invasion, information asymmetry, opacity, and discrimination. In this paper, we describe available technical solutions in three large areas that we consider to be of critical importance to achieve a human-centric AI: (1) privacy and data ownership; (2) accountability and transparency; and (3) fairness. We also highlight the criticality and urgency to engage multi-disciplinary teams of researchers, practitioners, policy makers, and citizens to co-develop and evaluate in the real-world algorithmic decision-making processes designed to maximize fairness, accountability and transparency while respecting privacy….(More)”.
Building trust in AI systems is essential
Editorial Board of the Financial Times: “…Most of the biggest tech companies, which have been at the forefront of the AI revolution, are well aware of the risks of deploying flawed systems at scale. Tech companies publicly acknowledge the need for societal acceptance if their systems are to be trusted. Although historically allergic to government intervention, some industry bosses are even calling for stricter regulation in areas such as privacy and facial recognition technology.
A parallel is often drawn between two conferences held in Asilomar, California, in 1975 and 2017. At the first, a group of biologists, lawyers and doctors created a set of ethical guidelines around research into recombinant DNA. This opened an era of responsible and fruitful biomedical research that has helped us deal with the Covid-19 pandemic today. Inspired by the example, a group of AI experts repeated the exercise 42 years later and came up with an impressive set of guidelines for the beneficial use of the technology.
Translating such high principles into everyday practice is hard, especially when so much money is at stake. But three rules should always apply. First, teams that develop AI systems must be as diverse as possible to reduce the risk of bias. Second, complex AI systems should never be deployed in any field unless they offer a demonstrable improvement on what already exists. Third, algorithms that companies and governments deploy in sensitive areas such as healthcare, education, policing, justice and workplace monitoring should be subject to audit and comprehension by outside experts.
The US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the US Food and Drug Administration to preapprove the use of AI in sensitive areas. Criminal liability for those who deploy irresponsible AI systems might also help concentrate minds.
The AI industry has talked a good game about AI ethics. But if some of the most sophisticated companies in this field cannot even convince their own employees of their good intentions, they will struggle to convince anyone else. That could result in a fierce public backlash against companies using AI. Worse, it may yet impede the real benefits of using AI for societal good in areas such as healthcare. The tech sector has to restore credibility for all our sakes….(More)”
How Humans Judge Machines
Open Access Book by César A. Hidalgo et al.: “How would you feel about losing your job to a machine? How about a tsunami alert system that fails? Would you react differently to acts of discrimination depending on whether they were carried out by a machine or by a human? What about public surveillance? How Humans Judge Machines compares people’s reactions to actions performed by humans and machines. Using data collected in dozens of experiments, this book reveals the biases that permeate human-machine interactions. Are there conditions in which we judge machines unfairly?
Is our judgment of machines affected by the moral dimensions of a scenario? Is our judgment of machines correlated with demographic factors such as education or gender? César Hidalgo and colleagues use hard science to take on these pressing technological questions. Using randomized experiments, they create revealing counterfactuals and build statistical models to explain how people judge artificial intelligence and whether they do it fairly. Through original research, How Humans Judge Machines brings us one step closer to understanding the ethical consequences of AI…(More)”.
How One State Managed to Actually Write Rules on Facial Recognition
Kashmir Hill at The New York Times: “Though police have been using facial recognition technology for the last two decades to try to identify unknown people in their investigations, the practice of putting the majority of Americans into a perpetual photo lineup has gotten surprisingly little attention from lawmakers and regulators. Until now.
Lawmakers, civil liberties advocates and police chiefs have debated whether and how to use the technology because of concerns about both privacy and accuracy. But figuring out how to regulate it is tricky. So far, that has meant an all-or-nothing approach. City Councils in Oakland, Portland, San Francisco, Minneapolis and elsewhere have banned police use of the technology, largely because of bias in how it works. Studies in recent years by MIT researchers and the federal government found that many facial recognition algorithms are most accurate for white men, but less so for everyone else.
At the same time, automated facial recognition has become a powerful investigative tool, helping to identify child molesters and, in a recent high-profile example, people who participated in the Jan. 6 riot at the Capitol. Law enforcement officials in Vermont want the state’s ban lifted because there “could be hundreds of kids waiting to be saved.”
That’s why a new law in Massachusetts is so interesting: It’s not all or nothing. The state managed to strike a balance on regulating the technology, allowing law enforcement to harness the benefits of the tool, while building in protections that might prevent the false arrests that have happened before….(More)”.
Artificial Intelligence as an Anti-Corruption Tool (AI-ACT)
Paper by Nils Köbis, Christopher Starke, and Iyad Rahwan: “Corruption continues to be one of the biggest societal challenges of our time. New hope is placed in Artificial Intelligence (AI) to serve as an unbiased anti-corruption agent. Ever more available (open) government data paired with unprecedented performance of such algorithms render AI the next frontier in anti-corruption. Summarizing existing efforts to use AI-based anti-corruption tools (AI-ACT), we introduce a conceptual framework to advance research and policy. It outlines why AI presents a unique tool for top-down and bottom-up anti-corruption approaches. For both approaches, we outline in detail how AI-ACT present different potentials and pitfalls for (a) input data, (b) algorithmic design, and (c) institutional implementation. Finally, we venture a look into the future and flesh out key questions that need to be addressed to develop AI-ACT while considering citizens’ views, hence putting “society in the loop”….(More)”.
A.I. Here, There, Everywhere
Craig S. Smith at the New York Times: “I wake up in the middle of the night. It’s cold.
“Hey, Google, what’s the temperature in Zone 2?” I say into the darkness. A disembodied voice responds: “The temperature in Zone 2 is 52 degrees.” “Set the heat to 68,” I say, and then I ask the gods of artificial intelligence to turn on the light.
Many of us already live with A.I., an array of unseen algorithms that control our Internet-connected devices, from smartphones to security cameras and cars that heat the seats before you’ve even stepped out of the house on a frigid morning.
But, while we’ve seen the A.I. sun, we have yet to see it truly shine.
Researchers liken the current state of the technology to cellphones of the 1990s: useful, but crude and cumbersome. They are working on distilling the largest, most powerful machine-learning models into lightweight software that can run on “the edge,” meaning small devices such as kitchen appliances or wearables. Our lives will gradually be interwoven with brilliant threads of A.I.
Our interactions with the technology will become increasingly personalized. Chatbots, for example, can be clumsy and frustrating today, but they will eventually become truly conversational, learning our habits and personalities and even developing personalities of their own. But don’t worry, the fever dreams of superintelligent machines taking over, like HAL in “2001: A Space Odyssey,” will remain science fiction for a long time to come; consciousness, self-awareness and free will in machines are far beyond the capabilities of science today.
Privacy remains an issue, because artificial intelligence requires data to learn patterns and make decisions. But researchers are developing methods to use our data without actually seeing it — so-called federated learning, for example — or encrypt it in ways that currently can’t be hacked….(More)”
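The federated-learning idea mentioned above can be sketched in a few lines: each client computes a model update on data that never leaves its device, and a server averages only the resulting parameters. Below is a deliberately minimal toy version for a one-parameter linear model; the datasets, learning rate, and function names are all invented for illustration, and real systems add secure aggregation, weighting by dataset size, and much larger models:

```python
def local_update(w, data, lr=0.1):
    """One gradient-descent step on a client's private (x, y) pairs
    for the model y ≈ w * x with squared-error loss."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(w, client_datasets, rounds=50):
    """Each round, every client trains locally; only the updated
    weights (never the raw data) are sent back and averaged."""
    for _ in range(rounds):
        w = sum(local_update(w, d) for d in client_datasets) / len(client_datasets)
    return w

# Two clients whose raw data stays local; the underlying relationship is y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = federated_average(0.0, clients)
```

After a few dozen rounds, the shared weight converges to roughly 2.0 even though the server never sees any client's (x, y) pairs, which is the privacy property the article alludes to.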
Regulation of Algorithmic Tools in the United States
Paper by Christopher S. Yoo and Alicia Lai: “Policymakers in the United States have just begun to address regulation of artificial intelligence technologies in recent years, gaining momentum through calls for additional research funding, piecemeal guidance, proposals, and legislation at all levels of government. This Article provides an overview of high-level federal initiatives for general artificial intelligence (AI) applications set forth by the U.S. president and responding agencies, early indications from the incoming Biden Administration, targeted federal initiatives for sector-specific AI applications, pending federal legislative proposals, and state and local initiatives. The regulation of the algorithmic ecosystem will continue to evolve as the United States continues to search for the right balance between ensuring public safety and transparency and promoting innovation and competitiveness on the global stage….(More)”.