Introduction to the Special Issue of the Philosophical Transactions of the Royal Society by Sandra Wachter, Brent Mittelstadt, Luciano Floridi and Corinne Cath: “Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved, and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. It also gives a brief overview of recent developments in AI governance and of how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is already set, and offers some concrete suggestions to further the debate on AI governance…(More)”.
Governing Artificial Intelligence: Upholding Human Rights & Dignity
Report by Mark Latonero that “…shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.
The report draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights.
This report is intended for stakeholders – such as technology companies, governments, intergovernmental organizations, civil society groups, academia, and the United Nations (UN) system – looking to incorporate human rights into social and organizational contexts related to the development and governance of AI….(More)”.
A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI
Paper by Sandra Wachter and Brent Mittelstadt: “Big Data analytics and artificial intelligence (AI) draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals. These inferences draw on highly diverse and feature-rich data of unpredictable value, and create new opportunities for discriminatory, biased, and invasive decision-making. Concerns about algorithmic accountability are often actually concerns about the way in which these technologies draw privacy invasive and non-verifiable inferences about us that we cannot predict, understand, or refute.
Data protection law is meant to protect people’s privacy, identity, reputation, and autonomy, but is currently failing to protect data subjects from the novel risks of inferential analytics. The broad concept of personal data in Europe could be interpreted to include inferences, predictions, and assumptions that refer to or impact on an individual. If inferences are seen as personal data, individuals are granted numerous rights under data protection law. However, the legal status of inferences is heavily disputed in legal scholarship, and marked by inconsistencies and contradictions within and between the views of the Article 29 Working Party and the European Court of Justice.
As we show in this paper, individuals are granted little control and oversight over how their personal data is used to draw inferences about them. Compared to other types of personal data, inferences are effectively ‘economy class’ personal data in the General Data Protection Regulation (GDPR). Data subjects’ rights to know about (Art 13-15), rectify (Art 16), delete (Art 17), object to (Art 21), or port (Art 20) personal data are significantly curtailed when it comes to inferences, often requiring a greater balance with the controller’s interests (e.g. trade secrets, intellectual property) than would otherwise be the case. Similarly, the GDPR provides insufficient protection against sensitive inferences (Art 9) or remedies to challenge inferences or important decisions based on them (Art 22(3))….
In this paper we argue that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high risk inferences’, meaning inferences that are privacy invasive or reputation damaging and have low verifiability in the sense of being predictive or opinion-based. In cases where algorithms draw ‘high risk inferences’ about individuals, this right would require ex-ante justification to be given by the data controller to establish whether an inference is reasonable. This disclosure would address (1) why certain data is a relevant basis to draw inferences; (2) why these inferences are relevant for the chosen processing purpose or type of automated decision; and (3) whether the data and methods used to draw the inferences are accurate and statistically reliable. The ex-ante justification is bolstered by an additional ex-post mechanism enabling unreasonable inferences to be challenged. A right to reasonable inferences must, however, be reconciled with EU jurisprudence and counterbalanced with IP and trade secrets law as well as freedom of expression and Article 16 of the EU Charter of Fundamental Rights: the freedom to conduct a business….(More)”.
The free flow of non-personal data
Joint statement by Vice-President Ansip and Commissioner Gabriel on the European Parliament’s vote on the new EU rules facilitating the free flow of non-personal data: “The European Parliament adopted today a Regulation on the free flow of non-personal data proposed by the European Commission in September 2017. …
“We welcome today’s vote at the European Parliament. A digital economy and society cannot exist without data and this Regulation concludes another key pillar of the Digital Single Market. Only if data flows freely can Europe get the best from the opportunities offered by digital progress and technologies such as artificial intelligence and supercomputers.
This Regulation does for non-personal data what the General Data Protection Regulation has already done for personal data: free and safe movement across the European Union.
With its vote, the European Parliament has sent a clear signal to all businesses of Europe: it makes no difference where in the EU you store and process your data – data localisation requirements within the Member States are a thing of the past.
The new rules will provide a major boost to the European data economy, as they open up potential for European start-ups and SMEs to create new services through cross-border data innovation. This could increase EU GDP by up to 4% – or €739 billion – by 2020.
Together with the General Data Protection Regulation, the Regulation on the free flow of non-personal data will allow the EU to fully benefit from today’s and tomorrow’s data-based global economy.”
Background
Since the Communication on the European Data Economy was adopted in January 2017 as part of the Digital Single Market strategy, the Commission has run a public online consultation, organised structured dialogues with Member States and undertaken several workshops with different stakeholders. These evidence-gathering initiatives led to the publication of an impact assessment…. The Regulation on the free flow of non-personal data has no impact on the application of the General Data Protection Regulation (GDPR), as it does not cover personal data. However, the two Regulations will function together to enable the free flow of any data – personal and non-personal – thus creating a single European space for data. In the case of a mixed dataset, the GDPR provision guaranteeing free flow of personal data will apply to the personal-data part of the set, and the free flow of non-personal data principle will apply to the non-personal part. …(More)”.
Text Analysis Systems Mine Workplace Emails to Measure Staff Sentiments
Alan Rothman at LLRX: “…For all of these good, bad or indifferent workplaces, a key question is whether any of management’s actions to engage the staff and listen to their concerns ever resulted in improved working conditions and higher levels of job satisfaction.
The answer is most often “yes”. Just having a say in, and some sense of control over, our jobs and workflows can indeed have a demonstrable impact on morale, camaraderie and the bottom line. This is the essence of the Hawthorne Effect, also termed the “Observer Effect”, first identified during studies in the 1920s and 1930s when the management of a factory made improvements to the lighting and work schedules. In turn, worker satisfaction and productivity temporarily increased. This was not so much because there was more light, but rather because the workers sensed that management was paying attention to, and then acting upon, their concerns. The workers perceived they were no longer just cogs in a machine.
Perhaps, too, the Hawthorne Effect is in some ways the workplace equivalent of Heisenberg’s uncertainty principle in physics. To vastly oversimplify this slippery concept, the mere act of observing a subatomic particle can change its position.¹
Giving the processes of observation, analysis and change at the enterprise level a modern (but non-quantum) spin is a fascinating new article in the September 2018 issue of The Atlantic entitled What Your Boss Could Learn by Reading the Whole Company’s Emails, by Frank Partnoy. I highly recommend a click-through and full read if you have an opportunity. I will summarize and annotate it, and then, considering my own thorough lack of understanding of the basics of y=f(x), pose some of my own physics-free questions….
Today the text analytics business, like the work done by KeenCorp, is thriving. It has long been established as the processing behind email spam filters. Now it is finding other applications, including monitoring corporate reputations on social media and other sites.²
The finance industry is another growth sector, as investment banks and hedge funds scan a wide variety of information sources to locate “slight changes in language” that may point towards pending increases or decreases in share prices. Financial research providers are using artificial intelligence to mine “insights” from their own selections of news and analytical sources.
But is this technology effective?
In a paper entitled Lazy Prices, by Lauren Cohen (Harvard Business School and NBER), Christopher Malloy (Harvard Business School and NBER), and Quoc Nguyen (University of Illinois at Chicago), in a draft dated February 22, 2018, the researchers found that the share price of a company, in this case NetApp, measurably went down after the firm subtly changed the “descriptions of certain risks” in its 2010 annual report. Algorithms can detect such changes more quickly and effectively than humans. In its 2011 annual report, the company subsequently clarified the 2010 “failure to comply” with reporting requirements. A highly skilled stock analyst “might have missed that phrase”, but once again it was captured by the “researchers’ algorithms”.
In the hands of a “skeptical investor”, this information might well have prompted them to question the differences between the 2010 and 2011 annual reports and, in turn, saved them a great deal of money. This detection was an early signal of a looming decline in NetApp’s stock. Half a year after the 2011 report’s publication, it was reported that the Syrian government had bought the company’s equipment and “used that equipment to spy on its citizens”, causing further declines.
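To make the mechanics concrete, here is a minimal sketch (not the Lazy Prices authors' actual method) of how an algorithm might flag a subtle rewrite of risk language between two annual reports, using cosine similarity over bag-of-words vectors; the filing snippets and the 0.9 threshold are invented for illustration:

```python
# Minimal sketch of "Lazy Prices"-style change detection: compare the
# risk-factor text of two consecutive annual reports and flag a subtle
# rewrite. The snippets and the 0.9 threshold are hypothetical.
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between simple bag-of-words vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

risk_2010 = "We may fail to comply with export regulations in certain regions."
risk_2011 = "We failed to comply with export regulations and face penalties."

similarity = cosine_similarity(risk_2010, risk_2011)
if similarity < 0.9:  # hypothetical alert threshold
    print(f"Risk language changed (similarity={similarity:.2f}): flag for review")
```

A production system would compare full filings section by section, but the principle is the same: a year-over-year drop in textual similarity is the signal.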
Now text analytics is being deployed against a new target: the content of employees’ communications. Although workers have been found to have no expectation of privacy in their workplace communications, some companies remain reluctant to mine them because of privacy concerns. Even so, companies are finding it ever more challenging to resist the “urge to mine employee information”, especially as text analysis systems continue to improve.
Among the evolving enterprise applications is the use of such systems by human resources departments to assess overall employee morale. Vibe, for example, is an app that scans through communications on Slack, a widely used enterprise messaging platform. Vibe’s algorithm measures the positive and negative emotions of a work team and reports on them in real time….(More)”.
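As a rough illustration of the idea, the following toy sketch scores team messages against small positive and negative word lists; the lexicons and messages are invented, and tools like Vibe use far more sophisticated models than word counting:

```python
# Toy lexicon-based mood scoring for a stream of team messages, in the
# spirit of tools like Vibe. The lexicons and messages are invented;
# production systems use far richer models than word counting.
POSITIVE = {"great", "thanks", "love", "shipped", "win"}
NEGATIVE = {"blocked", "broken", "frustrated", "late", "bug"}

def message_score(message: str) -> int:
    """+1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

messages = [
    "Shipped the release, great work everyone!",
    "Still blocked on the broken build, getting frustrated",
]
team_mood = sum(message_score(m) for m in messages) / len(messages)
print(f"Average team mood: {team_mood:+.2f}")  # negative values suggest low morale
```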
What is machine learning?
Chris Meserole at Brookings: “In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term “artificial intelligence” to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations—to demonstrate, that is, an innate intelligence.
The question was how to achieve that goal. Early efforts focused primarily on what’s known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s—one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as “machine learning”—it wasn’t until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI or DeepMind announcing stunning new advances seemingly every week.
The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it’s not possible to tease out the implications of AI without understanding how machine learning works—as well as how it doesn’t….(More)”.
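The shift Meserole describes, from hand-written rules to programs that learn patterns from labeled data, can be seen in a few lines of Python; the tiny spam dataset below is invented for illustration:

```python
# A minimal machine-learning example: instead of writing rules that say
# what spam looks like, we let a model learn the pattern from labeled
# examples. The tiny dataset is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",
    "claim your free money",
    "meeting agenda for tomorrow",
    "lunch plans this week",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)  # word counts as features

model = MultinomialNB()
model.fit(features, labels)  # the "learning" step: fit statistics to the data

test = vectorizer.transform(["free prize meeting"])
print(model.predict(test))  # the model generalizes from examples, not rules
```

The same pattern, fit a statistical model to observed data and then predict on new inputs, scales from this toy classifier up to the systems powering the breakthroughs Meserole mentions.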
Senators introduce the ‘Artificial Intelligence in Government Act’
Tajha Chappellet-Lanier at FedScoop: “A cadre of senators is looking to prompt the federal government to be a bit more proactive in utilizing artificial intelligence technologies.
To this end, the bipartisan group including Sens. Brian Schatz, D-Hawaii, Cory Gardner, R-Colo., Rob Portman, R-Ohio, and Kamala Harris, D-Calif., introduced the Artificial Intelligence in Government Act on Wednesday. Per a news release, the bill would seek to “improve the use of AI across the federal government by providing resources and directing federal agencies to include AI in data-related planning.”
The bill aims to do a number of things, including establishing an AI in government advisory board, directing the White House Office of Management and Budget to look into AI as part of the federal data strategy, getting the Office of Personnel Management to look at what kinds of employee skills are necessary for AI competence in government and expanding “an office” at the General Services Administration that will provide expertise, do research and “promote U.S. competitiveness.”
“Artificial intelligence has the potential to benefit society in ways we cannot imagine today,” Harris said in a statement. “We already see its immense value in applications ranging from diagnosing cancer to routing vehicles. The AI in Government Act gives the federal government the tools and resources it needs to build its expertise, in partnership with industry and academia. The bill will help develop the policies to ensure that society reaps the benefits of these emerging technologies, while protecting people from potential risks, such as biases in AI.”
The proposed legislation is supported by a bunch of companies and advocacy groups in the tech space, including BSA, the Center for Democracy and Technology, the Information Technology and Innovation Foundation, Intel, the Internet Association, the Lincoln Network, Microsoft, the Niskanen Center, and the R Street Institute.
The senators are hardly alone in their conviction that AI will be a powerful tool for government. At a summit in May, the White House Office of Science and Technology Policy created a Select Committee on Artificial Intelligence, composed of senior research and development officials from across the government….(More)”.
Future Politics: Living Together in a World Transformed by Tech
Book by Jamie Susskind: “Future Politics confronts one of the most important questions of our time: how will digital technology transform politics and society? The great political debate of the last century was about how much of our collective life should be determined by the state and what should be left to the market and civil society. In the future, the question will be how far our lives should be directed and controlled by powerful digital systems — and on what terms?
Jamie Susskind argues that rapid and relentless innovation in a range of technologies — from artificial intelligence to virtual reality — will transform the way we live together. Calling for a fundamental change in the way we think about politics, he describes a world in which certain technologies and platforms, and those who control them, come to hold great power over us. Some will gather data about our lives, causing us to avoid conduct perceived as shameful, sinful, or wrong. Others will filter our perception of the world, choosing what we know, shaping what we think, affecting how we feel, and guiding how we act. Still others will force us to behave certain ways, like self-driving cars that refuse to drive over the speed limit.
Those who control these technologies — usually big tech firms and the state — will increasingly control us. They will set the limits of our liberty, decreeing what we may do and what is forbidden. Their algorithms will resolve vital questions of social justice, allocating social goods and sorting us into hierarchies of status and esteem. They will decide the future of democracy, causing it to flourish or decay.
A groundbreaking work of political analysis, Future Politics challenges readers to rethink what it means to be free or equal, what it means to have power or property, what it means for a political system to be just or democratic, and proposes ways in which we can — and must — regain control….(More)”.
Google is using AI to predict floods in India and warn users
James Vincent at The Verge: “For years Google has warned users about natural disasters by incorporating alerts from government agencies like FEMA into apps like Maps and Search. Now, the company is making predictions of its own. As part of a partnership with the Central Water Commission of India, Google will now alert users in the country about impending floods. The service is currently available only in the Patna region, with the first alert going out earlier this month.
As Google’s engineering VP Yossi Matias outlines in a blog post, these predictions are being made using a combination of machine learning, rainfall records, and flood simulations.
“A variety of elements — from historical events, to river level readings, to the terrain and elevation of a specific area — feed into our models,” writes Matias. “With this information, we’ve created river flood forecasting models that can more accurately predict not only when and where a flood might occur, but the severity of the event as well.”
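Google has not published the internals of these models, but as a rough sketch of how the features Matias lists (rainfall, river level readings, terrain and elevation) might feed a forecasting model, consider the following; the data and the choice of model are invented for illustration:

```python
# Rough illustration of feature-based flood forecasting using the kinds
# of inputs Matias describes. Google's actual models are not public;
# the data, units, and model choice here are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [rainfall_mm_24h, river_level_m, elevation_m];
# target: flood severity on a 0-3 scale. All values are synthetic.
X = np.array([
    [10,  2.1, 60],
    [80,  4.5, 52],
    [150, 6.2, 48],
    [200, 7.8, 45],
])
y = np.array([0, 1, 2, 3])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

forecast = model.predict([[170, 6.9, 47]])  # heavy rain, high river, low ground
print(f"Predicted flood severity: {forecast[0]:.1f} / 3")
```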
The US tech giant announced its partnership with the Central Water Commission back in June. The two organizations agreed to share technical expertise and data to work on the predictions, with the Commission calling the collaboration a “milestone in flood management and in mitigating the flood losses.” Such warnings are particularly important in India, where 20 percent of the world’s flood-related fatalities are estimated to occur….(More)”.
How AI Addresses Unconscious Bias in the Talent Economy
Announcement by Bob Schultz at IBM: “The talent economy is one of the great outcomes of the digital era — and the ability to attract and develop the right talent has become a competitive advantage in most industries. According to a recent IBM study, which surveyed over 2,100 Chief Human Resource Officers, 33 percent of CHROs believe AI will revolutionize the way they do business over the next few years. In that same study, 65 percent of CEOs expect that people skills will have a strong impact on their businesses over the next several years. At IBM, we see AI as a tremendous untapped opportunity to transform the way companies attract, develop, and build the workforce for the decades ahead.
Consider this: The average hiring manager has hundreds of applicants a day for key positions and spends approximately six seconds on each resume. Without analytics and AI’s predictive abilities, the ability to make the right decision is limited, and the risk of unconscious bias in hiring grows.
That is why today, I am pleased to announce the rollout of IBM Watson Recruitment’s Adverse Impact Analysis capability, which identifies instances of bias related to age, gender, race, education, or previous employer by assessing an organization’s historical hiring data and highlighting potential unconscious biases. This capability empowers HR professionals to take action against potentially biased hiring trends — and in the future, choose the most promising candidate based on the merit of their skills and experience alone. This announcement is part of IBM’s largest ever AI toolset release, tailor-made for nine industries and professions where AI will play a transformational role….(More)”.
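IBM has not detailed the method behind Adverse Impact Analysis, but a standard statistic in this area is the “four-fifths rule” adverse impact ratio, sketched below on invented hiring data:

```python
# Sketch of the classic "four-fifths rule" adverse impact check on
# historical hiring data. This is a standard statistic in the field,
# not IBM's published method; the counts below are hypothetical.
def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were hired."""
    return hired / applicants

# Hypothetical historical hiring counts for two applicant groups
rate_group_a = selection_rate(hired=50, applicants=100)  # 0.50
rate_group_b = selection_rate(hired=15, applicants=100)  # 0.15

impact_ratio = rate_group_b / rate_group_a
if impact_ratio < 0.8:  # four-fifths rule threshold
    print(f"Potential adverse impact: ratio {impact_ratio:.2f} is below 0.80")
```

Under the four-fifths rule, a selection rate for one group below 80 percent of the highest group’s rate is treated as evidence of potential adverse impact, which is the kind of pattern a tool assessing historical hiring data would surface for review.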