Safe artificial intelligence requires cultural intelligence


Gillian Hadfield at TechCrunch: “Knowledge, to paraphrase British journalist Miles Kington, is knowing a tomato is a fruit; wisdom is knowing there’s a norm against putting it in a fruit salad.

Any kind of artificial intelligence clearly needs to possess great knowledge. But if we are going to deploy AI agents widely in society at large — on our highways, in our nursing homes and schools, in our businesses and governments — we will need machines to be wise as well as smart.

Researchers who focus on a problem known as AI safety or AI alignment define artificial intelligence as machines that can meet or beat human performance at a specific cognitive task. Today’s self-driving cars and facial recognition algorithms fall into this category of narrow AI.

But some researchers are working to develop artificial general intelligence (AGI) — machines that can outperform humans at any cognitive task. We don’t know yet when or even if AGI will be achieved, but it’s clear that the research path is leading to ever more powerful and autonomous AI systems performing more and more tasks in our economies and societies.

Building machines that can perform any cognitive task means figuring out how to build AI that can learn not only about things like the biology of tomatoes but also about our highly variable and changing systems of norms about things like what we do with tomatoes.

Humans live lives populated by a multitude of norms, from how we eat, dress and speak to how we share information, treat one another and pursue our goals.

For AI to be truly powerful, machines will need to comprehend that norms can vary tremendously from group to group, which can make them seem unnecessary, yet following them within a given community can be critical.

Tomatoes in fruit salads may seem odd to the Brits for whom Kington was writing, but they are perfectly fine if you are cooking for Koreans or members of the culinary avant-garde. And while it may seem minor, serving them the wrong way to a particular guest can cause confusion, disgust, even anger. That’s not a recipe for healthy future relationships….(More)”.

Constitutional Democracy and Technology in the Age of Artificial Intelligence


Paul Nemitz at Royal Society Philosophical Transactions: “Given the foreseeable pervasiveness of Artificial Intelligence in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy.

This paper first describes the four core elements of today’s digital power concentration, which need to be seen cumulatively and which, taken together, are a threat both to democracy and to functioning markets. It then recalls the experience with the lawless internet, the relationship between technology and the law as it has developed in the internet economy, and the experience with the GDPR, before moving on to the key question for AI in democracy: which of the challenges of AI can safely and in good conscience be left to ethics, and which need to be addressed by rules that are enforceable and carry the legitimacy of the democratic process, that is, by laws.

The paper closes with a call for a new culture of incorporating the principles of Democracy, the Rule of Law and Human Rights by design in AI, and for a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose….(More)”.

Google launches new search engine to help scientists find the datasets they need


James Vincent at The Verge: “The service, called Dataset Search, launches today, and it will be a companion of sorts to Google Scholar, the company’s popular search engine for academic studies and reports. Institutions that publish their data online, like universities and governments, will need to include metadata tags in their webpages that describe their data, including who created it, when it was published, how it was collected, and so on. This information will then be indexed by Google’s search engine and combined with information from the Knowledge Graph. (So if dataset X was published by CERN, a little information about the institute will also be included in the search.)

Speaking to The Verge, Natasha Noy, a research scientist at Google AI who helped create Dataset Search, says the aim is to unify the tens of thousands of different repositories for datasets online. “We want to make that data discoverable, but keep it where it is,” says Noy.

At the moment, dataset publication is extremely fragmented. Different scientific domains have their own preferred repositories, as do different governments and local authorities. “Scientists say, ‘I know where I need to go to find my datasets, but that’s not what I always want,’” says Noy. “Once they step out of their unique community, that’s when it gets hard.”

Noy gives the example of a climate scientist she spoke to recently who told her she’d been looking for a specific dataset on ocean temperatures for an upcoming study but couldn’t find it anywhere. She didn’t track it down until she ran into a colleague at a conference who recognized the dataset and told her where it was hosted. Only then could she continue with her work. “And this wasn’t even a particularly boutique repository,” says Noy. “The dataset was well written up in a fairly prominent place, but it was still difficult to find.”

Image: An example search for weather records in Google Dataset Search (Google).

The initial release of Dataset Search will cover the environmental and social sciences, government data, and datasets from news organizations like ProPublica. However, if the service becomes popular, the amount of data it indexes should quickly snowball as institutions and scientists scramble to make their information accessible….(More)”.
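
The metadata mechanism the excerpt describes maps onto schema.org’s Dataset vocabulary, which Google’s publisher guidance asks institutions to embed in their pages as JSON-LD. The Python sketch below shows what such a block might look like; the dataset, publisher, and field values are hypothetical examples, not taken from the article.

```python
import json

# Hypothetical dataset description. The schema.org "Dataset" type and its
# properties (name, description, creator, datePublished, license) are the
# publicly documented vocabulary that Dataset Search crawls for.
dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Ocean surface temperatures, 1980-2017 (example)",
    "description": "Illustrative example: monthly mean sea-surface "
                   "temperatures aggregated from buoy readings.",
    "creator": {"@type": "Organization", "name": "Example Oceanographic Institute"},
    "datePublished": "2018-06-01",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Embed the metadata as a JSON-LD script tag in the dataset's landing page,
# where crawlers can pick it up alongside the human-readable description.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(dataset_metadata, indent=2)
    + "\n</script>"
)
print(script_tag)
```

Once a landing page carries a tag like this, the point Noy makes holds: the data stays hosted where it is, while its description becomes indexable and discoverable.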

Reflecting the Past, Shaping the Future: Making AI Work for International Development


USAID Report: “We are in the midst of an unprecedented surge of interest in machine learning (ML) and artificial intelligence (AI) technologies. These tools, which allow computers to make data-derived predictions and automate decisions, have become part of daily life for billions of people. Ubiquitous digital services such as interactive maps, tailored advertisements, and voice-activated personal assistants are likely only the beginning. Some AI advocates even claim that AI’s impact will be as profound as “electricity or fire,” and that it will revolutionize nearly every field of human activity. This enthusiasm has reached international development as well. Emerging ML/AI applications promise to reshape healthcare, agriculture, and democracy in the developing world. ML and AI show tremendous potential for helping to achieve sustainable development objectives globally. They can improve efficiency by automating labor-intensive tasks, or offer new insights by finding patterns in large, complex datasets. A recent report suggests that AI advances could double economic growth rates and increase labor productivity by 40% by 2035. At the same time, the very nature of these tools — their ability to codify and reproduce patterns they detect — introduces significant concerns alongside promise.

In developed countries, ML tools have sometimes been found to automate racial profiling, to foster surveillance, and to perpetuate racial stereotypes. Algorithms may be used, either intentionally or unintentionally, in ways that result in disparate or unfair outcomes between minority and majority populations. Complex models can make it difficult to establish accountability or seek redress when models make mistakes. These shortcomings are not restricted to developed countries. They can manifest in any setting, especially in places with histories of ethnic conflict or inequality. As the development community adopts tools enabled by ML and AI, we need a clear-eyed understanding of how to ensure their application is effective, inclusive, and fair. This requires knowing when ML and AI offer a suitable solution to the challenge at hand. It also requires appreciating that these technologies can do harm — and committing to addressing and mitigating these harms.

ML and AI applications may sometimes seem like science fiction, and the technical intricacies of ML and AI can be off-putting for those who haven’t been formally trained in the field. However, there is a critical role for development actors to play as we begin to lean on these tools more and more in our work. Even without technical training in ML, development professionals have the ability — and the responsibility — to meaningfully influence how these technologies impact people.

You don’t need to be an ML or AI expert to shape the development and use of these tools. All of us can learn to ask the hard questions that will keep solutions working for, and not against, the development challenges we care about. Development practitioners already have deep expertise in their respective sectors or regions. They bring necessary experience in engaging local stakeholders, working with complex social systems, and identifying structural inequities that undermine inclusive progress. Unless this expert perspective informs the construction and adoption of ML/AI technologies, ML and AI will fail to reach their transformative potential in development.

This document aims to inform and empower those who may have limited technical experience as they navigate an emerging ML/AI landscape in developing countries. Donors, implementers, and other development partners should expect to come away with a basic grasp of common ML techniques and the problems ML is uniquely well-suited to solve. We will also explore some of the ways in which ML/AI may fail or be ill-suited for deployment in developing-country contexts. Awareness of these risks, and acknowledgement of our role in perpetuating or minimizing them, will help us work together to protect against harmful outcomes and ensure that AI and ML are contributing to a fair, equitable, and empowering future…(More)”.

AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment


Alessandro Mantelero in Computer Law & Security Review: “The use of algorithms in modern data processing techniques, as well as data-intensive technological trends, suggests the adoption of a broader view of the data protection impact assessment. This will force data controllers to go beyond the traditional focus on data quality and security, and consider the impact of data processing on fundamental rights and collective social and ethical values.

Building on studies of the collective dimension of data protection, this article sets out to embed this new perspective in an assessment model centred on human rights (Human Rights, Ethical and Social Impact Assessment, or HRESIA). This self-assessment model intends to overcome the limitations of the existing assessment models, which are either too closely focused on data processing or have an extent and granularity that make them too complicated to use to evaluate the consequences of a given use of data. In terms of architecture, the HRESIA has two main elements: a self-assessment questionnaire and an ad hoc expert committee. As a blueprint, this contribution focuses mainly on the nature of the proposed model, its architecture and its challenges; a more detailed description of the model and the content of the questionnaire will be discussed in a future publication drawing on the ongoing research….(More)”.

Origin Privacy: Protecting Privacy in the Big-Data Era


Paper by Helen Nissenbaum, Sebastian Benthall, Anupam Datta, Michael Carl Tschantz, and Piotr Mardziel: “Machine learning over big data poses challenges for our conceptualization of privacy. Such techniques can discover surprising and counterintuitive associations that take innocent-looking data and turn it into important inferences about a person. For example, the buying of carbon monoxide monitors has been linked to paying credit card bills, while buying chrome-skull car accessories predicts not doing so. Also, Target may have used the buying of scent-free hand lotion and vitamins as a sign that the buyer is pregnant. If we take pregnancy status to be private and assume that we should prohibit the sharing of information that can reveal that fact, then we have created an unworkable notion of privacy, one in which sharing any scrap of data may violate privacy.

Prior technical specifications of privacy depend on the classification of certain types of information as private or sensitive; privacy policies in these frameworks limit access to data that allow inference of this sensitive information. As the above examples show, today’s data-rich world creates a new kind of problem: it is difficult if not impossible to guarantee that information does not allow inference of sensitive topics. This makes information flow rules based on information topic unstable.

We address the problem of providing a workable definition of private data that takes into account emerging threats to privacy from large-scale data collection systems. We build on Contextual Integrity and its claim that privacy is appropriate information flow, or flow according to socially or legally specified rules.

As in other adaptations of Contextual Integrity (CI) to computer science, the parameterization of social norms in CI is translated into a logical specification. In this work, we depart from CI by considering rules that restrict information flow based on its origin and provenance, instead of on its type, topic, or subject.

We call this concept of privacy as adherence to origin-based rules Origin Privacy. Origin Privacy rules can be found in some existing data protection laws. This motivates the computational implementation of origin-based rules for the simple purpose of compliance engineering. We also formally model origin privacy to determine what security properties it guarantees relative to the concerns that motivate it….(More)”.
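
As a minimal illustrative sketch (not the paper’s formal model), the following Python fragment shows the intuition behind origin-based rules: permissions attach to the provenance of the data rather than to its topic, which the abstract argues is unstable under machine-learned inference. All names and rules here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A proposed transfer of data from a holder to a recipient."""
    origin: str      # provenance of the data, e.g. "cctv_footage", "purchase_history"
    recipient: str   # who would receive it
    topic: str       # what the data is nominally about

# Hypothetical origin-based rules: what may flow where is keyed on where the
# data came from, not on what topic it appears to be about. This sidesteps the
# problem that innocuous topics can support sensitive inferences.
ORIGIN_RULES = {
    "cctv_footage": {"security_team"},          # may flow only to the security team
    "purchase_history": {"billing", "fraud"},   # may flow to billing or fraud review
}

def permitted(flow: Flow) -> bool:
    """Return True if the flow complies with the origin-based rules."""
    allowed_recipients = ORIGIN_RULES.get(flow.origin, set())
    return flow.recipient in allowed_recipients

# Marketing asking for purchase history is denied even though the topic
# ("retail transactions") sounds harmless, because the rule keys on origin.
print(permitted(Flow("purchase_history", "marketing", "retail transactions")))  # False
print(permitted(Flow("purchase_history", "billing", "retail transactions")))    # True
```

A topic-based policy would instead have to enumerate every topic from which a sensitive fact could be inferred, which is exactly the guarantee the authors argue can no longer be given.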

Biometric Mirror


University of Melbourne: “Biometric Mirror exposes the possibilities of artificial intelligence and facial analysis in public space. The aim is to investigate the attitudes that emerge as people are presented with different perspectives on their own, anonymised biometric data derived from a single photograph of their face. It sheds light on the specific data that people oppose and approve, the sentiments it evokes, and the underlying reasoning. Biometric Mirror also presents an opportunity to reflect on whether the plausible future of artificial intelligence is a future we want to see take shape.

Big data and artificial intelligence are some of today’s most popular buzzwords. Both promise to help deliver insights that were previously too complex for computer systems to calculate. With examples ranging from personalised recommendation systems to automatic facial analyses, user-generated data is now analysed by algorithms to identify patterns and predict outcomes. And the common view is that these developments will have a positive impact on society.

Within the realm of artificial intelligence (AI), facial analysis is gaining popularity. Today, CCTV cameras and advertising screens increasingly link with analysis systems that are able to detect emotions, age, gender and demographic information of people passing by. This has been shown to increase advertising effectiveness in retail environments, since campaigns can now be tailored to specific audience profiles and situations. But facial analysis models are also being developed to predict your aggression level, sexual preference, life expectancy and likelihood of being a terrorist (or an academic) by simply monitoring surveillance camera footage or analysing a single photograph. Some of these developments have gained widespread media coverage for their innovative nature, but often the ethical and social impact is only an afterthought.

Current technological developments approach the ethical boundaries of the artificial intelligence age. Facial recognition and analysis in public space raise concerns as people are photographed without prior consent, and their photos disappear into a commercial operator’s infrastructure. It remains unclear how the data is processed, how the data is tailored for specific purposes and how the data is retained or disposed of. People also do not have the opportunity to review or amend their facial recognition data. Perhaps most worryingly, artificial intelligence systems may make decisions or deliver feedback based on the data, regardless of its accuracy or completeness. While facial recognition and analysis may be harmless for tailored advertising in retail environments or to unlock your phone, it quickly pushes ethical boundaries when the general purpose is to more closely monitor society… (More)”.

One of New York City’s most urgent design challenges is invisible


Diana Budds at Curbed: “Algorithms are invisible, but they already play a large role in shaping New York City’s built environment, schooling, public resources, and criminal justice system. Earlier this year, the City Council and Mayor Bill de Blasio formed the Automated Decision Systems Task Force, the first of its kind in the country, to analyze how NYC deploys automated systems to ensure fairness, equity, and accountability are upheld.

This week, 20 experts in the field of civil rights and artificial intelligence co-signed a letter to the task force to help influence its official report, which is scheduled to be published in December 2019.

The letter’s recommendations include creating a publicly accessible list of all the automated decision systems in use; consulting with experts before adopting an automated decision system; creating a permanent government body to oversee the procurement and regulation of automated decision systems; and upholding civil liberties in all matters related to automation. This could lay the groundwork for future legislation around automation in the city….Read the full letter here.”

An Overview of National AI Strategies


Medium Article by Tim Dutton: “The race to become the global leader in artificial intelligence (AI) has officially begun. In the past fifteen months, Canada, China, Denmark, the EU Commission, Finland, France, India, Italy, Japan, Mexico, the Nordic-Baltic region, Singapore, South Korea, Sweden, Taiwan, the UAE, and the UK have all released strategies to promote the use and development of AI. No two strategies are alike, with each focusing on different aspects of AI policy: scientific research, talent development, skills and education, public and private sector adoption, ethics and inclusion, standards and regulations, and data and digital infrastructure.

This article summarizes the key policies and goals of each strategy, as well as related policies and initiatives that have been announced since the release of the initial strategies. It also includes countries that have announced their intention to develop a strategy or have related AI policies in place….(More)”.

Odd Numbers: Algorithms alone can’t meaningfully hold other algorithms accountable


Frank Pasquale at Real Life Magazine: “Algorithms increasingly govern our social world, transforming data into scores or rankings that decide who gets credit, jobs, dates, policing, and much more. The field of “algorithmic accountability” has arisen to highlight the problems with such methods of classifying people, and it has great promise: Cutting-edge work in critical algorithm studies applies social theory to current events; law and policy experts seem to publish new articles daily on how artificial intelligence shapes our lives; and a growing community of researchers has developed a field known as “Fairness, Accountability, and Transparency in Machine Learning.”

The social scientists, attorneys, and computer scientists promoting algorithmic accountability aspire to advance knowledge and promote justice. But what should such “accountability” more specifically consist of? Who will define it? At a two-day, interdisciplinary roundtable on AI ethics I recently attended, such questions featured prominently, and humanists, policy experts, and lawyers engaged in a free-wheeling discussion about topics ranging from robot arms races to computationally planned economies. But at the end of the event, an emissary from a group funded by Elon Musk and Peter Thiel among others pronounced our work useless. “You have no common methodology,” he informed us (apparently unaware that that’s the point of an interdisciplinary meeting). “We have a great deal of money to fund real research on AI ethics and policy”— which he thought of as dry, economistic modeling of competition and cooperation via technology — “but this is not the right group.” He then gratuitously lashed out at academics in attendance as “rent seekers,” largely because we had the temerity to advance distinctive disciplinary perspectives rather than fall in line with his research agenda.

Most corporate contacts and philanthrocapitalists are more polite, but their sense of what is realistic and what is utopian, what is worth studying and what is mere ideology, is strongly shaping algorithmic accountability research in both social science and computer science. This influence in the realm of ideas has powerful effects beyond it. Energy that could be put into better public transit systems is instead diverted to perfect the coding of self-driving cars. Anti-surveillance activism transmogrifies into proposals to improve facial recognition systems to better recognize all faces. To help payday-loan seekers, developers might design data-segmentation protocols to show them what personal information they should reveal to get a lower interest rate. But the idea that such self-monitoring and data curation can be a trap, disciplining the user in ever finer-grained ways, remains less explored. Trying to make these games fairer, the research elides the possibility of rejecting them altogether….(More)”.