Public Attitudes Toward Computer Algorithms


Aaron Smith at the Pew Research Center: “Algorithms are all around us, utilizing massive stores of data and complex analytics to make decisions with often significant impacts on humans. They recommend books and movies for us to read and watch, surface news stories they think we might find relevant, estimate the likelihood that a tumor is cancerous and predict whether someone might be a criminal or a worthwhile credit risk. But despite the growing presence of algorithms in many aspects of daily life, a Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations.

This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do….

The following are among the major findings.

The public expresses broad concerns about the fairness and acceptability of using computers for decision-making in situations with important real-world consequences

Majorities of Americans find it unacceptable to use algorithms to make decisions with real-world consequences for humans

By and large, the public views these examples of algorithmic decision-making as unfair to the people the computer-based systems are evaluating. Most notably, only around one-third of Americans think that the video job interview and personal finance score algorithms would be fair to job applicants and consumers. When asked directly whether they think the use of these algorithms is acceptable, a majority of the public says that they are not acceptable. Two-thirds of Americans (68%) find the personal finance score algorithm unacceptable, and 67% say the computer-aided video job analysis algorithm is unacceptable….

Attitudes toward algorithmic decision-making can depend heavily on context

Despite the consistencies in some of these responses, the survey also highlights the ways in which Americans’ attitudes toward algorithmic decision-making can depend heavily on the context of those decisions and the characteristics of the people who might be affected….

When it comes to the algorithms that underpin the social media environment, users’ comfort level with sharing their personal information also depends heavily on how and why their data are being used. A 75% majority of social media users say they would be comfortable sharing their data with those sites if it were used to recommend events they might like to attend. But that share falls to just 37% if their data are being used to deliver messages from political campaigns.

Across age groups, social media users are comfortable with their data being used to recommend events - but wary of that data being used for political messaging

In other instances, different types of users offer divergent views about the collection and use of their personal data. For instance, about two-thirds of social media users younger than 50 find it acceptable for social media platforms to use their personal data to recommend connecting with people they might want to know. But that view is shared by fewer than half of users ages 65 and older….(More)”.

Data-Driven Development


Report by the World Bank: “…Decisions based on data can greatly improve people’s lives. Data can uncover patterns, unexpected relationships and market trends, making it possible to address previously intractable problems and leverage hidden opportunities. For example, genes associated with certain types of cancer can be tracked to improve treatment, and commuter travel patterns can be used to devise public transportation that is affordable and accessible for users, as well as profitable for operators.

Data is clearly a precious commodity, and the report points out that people should have greater control over the use of their personal data. Broadly speaking, there are three possible answers to the question “Who controls our data?”: firms, governments, or users. No global consensus yet exists on the extent to which private firms that mine data about individuals should be free to use the data for profit and to improve services.

Users’ willingness to share data in return for benefits and free services – such as virtually unrestricted use of social media platforms – varies widely by country. In addition, early internet adopters, who grew up with the internet and are now aged 30–40, are the most willing to share (GfK 2017).

Are you willing to share your data? (source: GfK 2017)


On the other hand, data can worsen the digital divide: the data poor, who leave no digital trail because they have limited access, are most at risk of exclusion from services, opportunities and rights, as are those who lack a digital ID, for instance.

Firms and Data

For private sector firms, particularly those in developing countries, the report suggests how they might expand their markets and improve their competitive edge. Companies are already developing new markets and making profits by analyzing data to better understand their customers. This is transforming conventional business models. For years, telecommunications has been funded by users paying for phone calls. Today, advertisers paying for users’ data and attention fund the internet, social media, and other platforms, such as apps, reversing the value flow.

Governments and Data

For governments and development professionals, the report provides guidance on how they might use data more creatively to help tackle key global challenges, such as eliminating extreme poverty, promoting shared prosperity, or mitigating the effects of climate change. The first step is developing appropriate guidelines for data sharing and use, and for anonymizing personal data. Governments are already beginning to use the huge quantities of data they hold to enhance service delivery, though they still have far to go to catch up with the commercial giants, the report finds.

Data for Development

The Information and Communications for Development report analyses how the data revolution is changing the behavior of governments, individuals, and firms and how these changes affect economic, social, and cultural development. This is a topic of growing importance that cannot be ignored, and the report aims to stimulate wider debate on the unique challenges and opportunities of data for development. It will be useful for policy makers, but also for anyone concerned about how their personal data is used and how the data revolution might affect their future job prospects….(More)”.

Artificial Intelligence: Risks to Privacy and Democracy


Karl Manheim and Lyric Kaplan at Yale Journal of Law and Technology: “A “Democracy Index” is published annually by the Economist. For 2017, it reported that half of the world’s countries scored lower than the previous year. This included the United States, which was demoted from “full democracy” to “flawed democracy.” The principal factor was “erosion of confidence in government and public institutions.” Interference by Russia and voter manipulation by Cambridge Analytica in the 2016 presidential election played a large part in that public disaffection.

Threats of these kinds will continue, fueled by growing deployment of artificial intelligence (AI) tools to manipulate the preconditions and levers of democracy. Equally destructive is AI’s threat to decisional and informational privacy. AI is the engine behind Big Data Analytics and the Internet of Things. While conferring some consumer benefit, their principal function at present is to capture personal information, create detailed behavioral profiles and sell us goods and agendas. Privacy, anonymity and autonomy are the main casualties of AI’s ability to manipulate choices in economic and political decisions.

The way forward requires greater attention to these risks at the national level, and attendant regulation. In its absence, technology giants, all of whom are heavily investing in and profiting from AI, will dominate not only the public discourse, but also the future of our core values and democratic institutions….(More)”.

Declaration of Cities Coalition for Digital Rights


New York City, Barcelona and Amsterdam: “We, the undersigned cities, formally come together to form the Cities Coalition for Digital Rights, to protect and uphold human rights on the internet at the local and global level.

The internet has become inseparable from our daily lives. Yet, every day, there are new cases of digital rights abuse, misuse and misinformation, and concentration of power around the world: freedom of expression being censored; personal information, including our movements and communications, being monitored, shared and sold without consent; ‘black box’ algorithms being used to make unaccountable decisions; social media being used as a tool of harassment and hate speech; and democratic processes and public opinion being undermined.

As cities, the closest democratic institutions to the people, we are committed to eliminating impediments to harnessing technological opportunities that improve the lives of our constituents, and to providing trustworthy and secure digital services and infrastructures that support our communities. We strongly believe that human rights principles such as privacy, freedom of expression, and democracy must be incorporated by design into digital platforms starting with locally-controlled digital infrastructures and services.

As a coalition, and with the support of the United Nations Human Settlements Program (UN-Habitat), we will share best practices, learn from each other’s challenges and successes, and coordinate common initiatives and actions. Inspired by the Internet Rights and Principles Coalition (IRPC), the work of 300 international stakeholders over the past ten years, we are committed to the following five evolving principles:

01. Universal and equal access to the internet, and digital literacy

02. Privacy, data protection and security

03. Transparency, accountability, and non-discrimination of data, content and algorithms

04. Participatory Democracy, diversity and inclusion

05. Open and ethical digital service standards”

Another Use for A.I.: Finding Millions of Unregistered Voters


Steve Lohr at The New York Times: “The mechanics of elections that attract the most attention are casting and counting, snafus with voting machines and ballots, and allegations of hacking and fraud. But Jeff Jonas, a prominent data scientist, is focused on something else: the integrity, updating and expansion of voter rolls.

“As I dove into the subject, it grew on me, the complexity and relevance of the problem,” he said.

As a result, Mr. Jonas has played a geeky, behind-the-scenes role in encouraging turnout for the midterm elections on Tuesday.

For the last four years, Mr. Jonas has used his software for a multistate project known as Electronic Registration Information Center that identifies eligible voters and cleans up voter rolls. Since its founding in 2012, the nonprofit center has identified 26 million people who are eligible but unregistered to vote, as well as 10 million registered voters who have moved, appear on more than one list or have died.

“I have no doubt that more people are voting as a result of ERIC,” said John Lindback, a former senior election administrator in Oregon and Alaska who was the center’s first executive director.

Voter rolls, like nearly every aspect of elections, are a politically charged issue. ERIC, brought together by the Pew Charitable Trusts, is meant to play it down the middle. It was started largely with professional election administrators, from both red and blue states.

But the election officials recognized that their headaches often boiled down to a data-handling challenge. Then Mr. Jonas added his technology, which has been developed and refined for decades. It is artificial intelligence software fine-tuned for spotting and resolving identities, whether people or things….(More)”.
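
The core technique Mr. Jonas describes, spotting when two records refer to the same person despite typos, nicknames or moves, is known as entity resolution. As a rough illustration only (the field names, weights and threshold below are hypothetical, and ERIC’s production system is far more sophisticated), a minimal Python sketch of matching records across two state voter lists might look like this:

```python
# A minimal, illustrative sketch of matching voter records across two lists.
# All field names, weights and thresholds are hypothetical; this is not
# ERIC's actual method.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Fuzzy string similarity in [0, 1], tolerant of typos and abbreviations."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted score combining fuzzy name similarity with an exact DOB match."""
    name_sim = similarity(rec_a["name"], rec_b["name"])
    dob_match = 1.0 if rec_a["dob"] == rec_b["dob"] else 0.0
    return 0.6 * name_sim + 0.4 * dob_match

state_a = [{"name": "Jonathan Q. Smith", "dob": "1970-03-14"}]
state_b = [{"name": "Jon Smith", "dob": "1970-03-14"},
           {"name": "Jane Doe", "dob": "1988-11-02"}]

# Flag likely duplicates: the same person appearing on both states' rolls.
for a in state_a:
    for b in state_b:
        if match_score(a, b) > 0.75:
            print(f"possible match: {a['name']!r} ~ {b['name']!r}")
```

Real systems add blocking to avoid comparing every pair of records, richer identity features and probabilistic weighting, but the core idea is the same: score candidate pairs and flag likely matches for review.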

A Behavioral Economics Approach to Digitalisation


Paper by Dirk Beerbaum and Julia M. Puaschunder: “A growing body of academic research in the fields of behavioural economics, political science and psychology demonstrates how an invisible hand can nudge people’s decisions towards a preferred option. Contrary to the assumptions of neoclassical economics, supporters of nudging argue that people have problems coping with a complex world because of their limited knowledge and restricted rationality. Technological improvement in the age of information has increased the possibilities to control innocent social media users or penalise private investors, and to reap the benefits of their existence through hidden persuasion and discrimination. Nudging enables nudgers to plunder the simple, uneducated and uninformed citizen and investor, who is neither aware of the nudging strategies nor able to oversee the tactics used by the nudgers (Puaschunder 2017a, b; 2018a, b).

The nudgers are thereby legally protected by the democratically assigned positions they hold. The law of motion of nudging societies is an unequal concentration of power among those who have access to compiled data and coding rules, which is relevant for political power and for influencing the decision usefulness available to investors (Puaschunder 2017a, b; 2018a, b). This paper takes as a case the “transparency technology XBRL (eXtensible Business Reporting Language)” (Sunstein 2013, 20), which should make data more accessible as well as usable for private investors. It is part of governments’ choice architecture on regulation (Sunstein 2013). However, XBRL is bound to a taxonomy (Piechocki and Felden 2007).

Drawing on theoretical literature and field research, a representation issue exists for principles-based accounting taxonomies (Beerbaum, Piechocki and Weber 2017), which intelligent machines applying Artificial Intelligence (AI) (Mwilu, Prat and Comyn-Wattiau 2015) can nudge to facilitate decision usefulness. This paper conceptualizes the ethical questions arising from taxonomy engineering based on machine learning systems: should the objective of the coding rule be to support or to influence human decision making, or rational artificiality? This paper therefore advocates for a democratisation of information, education and transparency about nudges and coding rules (Puaschunder 2017a, b; 2018a, b)…(More)”.
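
To make the taxonomy point concrete: an XBRL fact carries no meaning on its own; it acquires meaning from the taxonomy concept the preparer tags it with. The Python sketch below is a deliberately simplified illustration (the namespace URI and element name are stand-ins, and a valid XBRL instance would also require schema references and context and unit declarations):

```python
# Simplified illustration of XBRL tagging: the number 1000000 means nothing
# until it is bound to a taxonomy concept. The namespace URI and element
# name are hypothetical stand-ins, not a real taxonomy.
import xml.etree.ElementTree as ET

ET.register_namespace("ifrs", "http://xbrl.example.org/taxonomy")
root = ET.Element("xbrl")

# The preparer's choice of concept ("Revenue" here) is a coding rule: tag the
# same number with a different but plausible concept, and machines reading
# the filing -- including AI pipelines -- will interpret it differently.
fact = ET.SubElement(root, "{http://xbrl.example.org/taxonomy}Revenue",
                     {"contextRef": "FY2018", "unitRef": "USD", "decimals": "0"})
fact.text = "1000000"

print(ET.tostring(root, encoding="unicode"))
```

Principles-based standards leave that choice of concept open, which is exactly where the representation issue, and the scope for nudging, arises.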

The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines


Public Knowledge: “Today, we’re happy to announce our newest white paper, “The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines,” by Public Knowledge General Counsel Ryan Clough. The paper argues that the rapid and pervasive rise of artificial intelligence risks exploiting the most marginalized and vulnerable in our society. To mitigate these harms, Clough advocates for a new federal authority to help the U.S. government implement fair and equitable AI. Such an authority should provide the rest of the government with the expertise and experience needed to achieve five goals crucial to building ethical AI systems:

  • Boosting sector-specific regulators and confronting overarching policy challenges raised by AI;
  • Protecting public values in government procurement and implementation of AI;
  • Attracting AI practitioners to civil service, and building durable and centralized AI expertise within government;
  • Identifying major gaps in the laws and regulatory frameworks that govern AI; and
  • Coordinating strategies and priorities for international AI governance.

“Any individual can be misjudged and mistreated by artificial intelligence,” Clough explains, “but the record to date indicates that it is significantly more likely to happen to the less powerful, who also have less recourse to do anything about it.” The paper argues that a new federal authority is the best way to meet the profound and novel challenges AI poses for us all….(More)”.

Algorithmic Government: Automating Public Services and Supporting Civil Servants in using Data Science Technologies


Zeynep Engin and Philip Treleaven in the Computer Journal:  “The data science technologies of artificial intelligence (AI), Internet of Things (IoT), big data and behavioral/predictive analytics, and blockchain are poised to revolutionize government and create a new generation of GovTech start-ups. The impact from the ‘smartification’ of public services and the national infrastructure will be much more significant in comparison to any other sector given government’s function and importance to every institution and individual.

Potential GovTech systems include Chatbots and intelligent assistants for public engagement, Robo-advisors to support civil servants, real-time management of the national infrastructure using IoT and blockchain, automated compliance/regulation, public records securely stored in blockchain distributed ledgers, online judicial and dispute resolution systems, and laws/statutes encoded as blockchain smart contracts. Government is potentially the major ‘client’ and also ‘public champion’ for these new data technologies. This review paper uses our simple taxonomy of government services to provide an overview of data science automation being deployed by governments world-wide. The goal of this review paper is to encourage the Computer Science community to engage with government to develop these new systems to transform public services and support the work of civil servants….(More)”.

Declaration on Ethics and Data Protection in Artificial Intelligence


Declaration: “…The 40th International Conference of Data Protection and Privacy Commissioners considers that any creation, development and use of artificial intelligence systems shall fully respect human rights, particularly the rights to the protection of personal data and to privacy, as well as human dignity, non-discrimination and fundamental values, and shall provide solutions to allow individuals to maintain control and understanding of artificial intelligence systems.

The Conference therefore endorses the following guiding principles, as its core values to preserve human rights in the development of artificial intelligence:

  1. Artificial intelligence and machine learning technologies should be designed, developed and used in respect of fundamental human rights and in accordance with the fairness principle, in particular by:
     • considering individuals’ reasonable expectations by ensuring that the use of artificial intelligence systems remains consistent with their original purposes, and that the data are used in a way that is not incompatible with the original purpose of their collection,
     • taking into consideration not only the impact that the use of artificial intelligence may have on the individual, but also the collective impact on groups and on society at large,
     • ensuring that artificial intelligence systems are developed in a way that facilitates human development and does not obstruct or endanger it, thus recognizing the need for delineation and boundaries on certain uses,…(More)

When AI Misjudgment Is Not an Accident


Douglas Yeung at Scientific American: “The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies developed mainly by using photos of white men that do a poor job of identifying women and people with darker skin.

But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them, smuggled inside the data that helps those systems learn. This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.
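
To see how little machinery such an attack requires, consider the toy Python sketch below. The data, groups and “model” are entirely hypothetical, shown only to illustrate the label-flipping mechanism, not any real system or attack:

```python
# Toy sketch of label-flipping data poisoning. The data, groups and model
# are entirely hypothetical; the point is only how quietly biased training
# data skews whatever is learned from it.
import random

random.seed(0)

def approval_rate(data, group):
    """Fraction of a group's applicants labeled 'approve' in the training data."""
    labels = [label for g, label in data if g == group]
    return sum(labels) / len(labels)

# Training examples: (group, label), where label 1 = approve. Both groups
# are equally qualified, so both start with roughly 70% approvals.
clean = [(g, 1 if random.random() < 0.7 else 0)
         for g in ["A", "B"] * 500]

# The attacker flips a third of group B's approvals to rejections before
# the data reaches the model-training pipeline.
poisoned = [(g, 0) if g == "B" and label == 1 and random.random() < 0.33
            else (g, label) for g, label in clean]

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    print(f"{name}: group A approval {approval_rate(data, 'A'):.0%}, "
          f"group B approval {approval_rate(data, 'B'):.0%}")
# Any model trained on the poisoned data inherits the gap: it learns that
# group B applicants are rejected more often, with no visible tampering
# in the training code itself.
```

The tampering lives entirely in the data, which is what makes this class of attack so hard to audit after the fact.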

According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices. Algorithms could be designed to take advantage of seemingly innocuous factors that can be discriminatory. Employing existing techniques, but with biased data or algorithms, could make it easier to hide nefarious intent. Commercial data brokers collect and hold onto all kinds of information, such as online browsing or shopping habits, that could be used in this way.

Biased data could also serve as bait. Corporations could release biased data in the hope that competitors would use it to train artificial intelligence algorithms, degrading the quality of those competitors’ products and eroding consumer confidence in them.

Algorithmic bias attacks could also be used to more easily advance ideological agendas. If hate groups or political advocacy organizations want to target or exclude people on the basis of race, gender, religion or other characteristics, biased algorithms could give them either the justification or more advanced means to directly do so. Biased data also could come into play in redistricting efforts that entrench racial segregation (“redlining”) or restrict voting rights.

Finally, foreign actors could mount deliberate bias attacks as a national security threat, destabilizing societies by undermining government legitimacy or sharpening public polarization. This would fit naturally with tactics that reportedly seek to exploit ideological divides by creating social media posts and buying online ads designed to inflame racial tensions….(More)”.