Public Attitudes Toward Computer Algorithms


Aaron Smith at the Pew Research Center: “Algorithms are all around us, utilizing massive stores of data and complex analytics to make decisions with often significant impacts on humans. They recommend books and movies for us to read and watch, surface news stories they think we might find relevant, estimate the likelihood that a tumor is cancerous and predict whether someone might be a criminal or a worthwhile credit risk. But despite the growing presence of algorithms in many aspects of daily life, a Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations.

This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do….

The following are among the major findings.

The public expresses broad concerns about the fairness and acceptability of using computers for decision-making in situations with important real-world consequences

Majorities of Americans find it unacceptable to use algorithms to make decisions with real-world consequences for humans

By and large, the public views these examples of algorithmic decision-making as unfair to the people the computer-based systems are evaluating. Most notably, only around one-third of Americans think that the video job interview and personal finance score algorithms would be fair to job applicants and consumers. When asked directly whether they think the use of these algorithms is acceptable, a majority of the public says that they are not acceptable. Two-thirds of Americans (68%) find the personal finance score algorithm unacceptable, and 67% say the computer-aided video job analysis algorithm is unacceptable….

Attitudes toward algorithmic decision-making can depend heavily on context

Despite the consistencies in some of these responses, the survey also highlights the ways in which Americans’ attitudes toward algorithmic decision-making can depend heavily on the context of those decisions and the characteristics of the people who might be affected….

When it comes to the algorithms that underpin the social media environment, users’ comfort level with sharing their personal information also depends heavily on how and why their data are being used. A 75% majority of social media users say they would be comfortable sharing their data with those sites if it were used to recommend events they might like to attend. But that share falls to just 37% if their data are being used to deliver messages from political campaigns.

Across age groups, social media users are comfortable with their data being used to recommend events – but wary of that data being used for political messaging

In other instances, different types of users offer divergent views about the collection and use of their personal data. For instance, about two-thirds of social media users younger than 50 find it acceptable for social media platforms to use their personal data to recommend connecting with people they might want to know. But that view is shared by fewer than half of users ages 65 and older….(More)”.

Data-Driven Development


Report by the World Bank: “…Decisions based on data can greatly improve people’s lives. Data can uncover patterns, unexpected relationships and market trends, making it possible to address previously intractable problems and leverage hidden opportunities – for example, tracking genes associated with certain types of cancer to improve treatment, or using commuter travel patterns to devise public transportation that is affordable and accessible for users, as well as profitable for operators.

Data is clearly a precious commodity, and the report points out that people should have greater control over the use of their personal data. Broadly speaking, there are three possible answers to the question “Who controls our data?”: firms, governments, or users. No global consensus yet exists on the extent to which private firms that mine data about individuals should be free to use the data for profit and to improve services.

Users’ willingness to share data in return for benefits and free services – such as virtually unrestricted use of social media platforms – varies widely by country. In addition, early internet adopters, who grew up with the internet and are now aged 30–40, are the most willing to share (GfK 2017).

Are you willing to share your data? (source: GfK 2017)


On the other hand, data can worsen the digital divide – the data poor, who leave no digital trail because they have limited access, are most at risk of exclusion from services, opportunities and rights, as are those who lack a digital ID, for instance.

Firms and Data

For private sector firms, particularly those in developing countries, the report suggests how they might expand their markets and improve their competitive edge. Companies are already developing new markets and making profits by analyzing data to better understand their customers. This is transforming conventional business models. For years, telecommunications was funded by users paying for phone calls. Today, advertisers paying for users’ data and attention fund the internet, social media, and other platforms, such as apps, reversing the value flow.

Governments and Data

For governments and development professionals, the report provides guidance on how they might use data more creatively to help tackle key global challenges, such as eliminating extreme poverty, promoting shared prosperity, or mitigating the effects of climate change. The first step is developing appropriate guidelines for data sharing and use, and for anonymizing personal data. Governments are already beginning to use the huge quantities of data they hold to enhance service delivery, though they still have far to go to catch up with the commercial giants, the report finds.

Data for Development

The Information and Communications for Development report analyses how the data revolution is changing the behavior of governments, individuals, and firms and how these changes affect economic, social, and cultural development. This is a topic of growing importance that cannot be ignored, and the report aims to stimulate wider debate on the unique challenges and opportunities of data for development. It will be useful for policy makers, but also for anyone concerned about how their personal data is used and how the data revolution might affect their future job prospects….(More)”.

Behavioural Insights Toolkit and Ethical Guidelines for Policy Makers


Consultation Document by the OECD: “BASIC (Behaviour, Analysis, Strategies, Intervention, and Change) is an overarching framework for applying behavioural insights to public policy from the beginning to the end of the policy cycle. It is built on five stages that guide the application of behavioural insights, and it serves as a repository of best practices, proof of concepts and methodological standards for behavioural insights practitioners and policymakers interested in applying behavioural insights to public policy. Crucially, BASIC offers an approach to problem scoping that can be of relevance for any policymaker and practitioner when addressing a policy problem, be it behavioural or systemic.

The document provides an overview of the rationale, applicability and key tenets of BASIC. It walks practitioners through the five BASIC sequential stages with examples, and presents detailed ethical guidelines to be considered at each stage.

It has been developed by the OECD in partnership with Dr Pelle Guldborg Hansen of Roskilde University, Denmark. This version benefitted from feedback provided by the participants in the Western Cape Government – OECD Behavioural Insights Conference held in Cape Town on 27-28 September 2018….(More)”

Global Indicators of Regulatory Governance


The World Bank: “The Global Indicators of Regulatory Governance project is an initiative of the World Bank’s Global Indicators Group, which produces a range of datasets and benchmarking products on regulations and business activity around the world. These datasets include Doing Business, Enterprise Surveys, Enabling the Business of Agriculture, and Women, Business and the Law.

The Global Indicators of Regulatory Governance project explores how governments interact with the public when shaping regulations that affect their business community. Concerned stakeholders could be professional associations, civic groups or foreign investors. The project charts how interested groups learn about new regulations being considered, and the extent to which they are able to engage with officials on the content. It also measures whether or not governments assess the possible impact of new regulations in their countries (including economic, social and environmental considerations) and whether those calculations form part of the public consultation. Finally, Global Indicators of Regulatory Governance capture two additional components of a predictable regulatory environment: the ability of stakeholders to challenge regulations, and the ability of people to access all the laws and regulations currently in force in one, consolidated place.

The project (http://rulemaking.worldbank.org/en/about-us) grew out of an increasing recognition of the importance of transparency and accountability in government actions. Citizen access to the government rulemaking process is central to the creation of a business environment in which investors make long-range plans and investments. Greater levels of consultation are also associated with a higher quality of regulation….(More)”

Declaration of Cities Coalition for Digital Rights


New York City, Barcelona and Amsterdam: “We, the undersigned cities, formally come together to form the Cities Coalition for Digital Rights, to protect and uphold human rights on the internet at the local and global level.

The internet has become inseparable from our daily lives. Yet, every day, there are new cases of digital rights abuse, misuse and misinformation and concentration of power around the world: freedom of expression being censored; personal information, including our movements and communications, monitored, being shared and sold without consent; ‘black box’ algorithms being used to make unaccountable decisions; social media being used as a tool of harassment and hate speech; and democratic processes and public opinion being undermined.

As cities, the closest democratic institutions to the people, we are committed to eliminating impediments to harnessing technological opportunities that improve the lives of our constituents, and to providing trustworthy and secure digital services and infrastructures that support our communities. We strongly believe that human rights principles such as privacy, freedom of expression, and democracy must be incorporated by design into digital platforms starting with locally-controlled digital infrastructures and services.

As a coalition, and with the support of the United Nations Human Settlements Program (UN-Habitat), we will share best practices, learn from each other’s challenges and successes, and coordinate common initiatives and actions. Inspired by the Internet Rights and Principles Coalition (IRPC), the work of 300 international stakeholders over the past ten years, we are committed to the following five evolving principles:

01. Universal and equal access to the internet, and digital literacy

02. Privacy, data protection and security

03. Transparency, accountability, and non-discrimination of data, content and algorithms

04. Participatory democracy, diversity and inclusion

05. Open and ethical digital service standards”

What’s inside the black box of digital innovation?


George Atalla at Ernst and Young: “Analysis of the success or failure of government digital transformation projects tends to focus on the technology that has been introduced. Seldom discussed is the role played by organizational culture and by a government’s willingness to embrace new approaches and working practices. And yet factors such as an ability to transcend bureaucratic working styles and collaborate with external partners are just as vital to success as deploying the right IT…

The study, Inside the Black Box: Journey Mapping Digital Innovation in Government, used a range of qualitative research tools including rich pictures, journey maps and self-reporting questionnaires to tease out individual characteristics of team members, team sentiment, organizational governance and the role played by cultural factors. The approach was unique in that it captured the nuances of the process of digital innovation, rather than merely measuring inputs and outputs.

The aim of the study was to look inside the “black box” of digital transformation to find out what really goes on within the teams responsible for delivery. In every case, the implementation journey involved ups and downs, advances and setbacks, but there were always valuable lessons to learn. We have extracted the six key insights for governments, outlined below, to provide guidance for government and public sector leaders who are embarking on their own innovation journey…(More)”.

The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines


Public Knowledge: “Today, we’re happy to announce our newest white paper, “The Inevitability of AI Law & Policy: Preparing Government for the Era of Autonomous Machines,” by Public Knowledge General Counsel Ryan Clough. The paper argues that the rapid and pervasive rise of artificial intelligence risks exploiting the most marginalized and vulnerable in our society. To mitigate these harms, Clough advocates for a new federal authority to help the U.S. government implement fair and equitable AI. Such an authority should provide the rest of the government with the expertise and experience needed to achieve five goals crucial to building ethical AI systems:

  • Boosting sector-specific regulators and confronting overarching policy challenges raised by AI;
  • Protecting public values in government procurement and implementation of AI;
  • Attracting AI practitioners to civil service, and building durable and centralized AI expertise within government;
  • Identifying major gaps in the laws and regulatory frameworks that govern AI; and
  • Coordinating strategies and priorities for international AI governance.

“Any individual can be misjudged and mistreated by artificial intelligence,” Clough explains, “but the record to date indicates that it is significantly more likely to happen to the less powerful, who also have less recourse to do anything about it.” The paper argues that a new federal authority is the best way to meet the profound and novel challenges AI poses for us all….(More)”.

Whither large International Non-Governmental Organisations?


Working Paper by Penny Lawrence: “Large international non-government organisations (INGOs) seem to be facing an existential crisis over their role in the fight for social justice. Many, such as Save the Children or Oxfam, have become big, well-known brands with compliance expectations similar to those of big businesses. Yet the public still imagine them to be run by volunteers. Their context is changing so fast, and so unpredictably, that they are struggling to keep up. It is a time of extraordinary disruptive change, including digital transformation, changing societal norms and engagement expectations, and political upheaval and challenge. Fifteen years ago the political centre-ground in the UK seemed firm, with expanding space for civil society organisations to operate. Space for civil society voice now seems more threatened and challenged (Kenny 2015).

There has been a decline in trust in large charities in particular. This is partly a result of their own complacency, acting as if the argument for aid has been won; partly a result of questioned practices, e.g. the fundraising scandal of 2016/17 (where repeated mail drops to individuals requesting funds caused a public backlash) and the safeguarding scandal of 2018 (where historic cases of sexual abuse by INGO staff, including at Oxfam, were revisited by the media in the wake of the #MeToo movement); and partly a result of political challenge to INGOs’ advocacy and influencing role, their bias and their voice:

‘Some government ministers regard the charity sector with suspicion because it largely employs senior people with a left-wing perspective on life and because of other unfair criticisms of government it means there is regularly a tension between big charities and the conservative party’ Richard Wilson (Former Minister for Civil Society) 2018

On the other hand many feel that charities who have taken significant contracts to deliver services for the state have forfeited their independent voice and lost their way:

‘The voluntary sector risks declining over the next ten years into a mere instrument of a shrunken state, voiceless and toothless, unless it seizes the agenda and creates its own vision.’ Professor Nicholas Deakin 2014

It’s a tough context to be leading an INGO through, but INGOs have appeared ill prepared and slow to respond to the threats and opportunities, not realising how much they may need to change to respond to the fast evolving context and expectations. Large INGOs spend most of their energy exploiting present grant and contract business models, rather than exploring the opportunities to overcome poverty offered by such disruptive change. Their size and structures do not enable agility. They are too internally focused and self-referencing at a time when the world around them is changing so fast, and when political sands have shifted. Focussing on the internationalisation of structures and decision-making means large INGOs are ‘defeated by our own complexity’, as one INGO interviewee put it.

The purpose of this paper is to stimulate thinking amongst large INGOs at a time of such extraordinary disruptive change. The paper explores options for large INGOs, in terms of function and structure. After outlining large INGOs’ history, changing context, value and current thinking, it explores learning from others outside the development sector before suggesting emerging options. It reflects on what is encouraging and what is stopping change, and offers possible choices and pathways forward….(More)”.

Regulatory Technology – Replacing Law with Computer Code


LSE Legal Studies Working Paper by Eva Micheler and Anna Whaley: “Recently both the Bank of England and the Financial Conduct Authority have carried out experiments using new digital technology for regulatory purposes. The idea is to replace rules written in natural legal language with computer code and to use artificial intelligence for regulatory purposes.

This new way of designing public law is in line with the government’s vision for the UK to become a global leader in digital technology. It is also reflected in the FCA’s business plan.

The article reviews the technology and the advantages and disadvantages of combining it with regulatory law. It then informs the discussion from a broader public law perspective. It analyses regulatory technology through criteria developed in the mainstream regulatory discourse. It contributes to that discourse by anticipating problems that will arise as the technology evolves. In addition, the hope is to assist the government in avoiding mistakes that have occurred in the past and in creating a better system from the start…(More)”.
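The paper describes the general idea rather than a concrete implementation, but the notion of replacing natural legal language with computer code can be sketched. The rule below is entirely hypothetical – the 6% threshold and the `Firm` fields are invented for illustration, not drawn from any actual FCA or Bank of England rulebook:

```python
from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    tier1_capital: float        # in GBP millions
    risk_weighted_assets: float  # in GBP millions

# Hypothetical rule: "A firm must hold Tier 1 capital of at least 6% of
# its risk-weighted assets" – encoded as a function instead of legal prose.
MIN_TIER1_RATIO = 0.06

def complies(firm: Firm) -> bool:
    """Return True if the firm meets the minimum Tier 1 capital ratio."""
    if firm.risk_weighted_assets == 0:
        return True  # no risk exposure, trivially compliant
    return firm.tier1_capital / firm.risk_weighted_assets >= MIN_TIER1_RATIO

print(complies(Firm("A", tier1_capital=80, risk_weighted_assets=1000)))  # True  (8% ratio)
print(complies(Firm("B", tier1_capital=40, risk_weighted_assets=1000)))  # False (4% ratio)
```

In a regulatory-technology setting, a check like this could be run automatically against firms’ reported data, making compliance assessment deterministic and auditable – though, as the article argues, encoding law this way raises its own public law questions.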

On the Rise of FinTechs – Credit Scoring using Digital Footprints


NBER Working Paper by Tobias Berg, Valentin Burg, Ana Gombović and Manju Puri: “We analyze the information content of the digital footprint – information that people leave online simply by accessing or registering on a website – for predicting consumer default. Using more than 250,000 observations, we show that even simple, easily accessible variables from the digital footprint equal or exceed the information content of credit bureau (FICO) scores. Furthermore, the discriminatory power for unscorable customers is very similar to that of scorable customers. Our results have potentially wide implications for financial intermediaries’ business models, for access to credit for the unbanked, and for the behavior of consumers, firms, and regulators in the digital sphere….(More)”.
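The working paper reports predictive power but does not publish its model, so the following is only a minimal sketch of the kind of scoring the abstract describes, assuming a logistic model over binary footprint indicators. The variable names echo the sort of digital footprint features discussed in this literature (device type, email host, time of order), and all coefficients are invented for illustration:

```python
import math

# Illustrative only: these weights are invented, not estimated from data.
# The paper's point is that simple footprint variables carry default-
# predictive signal; a real model would fit coefficients on observations.
WEIGHTS = {
    "intercept": -3.0,
    "mobile_device": 0.6,    # order placed from a mobile device
    "free_email_host": 0.4,  # email address at a free webmail provider
    "night_order": 0.5,      # purchase placed between midnight and 6 a.m.
    "name_in_email": -0.3,   # email address contains the customer's name
}

def default_probability(footprint: dict) -> float:
    """Logistic model mapping binary footprint indicators to P(default)."""
    z = WEIGHTS["intercept"] + sum(
        WEIGHTS[k] * footprint.get(k, 0) for k in WEIGHTS if k != "intercept"
    )
    return 1 / (1 + math.exp(-z))

low_risk = {"name_in_email": 1}
high_risk = {"mobile_device": 1, "free_email_host": 1, "night_order": 1}
print(round(default_probability(low_risk), 3))   # 0.036
print(round(default_probability(high_risk), 3))  # 0.182
```

Even such crude indicators, properly estimated on real data, are what the authors find can equal or exceed the discriminatory power of credit bureau scores.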