Reclaiming Free Speech for Democracy and Human Rights in a Digitally Networked World


Paper by Rebecca MacKinnon: “…divided into three sections. The first section discusses the relevance of international human rights standards to U.S. internet platforms and universities. The second section identifies three common challenges to universities and internet platforms, with clear policy implications. The third section recommends approaches to internet policy that can better protect human rights and strengthen democracy. The paper concludes with proposals for how universities can contribute to the creation of a more robust digital information ecosystem that protects free speech along with other human rights, and advances social justice.

1) International human rights standards are an essential complement to the First Amendment. While the First Amendment does not apply to how privately owned and operated digital platforms set and enforce rules governing their users’ speech, international human rights standards set forth a clear framework to which companies and any other type of private organization can and should be held accountable. Scholars of international law and freedom of expression point out that Article 19 of the International Covenant on Civil and Political Rights encompasses not only free speech, but also the right to access information and to formulate opinions without interference. Notably, this aspect of international human rights law is relevant in addressing the harms caused by disinformation campaigns aided by algorithms and targeted profiling. In protecting freedom of expression, private companies and organizations must also protect and respect other human rights, including privacy, non-discrimination, assembly, the right to political participation, and the basic right to security of person.

2) Three core challenges are common to universities and internet platforms. These common challenges must be addressed in order to protect free speech alongside other fundamental human rights including non-discrimination:

Challenge 1: The pretense of neutrality amplifies bias in an unjust world. In an inequitable and unjust world, “neutral” platforms and institutions will perpetuate and even exacerbate inequities and power imbalances unless they understand and adjust for those inequities and imbalances. This fundamental civil rights concept is better understood by the leaders of universities than by those in charge of social media platforms, which have a clear impact on public discourse and civic engagement.

Challenge 2: Rules and enforcement are inadequate without strong leadership and cultural norms. Rules governing speech, and their enforcement, can be ineffective and even counterproductive unless they are accompanied by values-based leadership. Institutional cultures should take into account the context and circumstances of unique situations, individuals, and communities. For rules to have legitimacy, communities that are governed by them must be actively engaged in building a shared culture of responsibility.

Challenge 3: Communities need to be able to shape how and where they enable discourse and conduct learning. Different types of discourse that serve different purposes require differently designed spaces—be they physical or digital. It is important for communities to be able to set their own rules of engagement, and shape their spaces for different types of discourse. Overdependence upon a small number of corporate-controlled platforms does not serve communities well. Online free speech will be better served not only by policies that foster competition and strengthen antitrust law, but also by policies and resources that support the development of nonprofit, open source, and community-driven digital public infrastructure.

3) A clear and consistent policy environment that supports civil rights objectives and is compatible with human rights standards is essential to ensure that the digital public sphere evolves in a way that genuinely protects free speech and advances social justice. Analysis of twenty different consensus declarations, charters, and principles produced by international coalitions of civil society organizations reveals broad consensus with U.S.-based advocates of civil rights-compatible technology policy….(More)”.

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020)


Foreword of a Report by the Australian Human Rights Commission: “Artificial intelligence (AI) promises better, smarter decision making.

Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and risk, to determine what sorts of people make the ‘best’ customers… In fact, the use cases for AI are limited only by our imagination.

However, using AI carries with it the risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow.

Algorithmic bias is a kind of error associated with the use of AI in decision making, and often results in unfairness. Algorithmic bias can arise in many ways. Sometimes the problem is with the design of the AI-powered decision-making tool itself. Sometimes the problem lies with the data set that was used to train the AI tool, which could replicate or even make worse existing problems, including societal inequality.

Algorithmic bias can cause real harm. It can lead to a person being unfairly treated, or even suffering unlawful discrimination, on the basis of characteristics such as their race, age, sex or disability.

This project started by simulating a typical decision-making process. In this technical paper, we explore how algorithmic bias can ‘creep in’ to AI systems and, most importantly, how this problem can be addressed.

To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating the problem, however, will be relevant far beyond this specific situation.

Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. However, good businesses go further than the bare minimum legal requirements, to ensure they always act ethically and do not jeopardise their good name.

Rigorous design, testing and monitoring can avoid algorithmic bias. This technical paper offers some guidance for companies to ensure that when they use AI, their decisions are fair, accurate and compliant with human rights….(More)”
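
To make the report's call for rigorous testing more concrete, the following is a minimal sketch, not taken from the Commission's paper, of one common bias check: simulate a scoring tool in the spirit of the hypothetical electricity-retailer scenario and compare how often each demographic group receives the favourable decision. The group labels, threshold, and synthetic data are all assumptions made for illustration.

```python
# Minimal, illustrative sketch (not from the report): simulate an AI-style
# scoring tool for the hypothetical electricity retailer and check whether its
# favourable decisions fall unevenly across demographic groups.
# Group labels, the threshold, and the synthetic data are assumptions.
import random
from collections import defaultdict

random.seed(42)

def make_customer(group):
    # Synthetic score correlated with group membership, standing in for
    # historical training data that encodes existing societal inequities.
    mean = 0.55 if group == "A" else 0.45
    score = max(0.0, min(1.0, random.gauss(mean, 0.15)))
    return {"group": group, "score": score}

customers = [make_customer(random.choice(["A", "B"])) for _ in range(10_000)]

THRESHOLD = 0.5  # scores above this receive the standard (favourable) offer

offered = defaultdict(int)
totals = defaultdict(int)
for c in customers:
    totals[c["group"]] += 1
    if c["score"] >= THRESHOLD:
        offered[c["group"]] += 1

rates = {g: offered[g] / totals[g] for g in totals}
print("Favourable-offer rate per group:", rates)

# Disparate-impact ratio: the worst-off group's rate divided by the best-off
# group's. A value well below 1.0 is the kind of red flag that design review,
# testing and ongoing monitoring are meant to catch.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
```

Running a check like this before and after deployment is one way to operationalise the rigorous design, testing and monitoring the report recommends.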

Four Principles to Make Data Tools Work Better for Kids and Families


Blog by the Annie E. Casey Foundation: “Advanced data analytics are deeply embedded in the operations of public and private institutions and shape the opportunities available to youth and families. Whether these tools benefit or harm communities depends on their design, use and oversight, according to a report from the Annie E. Casey Foundation.

Four Principles to Make Advanced Data Analytics Work for Children and Families examines the growing field of advanced data analytics and offers guidance to steer the use of big data in social programs and policy….

The Foundation report identifies four principles — complete with examples and recommendations — to help steer the growing field of data science in the right direction.

Four Principles for Data Tools

  1. Expand opportunity for children and families. Most established uses of advanced analytics in education, social services and criminal justice focus on problems facing youth and families. Promising uses of advanced analytics go beyond mitigating harm and help to identify so-called odds beaters and new opportunities for youth.
    • Example: The Children’s Data Network at the University of Southern California is helping the state’s departments of education and social services explore why some students succeed despite negative experiences and what protective factors merit more investment.
    • Recommendation: Government and its philanthropic partners need to test if novel data science applications can create new insights and when it’s best to apply them.
       
  2. Provide transparency and evidence. Advanced analytical tools must earn and maintain a social license to operate. The public has a right to know what decisions these tools are informing or automating, how they have been independently validated, and who is accountable for answering and addressing concerns about how they work.
    • Recommendations: Local and state task forces can be excellent laboratories for testing how to engage youth and communities in discussions about advanced analytics applications and the policy frameworks needed to regulate their use. In addition, public and private funders should avoid supporting private algorithms whose design and performance are shielded by trade secrecy claims. Instead, they should fund and promote efforts to develop, evaluate and adapt transparent and effective models.
       
  3. Empower communities. The field of advanced data analytics often treats children and families as clients, patients and consumers. Put to better use, these same tools can help elucidate and reform the systems acting upon children and families. For this shift to occur, institutions must focus analyses and risk assessments on structural barriers to opportunity rather than individual profiles.
    • Recommendation: In debates about the use of data science, greater investment is needed to amplify the voices of youth and their communities.
       
  4. Promote equitable outcomes. Useful advanced analytics tools should promote more equitable outcomes for historically disadvantaged groups. New investments in advanced analytics are only worthwhile if they aim to correct the well-documented bias embedded in existing models.
    • Recommendations: Advanced analytical tools should only be introduced when they reduce the opportunity deficit for disadvantaged groups — a move that will take organizing and advocacy to establish and new policy development to institutionalize. Philanthropy and government also have roles to play in helping communities test and improve tools and examples that already exist….(More)”.

Right/Wrong:How Technology Transforms Our Ethics


Book by Juan Enriquez: “Most people have a strong sense of right and wrong, and they aren’t shy about expressing their opinions. But when we take a polarizing stand on something we regard as an eternal truth, we often forget that ethics evolve over time. Many shifts in the right versus wrong pendulum are driven by advances in technology. Our great-grandparents might be shocked by in vitro fertilization; our great-grandchildren might be shocked by the messiness of pregnancy, childbirth, and unedited genes. In Right/Wrong, Juan Enriquez reflects on what happens to our ethics as technology makes the once unimaginable a commonplace occurrence.

Evolving technology changes ethics. Enriquez points out that, contrary to common wisdom, technology often enables more ethical behaviors. Technology challenges old beliefs and upends institutions that do not grow and change. With wit and compassion, Enriquez takes on a series of technology-influenced ethical dilemmas, from sexual liberation to climate change to the “immortality” of mistakes on social media. (“Facebook, Twitter, Instagram, and Google are electronic tattoos.”) He cautions us to judge those who “should have known better,” given today’s vantage point, with less fury and more compassion. We need a quality often absent in today’s charged debates: humility. Judge those in the past as we hope to be judged in the future….(More)”.

The CARE Principles for Indigenous Data Governance


Paper by Stephanie Russo Carroll et al: “Concerns about secondary use of data and limited opportunities for benefit-sharing have focused attention on the tension that Indigenous communities feel between (1) protecting Indigenous rights and interests in Indigenous data (including traditional knowledges) and (2) supporting open data, machine learning, broad data sharing, and big data initiatives. The International Indigenous Data Sovereignty Interest Group (within the Research Data Alliance) is a network of nation-state based Indigenous data sovereignty networks and individuals that developed the ‘CARE Principles for Indigenous Data Governance’ (Collective Benefit, Authority to Control, Responsibility, and Ethics) in consultation with Indigenous Peoples, scholars, non-profit organizations, and governments. The CARE Principles are people– and purpose-oriented, reflecting the crucial role of data in advancing innovation, governance, and self-determination among Indigenous Peoples. The Principles complement the existing data-centric approach represented in the ‘FAIR Guiding Principles for scientific data management and stewardship’ (Findable, Accessible, Interoperable, Reusable). The CARE Principles build upon earlier work by the Te Mana Raraunga Maori Data Sovereignty Network, US Indigenous Data Sovereignty Network, Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Collective, and numerous Indigenous Peoples, nations, and communities. The goal is that stewards and other users of Indigenous data will ‘Be FAIR and CARE.’ In this first formal publication of the CARE Principles, we articulate their rationale, describe their relation to the FAIR Principles, and present examples of their application….(More)” See also Selected Readings on Indigenous Data Sovereignty.

Data as Property?


Blog by Salomé Viljoen: “Since the proliferation of the World Wide Web in the 1990s, critics of widely used internet communications services have warned of the misuse of personal data. Alongside familiar concerns regarding user privacy and state surveillance, a now-decades-long thread connects a group of theorists who view data—and in particular data about people—as central to what they have termed informational capitalism.1 Critics locate in datafication—the transformation of information into commodity—a particular economic process of value creation that demarcates informational capitalism from its predecessors. Whether these critics take “information” or “capitalism” as the modifier warranting primary concern, datafication, in their analysis, serves a dual role: both a process of production and a form of injustice.

In arguments levied against informational capitalism, the creation, collection, and use of data feature prominently as an unjust way to order productive activity. For instance, in her 2019 blockbuster The Age of Surveillance Capitalism, Shoshana Zuboff likens our inner lives to a pre-Colonial continent, invaded and strip-mined of data by technology companies seeking profits.2 Elsewhere, Jathan Sadowski identifies data as a distinct form of capital, and accordingly links the imperative to collect data to the perpetual cycle of capital accumulation.3 Julie Cohen, in the Polanyian tradition, traces the “quasi-ownership through enclosure” of data and identifies the processing of personal information in “data refineries” as a fourth factor of production under informational capitalism.4

Critiques breed proposals for reform. Thus, data governance emerges as key terrain on which to discipline firms engaged in datafication and to respond to the injustices of informational capitalism. Scholars, activists, technologists and even presidential candidates have all proposed data governance reforms to address the social ills generated by the technology industry.

These reforms generally come in two varieties. Propertarian reforms diagnose the source of datafication’s injustice in the absence of formal property (or alternatively, labor) rights regulating the process of production. In 2016, inventor of the world wide web Sir Tim Berners-Lee founded Solid, a web decentralization platform, out of his concern over how data extraction fuels the growing power imbalance of the web which, he notes, “has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas.” In response, Solid “aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy.” Solid is one popular project within the blockchain community’s #ownyourdata movement; another is Radical Markets, a suite of proposals from Glen Weyl (an economist and researcher at Microsoft) that includes developing a labor market for data. Like Solid, Weyl’s project is in part a response to inequality: it aims to disrupt the digital economy’s “technofeudalism,” where the unremunerated fruits of data laborers’ toil help drive the inequality of the technology economy writ large.5 Progressive politicians from Andrew Yang to Alexandria Ocasio-Cortez have similarly advanced proposals to reform the information economy, proposing variations on the theme of user-ownership over their personal data.

The second type of reform, which I call dignitarian, takes a further step beyond asserting rights to data-as-property and resists data’s commodification altogether, drawing on a framework of civil and human rights to advocate for increased protections. Proposed reforms along these lines grant individuals meaningful capacity to say no to forms of data collection they disagree with, to determine the fate of data collected about them, and to assert rights against data about them being used in ways that violate their interests….(More)”.

Civil Liberties in Times of Crisis


Paper by Marcella Alsan, Luca Braghieri, Sarah Eichmeyer, Minjeong Joyce Kim, Stefanie Stantcheva, and David Y. Yang: “The respect for and protection of civil liberties are one of the fundamental roles of the state, and many consider civil liberties as sacred and “nontradable.” Using cross-country representative surveys that cover 15 countries and over 370,000 respondents, we study whether and the extent to which citizens are willing to trade off civil liberties during the COVID-19 pandemic, one of the largest crises in recent history. We find four main results. First, many around the world reveal a clear willingness to trade off civil liberties for improved public health conditions. Second, consistent across countries, exposure to health risks is associated with citizens’ greater willingness to trade off civil liberties, though individuals who are more economically disadvantaged are less willing to do so. Third, attitudes concerning such trade-offs are elastic to information. Fourth, we document a gradual decline and then plateau in citizens’ overall willingness to sacrifice rights and freedom as the pandemic progresses, though the underlying correlation between individuals’ worry about health and their attitudes over the trade-offs has been remarkably constant. Our results suggest that citizens do not view civil liberties as sacred values; rather, they are willing to trade off civil liberties more or less readily, at least in the short-run, depending on their own circumstances and information….(More)”.

Responsible group data for children


Issue Brief by Andrew Young: “Understanding how and why group data is collected and what can be done to protect children’s rights… While the data protection field largely focuses on individual data harms, that focus obscures, and can even exacerbate, the harms data can inflict on groups of people, such as the residents of a particular village, rather than on individuals.

Though not well-represented in the current responsible data literature and policy domains writ large, the challenges group data poses are immense. Moreover, the unique and amplified group data risks facing children are even less scrutinized and understood.

To achieve Responsible Data for Children (RD4C) and ensure effective and legitimate governance of children’s data, government policymakers, data practitioners, and institutional decision makers need to ensure children’s group data are a core consideration in all relevant policies, procedures, and practices….(More)”. (See also Responsible Data for Children).

UK passport photo checker shows bias against dark-skinned women


Maryam Ahmed at BBC News: “Women with darker skin are more than twice as likely as lighter-skinned men to be told their photos fail UK passport rules when they submit them online, according to a BBC investigation.

One black student said she was wrongly told her mouth looked open each time she uploaded five different photos to the government website.

This shows how “systemic racism” can spread, Elaine Owusu said.

The Home Office said the tool helped users get their passports more quickly.

“The indicative check [helps] our customers to submit a photo that is right the first time,” said a spokeswoman.

“Over nine million people have used this service and our systems are improving.

“We will continue to develop and evaluate our systems with the objective of making applying for a passport as simple as possible for all.”

Skin colour

The passport application website uses an automated check to detect poor quality photos which do not meet Home Office rules. These include having a neutral expression, a closed mouth and looking straight at the camera.

BBC research found this check to be less accurate on darker-skinned people.

More than 1,000 photographs of politicians from across the world were fed into the online checker.

The results indicated:

  • Dark-skinned women are told their photos are poor quality 22% of the time, while the figure for light-skinned women is 14%
  • Dark-skinned men are told their photos are poor quality 15% of the time, while the figure for light-skinned men is 9%

Photos of women with the darkest skin were four times more likely to be graded poor quality than photos of women with the lightest skin….(More)”.
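
For readers curious how such figures are tabulated, the following is a minimal sketch, not the BBC's actual code or data, of the kind of grouped rate calculation the investigation describes: run each photo through the checker, record the outcome alongside the subject's skin tone and gender, and compute the share flagged as poor quality per group. The records shown are placeholders.

```python
# Illustrative sketch only: tabulate poor-quality flag rates by skin tone and
# gender from per-photo checker outcomes. The records below are placeholders,
# not the BBC's data; the investigation used 1,000+ photos of politicians.
from collections import Counter

results = [
    # (skin_tone, gender, flagged_as_poor_quality)
    ("dark", "female", True),
    ("dark", "female", False),
    ("light", "female", False),
    ("dark", "male", False),
    ("light", "male", False),
]

flagged = Counter()
totals = Counter()
for skin_tone, gender, poor_quality in results:
    key = (skin_tone, gender)
    totals[key] += 1
    if poor_quality:
        flagged[key] += 1

for key in sorted(totals):
    rate = flagged[key] / totals[key]
    print(f"{key[0]}-skinned {key[1]}: {rate:.0%} flagged as poor quality")
```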

The secret to building a smart city that’s antiracist


Article by Eliza McCullough: “….Instead of a smart city model that extracts from, surveils, and displaces poor people of color, we need a democratic model that allows community members to decide how technological infrastructure operates and to ensure the equitable distribution of benefits. Doing so will allow us to create cities defined by inclusion, shared ownership, and shared prosperity.

In 2016, Barcelona, for example, launched its Digital City Plan, which aims to empower residents with control of technology used in their communities. The document incorporates over 8,000 proposals from residents and includes plans for open source software, government ownership of all ICT infrastructure, and a pilot platform to help citizens maintain control over their personal data. As a result, the city now has free applications that allow residents to easily propose city development ideas, actively participate in city council meetings, and choose how their data is shared.

In the U.S., we need a framework for tech sovereignty that incorporates a racial equity approach: In a racist society, race neutrality facilitates continued exclusion and exploitation of people of color. Digital Justice Lab in Toronto illustrates one critical element of this kind of approach: access to information. In 2018, the organization gave community groups a series of grants to hold public events that shared resources and information about digital rights. Their collaborative approach intentionally focuses on the specific needs of people of color and other marginalized groups.

The turn toward intensified surveillance infrastructure in the midst of the coronavirus outbreak makes the need to adopt such practices all the more crucial. Democratic tech models that uplift marginalized populations provide us the chance to build a city that is just and open to everyone….(More)”.