Interventions to mitigate the racially discriminatory impacts of emerging tech, including AI


Joint Civil Society Statement: “As widespread recent protests have highlighted, racial inequality remains an urgent and devastating issue around the world, and this is as true in the context of technology as it is everywhere else. In fact, it may be more so, as algorithmic technologies based on big data are deployed at previously unimaginable scale, reproducing the discriminatory systems that build and govern them.

The undersigned organizations welcome the publication of the report “Racial discrimination and emerging digital technologies: a human rights analysis,” by Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume, and wish to underscore the importance and timeliness of a number of the recommendations made therein:

  1. Technologies that have had or will have significant racially discriminatory impacts should be banned outright.
    While incremental regulatory approaches may be appropriate in some contexts, where a technology is demonstrably likely to cause racially discriminatory harm, it should not be deployed until that harm can be prevented. Moreover, certain technologies may always have disparate racial impacts, no matter how much their accuracy can be improved. In the present moment, racially discriminatory technologies include facial and affect recognition technology and so-called predictive analytics. We support Special Rapporteur Achiume’s call for mandatory human rights impact assessments as a prerequisite for the adoption of new technologies. We also believe that where such assessments reveal that a technology has a high likelihood of deleterious racially disparate impacts, states should prevent its use through a ban or moratorium. We join the Special Rapporteur in welcoming recent municipal bans, for example, on the use of facial recognition technology, and encourage national governments to adopt similar policies. Correspondingly, we reiterate our support for states’ imposition of an immediate moratorium on the trade and use of privately developed surveillance tools until such time as states enact appropriate safeguards, and congratulate Special Rapporteur Achiume on joining that call.
  2. Gender mainstreaming and representation along racial, national and other intersecting identities require radical improvement at all levels of the tech sector.
  3. Technologists cannot solve political, social, and economic problems without the input of domain experts and those personally impacted.
  4. Access to technology is as urgent an issue of racial discrimination as inequity in the design of technologies themselves.
  5. Representative and disaggregated data is a necessary, if not sufficient, condition for racial equity in emerging digital technologies, but it must be collected and managed equitably as well.
  6. States as well as corporations must provide remedies for racial discrimination, including reparations….(More)”.

The Misinformation Edition


Online exhibition by The Glass Room: “…In this exhibition – aimed at young people as well as adults – we explore how social media and the web have changed the way we read information and react to it. Learn why finding “fake news” is not as easy as it sounds, and how the term “fake news” is as much a problem as the news it describes. Dive into the world of deepfakes, which are now so realistic that they are virtually impossible to detect. And find out why social media platforms are designed to keep us hooked, and how they can be used to change our minds. You can also read our free Data Detox Kit, which reveals how to tell facts from fiction and why it benefits everyone around us when we take a little more care about what we share…(More)”.


The Atlas of Surveillance


Electronic Frontier Foundation: “Law enforcement surveillance technologies aren’t always secret. They can be discovered in news articles and government meeting agendas, in company press releases and social media posts. That information just hasn’t been aggregated before.

That’s the starting point for the Atlas of Surveillance, a collaborative effort between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism. Through a combination of crowdsourcing and data journalism, we are creating the largest-ever repository of information on which law enforcement agencies are using what surveillance technologies. The aim is to generate a resource for journalists, academics, and, most importantly, members of the public to check what’s been purchased locally and how technologies are spreading across the country.

We specifically focused on the most pervasive technologies, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, and gunshot detection. Although we have amassed more than 5,000 datapoints in 3,000 jurisdictions, our research only reveals the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies….(More)”.
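
To make the Atlas concrete: each crowdsourced datapoint ties an agency to a technology and to the public source documenting its use. Below is a minimal sketch in Python of that kind of record and a simple tally over it; the field names and example values are our own illustrative assumptions, not the Atlas’s actual schema.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Iterable

@dataclass
class SurveillanceRecord:
    """One crowdsourced datapoint (hypothetical schema)."""
    agency: str       # e.g. "Example Police Department"
    state: str        # two-letter code, e.g. "NV"
    technology: str   # e.g. "drones", "body-worn cameras", "face recognition"
    source_url: str   # the news article, meeting agenda, or press release

def technology_counts(records: Iterable[SurveillanceRecord]) -> Counter:
    """Tally how many records document each technology type."""
    return Counter(r.technology for r in records)

records = [
    SurveillanceRecord("Example PD", "NV", "drones", "https://example.org/a"),
    SurveillanceRecord("Example Sheriff", "CA", "drones", "https://example.org/b"),
    SurveillanceRecord("Example PD", "NV", "face recognition", "https://example.org/c"),
]
print(technology_counts(records))
# Counter({'drones': 2, 'face recognition': 1})
```

Even this toy version shows why aggregation matters: once the records share a structure, questions like “which technologies are spreading fastest?” become a one-line query rather than a research project.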

Four Principles for Integrating AI & Good Governance


Oxford Commission on AI and Good Governance: “Many governments, public agencies and institutions already employ AI in providing public services, distributing resources and delivering governance goods. In the public sector, AI-enabled governance may afford new efficiencies that have the potential to transform a wide array of public service tasks. But short-sighted design and use of AI can create new problems, entrench existing inequalities, and calcify and ultimately undermine government organizations.

Frameworks for the procurement and implementation of AI in public service have largely remained undeveloped. Frequently, existing regulations and national laws are no longer fit for purpose to ensure good behaviour (of either AI or private suppliers) and are ill-equipped to provide guidance on the democratic use of AI. As technology evolves rapidly, we need rules to guide the use of AI in ways that safeguard democratic values. Under what conditions can AI be put into service for good governance?

We offer a framework for integrating AI with good governance. We believe that with dedicated attention and evidence-based policy research, it should be possible to overcome the combined technical and organizational challenges of successfully integrating AI with good governance. Doing so requires working towards:


Inclusive Design: issues around discrimination and bias of AI in relation to inadequate data sets, exclusion of minorities and under-represented groups, and the lack of diversity in design.
Informed Procurement: issues around acquisition and development in relation to due diligence, design and usability specifications, and the assessment of risks and benefits.
Purposeful Implementation: issues around the use of AI in relation to interoperability, training needs for public servants, and integration with decision-making processes.
Persistent Accountability: issues around the accountability and transparency of AI in relation to ‘black box’ algorithms, the interpretability and explainability of systems, and monitoring and auditing…(More)”

Tackling the misinformation epidemic with “In Event of Moon Disaster”


MIT Open Learning: “Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be….

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Francesca Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources….(More)”.
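
The deep-learning stages described in the excerpt (Respeecher’s voice conversion, Canny AI’s video dialogue replacement) are proprietary systems, but the mundane final step, muxing a synthesized speech track into the manipulated footage, can be done with standard tooling. Below is a minimal sketch using ffmpeg from Python; the filenames are hypothetical, and this is illustrative only, not the project’s actual pipeline.

```python
import subprocess

def replace_audio_track(video_in: str, new_audio: str, out_path: str) -> None:
    """Mux a separately produced audio track into a video,
    copying the video stream unchanged (no re-encoding)."""
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", video_in,     # manipulated footage (visuals to keep)
            "-i", new_audio,    # separately synthesized speech track
            "-map", "0:v:0",    # take video from the first input
            "-map", "1:a:0",    # take audio from the second input
            "-c:v", "copy",     # copy the video stream without re-encoding
            "-shortest",        # end at the shorter of the two streams
            out_path,
        ],
        check=True,
    )

# Hypothetical filenames, for illustration only:
# replace_audio_track("nixon_video.mp4", "synthetic_speech.wav", "deepfake.mp4")
```

The point of the sketch is only that assembling a deepfake’s components is trivial once the synthetic media exist; the hard (and closely guarded) parts are the learning systems that produce them.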

Digital inequalities 3.0: Emergent inequalities in the information age


Essay by Laura Robinson et al. in First Monday: “Marking the 25th anniversary of the “digital divide,” we continue our metaphor of the digital inequality stack by mapping out the rapidly evolving nature of digital inequality using a broad lens. We tackle complex, and often unseen, inequalities spawned by the platform economy, automation, big data, algorithms, cybercrime, cybersafety, gaming, emotional well-being, assistive technologies, civic engagement, and mobility. These inequalities are woven throughout the digital inequality stack in many ways, including differentiated access, use, consumption, literacies, skills, and production. While many users are competent prosumers who nimbly work within different layers of the stack, very few individuals are “full stack engineers” able to create or recreate digital devices, networks, and software platforms as pure producers. This new frontier of digital inequalities further differentiates digitally skilled creators from mere users. Therefore, we document emergent forms of inequality that radically diminish individuals’ agency and augment the power of technology creators, big tech, and other already powerful social actors whose dominance is increasing….(More)”

Adolescent Mental Health: Using a Participatory Mapping Methodology to Identify Key Priorities for Data Collaboration


Blog by Alexandra Shaw, Andrew J. Zahuranec, Andrew Young, Stefaan G. Verhulst, Jennifer Requejo, and Liliana Carvajal: “Adolescence is a unique stage of life. The brain undergoes rapid development; individuals face new experiences, relationships, and environments. These events can be exciting, but they can also be a source of instability and hardship. Half of all mental health conditions manifest by early adolescence, and between 10 and 20 percent of all children and adolescents report mental health conditions. Despite the increased risks and concerns for adolescents’ well-being, there remain significant gaps in the availability of country-level data for policymakers to address these issues.

In June, The GovLab partnered with colleagues at UNICEF’s Health and HIV team in the Division of Data, Analysis, Planning & Monitoring and the Data for Children Collaborative — a collaboration between UNICEF, the Scottish Government, and the University of Edinburgh — to design and apply a new methodology for participatory mapping and prioritization of key topics and issues associated with adolescent mental health that could be addressed through enhanced data collaboration….

The event led to three main takeaways. First, the topic mapping allowed participants to deliberate on and prioritize the various aspects of adolescent mental health in a more holistic manner. Unlike the “blind men and the elephant” parable, a topic map allows the participants to see and discuss the interrelated parts of the topic, including those with which they might be less familiar.

Second, the workshops demonstrated the importance of tapping into distributed expertise via participatory processes. While the topic map provided a starting point, the inclusion of various experts allowed the findings of the document to be reviewed in a rapid, legitimate fashion. The diverse inputs helped ensure the individual aspects could be prioritized without a perspective being ignored.

Lastly, the approach showed the importance of data initiatives being driven and validated by those individuals representing the demand. By soliciting the input of those who would actually use the data, the methodology ensured data initiatives focused on the aspects thought to be most relevant and of greatest importance….(More)”

Addressing trust in public sector data use


Centre for Data Ethics and Innovation: “Data sharing is fundamental to effective government and the running of public services. But it is not an end in itself. Data needs to be shared to drive improvements in service delivery and benefit citizens. For this to happen sustainably and effectively, public trust in the way data is shared and used is vital. Without such trust, the government and wider public sector risk losing society’s consent, setting back innovation as well as the smooth running of public services. Maximising the benefits of data-driven technology therefore requires a solid foundation of societal approval.

AI and data-driven technology offer extraordinary potential to improve decision-making and service delivery in the public sector – from improved diagnostics to more efficient infrastructure and personalised public services. This makes effective use of data more important than it has ever been, and requires a step-change in the way data is shared and used. Yet sharing more data also poses risks and challenges to current governance arrangements.

The only way to build trust sustainably is to operate in a trustworthy way. Without adequate safeguards, the collection and use of personal data risks changing power relationships between the citizen and the state. Insights derived from big data and the matching of different data sets can also undermine individual privacy or personal autonomy. Trade-offs are required that reflect democratic values, wider public acceptability and a shared vision of a data-driven society. CDEI has a key role to play in exploring this challenge and setting out how it can be addressed. This report identifies barriers to data sharing, but focuses on building and sustaining the public trust that is vital if society is to maximise the benefits of data-driven technology.

There are many areas where the sharing of anonymised and identifiable personal data by the public sector already improves services, prevents harm, and benefits the public. Over the last 20 years, successive governments have adopted various measures to increase data sharing, including creating new legal gateways for data sharing. However, despite these efforts, and significant successes in areas like open data, data sharing continues to be challenging and resource-intensive. This report identifies a range of technical, legal and cultural barriers that can inhibit data sharing.

Barriers to data sharing in the public sector

Technical barriers include limited adoption of common data standards and inconsistent security requirements across the public sector. Such inconsistency can prevent data sharing, or increase the cost and time for organisations to finalise data sharing agreements.

While there are often pre-existing legal gateways for data sharing, underpinned by data protection legislation, there is still considerable legal confusion on the part of public sector bodies wishing to share data, which can cause them to start from scratch when determining legality and to commit significant resources to legal advice. It is not unusual for the development of data sharing agreements to delay the projects for which the data is intended. While the legal scrutiny of data sharing arrangements is an important part of governance, improving the efficiency of these processes – without sacrificing their rigour – would allow data to be shared more quickly and at less expense.

Even when sharing is legal, the permissive nature of many legal gateways means significant cultural and organisational barriers to data sharing remain. Individual departments and agencies decide whether or not to share the data they hold and may be overly risk-averse. Data sharing may not be prioritised by a department if it would require it to bear costs to deliver benefits that accrue elsewhere (i.e. to those gaining access to the data). Departments sharing data may need to invest significant resources to do so, and to consider potential reputational or legal risks. This may hold up progress towards finding common agreement on data sharing. In the absence of incentives, even relatively small obstacles may mean data sharing is not deemed worthwhile by those who hold the data – despite the fact that other parts of the public sector might benefit significantly….(More)”.

Open Government Playbook


Gov.UK: “The document provides guidance and advice to help policy officials follow open government principles when carrying out their work…

The Playbook has been developed in response to growing demand from policymakers and from communications and digital professionals to integrate the principles of open government into their roles. The content of the Playbook was drafted using existing resources (see the ‘further reading’ section) and developed in consultation with open government experts from Involve and the Open Government Partnership….(More)”.

Ethical and Legal Aspects of Open Data Affecting Farmers


Report by Foteini Zampati et al.: “Open Data offers great potential for innovation from which the agricultural sector can benefit decisively, thanks to a wide range of possibilities for further use. However, there are many interlinked issues across the whole data value chain that affect the ability of farmers, especially the poorest and most vulnerable, to access, use and harness the benefits of data and data-driven technologies.

There are technical challenges, as well as ethical and legal ones. Of all these, the ethical and legal aspects related to farmers’ access to and use of data, and to the sharing of farmers’ data, have been the least explored.

We aimed to identify gaps and highlight the often complex legal issues related to open data in the areas of law (e.g. data ownership, data rights), policies, codes of conduct, data protection, intellectual property rights, licensing contracts and personal privacy.

This report is an output of the Kampala INSPIRE Hackathon 2020. The Hackathon addressed key topics identified by the IST-Africa 2020 conference, such as agriculture, environmental sustainability, collaborative open innovation, and ICT-enabled entrepreneurship.

The goal of the event was to continue to build on the efforts of the 2019 Nairobi INSPIRE Hackathon, further strengthening relationships between various EU projects and African communities. It was a successful event, with more than 200 participants representing 26 African countries. The INSPIRE Hackathons are not a competition; rather, the main focus is on building relationships, making rapid progress, and collecting ideas for future research and innovation….(More)”.