Four Principles for Integrating AI & Good Governance


Oxford Commission on AI and Good Governance: “Many governments, public agencies and institutions already employ AI in providing public services, distributing resources and delivering governance goods. In the public sector, AI-enabled governance may afford new efficiencies that have the potential to transform a wide array of public service tasks.
But short-sighted design and use of AI can create new problems, entrench existing inequalities, and calcify and ultimately undermine government organizations.

Frameworks for the procurement and implementation of AI in public service have largely remained undeveloped. Frequently, existing regulations and national laws are no longer fit for purpose to ensure good behaviour (of either AI or private suppliers) and are ill-equipped to provide guidance on the democratic use of AI.
As technology evolves rapidly, we need rules to guide the use of AI in ways that safeguard democratic values. Under what conditions can AI be put into service for good governance?

We offer a framework for integrating AI with good governance. We believe that with dedicated attention and evidence-based policy research, it should be possible to overcome the combined technical and organizational challenges of successfully integrating AI with good governance. Doing so requires working towards:


  • Inclusive Design: issues around discrimination and bias of AI in relation to inadequate data sets, exclusion of minorities and under-represented groups, and the lack of diversity in design.
  • Informed Procurement: issues around the acquisition and development of AI in relation to due diligence, design and usability specifications, and the assessment of risks and benefits.
  • Purposeful Implementation: issues around the use of AI in relation to interoperability, training needs for public servants, and integration with decision-making processes.
  • Persistent Accountability: issues around the accountability and transparency of AI in relation to ‘black box’ algorithms, the interpretability and explainability of systems, and monitoring and auditing…(More)”

Tackling the misinformation epidemic with “In Event of Moon Disaster”


MIT Open Learning: “Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be….

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources….(More)”.

Digital inequalities 3.0: Emergent inequalities in the information age


Essay by Laura Robinson et al in FirstMonday: “Marking the 25th anniversary of the “digital divide,” we continue our metaphor of the digital inequality stack by mapping out the rapidly evolving nature of digital inequality using a broad lens. We tackle complex, and often unseen, inequalities spawned by the platform economy, automation, big data, algorithms, cybercrime, cybersafety, gaming, emotional well-being, assistive technologies, civic engagement, and mobility. These inequalities are woven throughout the digital inequality stack in many ways including differentiated access, use, consumption, literacies, skills, and production. While many users are competent prosumers who nimbly work within different layers of the stack, very few individuals are “full stack engineers” able to create or recreate digital devices, networks, and software platforms as pure producers. This new frontier of digital inequalities further differentiates digitally skilled creators from mere users. Therefore, we document emergent forms of inequality that radically diminish individuals’ agency and augment the power of technology creators, big tech, and other already powerful social actors whose dominance is increasing….(More)”

Adolescent Mental Health: Using A Participatory Mapping Methodology to Identify Key Priorities for Data Collaboration


Blog by Alexandra Shaw, Andrew J. Zahuranec, Andrew Young, Stefaan G. Verhulst, Jennifer Requejo, Liliana Carvajal: “Adolescence is a unique stage of life. The brain undergoes rapid development; individuals face new experiences, relationships, and environments. These events can be exciting, but they can also be a source of instability and hardship. Half of all mental health conditions manifest by early adolescence, and between 10 and 20 percent of all children and adolescents report mental health conditions. Despite the increased risks to adolescents’ well-being, significant gaps remain in the availability of country-level data for policymakers to address these issues.

In June, The GovLab partnered with colleagues at UNICEF’s Health and HIV team in the Division of Data, Analysis, Planning & Monitoring and the Data for Children Collaborative — a collaboration between UNICEF, the Scottish Government, and the University of Edinburgh — to design and apply a new methodology of participatory mapping and prioritization of key topics and issues associated with adolescent mental health that could be addressed through enhanced data collaboration….

The event led to three main takeaways. First, the topic mapping allowed participants to deliberate on and prioritize the various aspects of adolescent mental health in a more holistic manner. Unlike the “blind men and the elephant” parable, a topic map allows participants to see and discuss the interrelated parts of the topic, including those with which they might be less familiar.

Second, the workshops demonstrated the importance of tapping into distributed expertise via participatory processes. While the topic map provided a starting point, the inclusion of various experts allowed the findings of the document to be reviewed in a rapid, legitimate fashion. The diverse inputs helped ensure the individual aspects could be prioritized without a perspective being ignored.

Lastly, the approach showed the importance of data initiatives being driven and validated by the individuals who represent the demand side. By soliciting input from those who would actually use the data, the methodology ensured data initiatives focused on the aspects thought to be most relevant and of greatest importance….(More)”

Addressing trust in public sector data use


Centre for Data Ethics and Innovation: “Data sharing is fundamental to effective government and the running of public services. But it is not an end in itself. Data needs to be shared to drive improvements in service delivery and benefit citizens. For this to happen sustainably and effectively, public trust in the way data is shared and used is vital. Without such trust, the government and wider public sector risk losing society’s consent, setting back innovation as well as the smooth running of public services. Maximising the benefits of data-driven technology therefore requires a solid foundation of societal approval.

AI and data-driven technology offer extraordinary potential to improve decision making and service delivery in the public sector – from improved diagnostics to more efficient infrastructure and personalised public services. This makes effective use of data more important than ever, and requires a step-change in the way data is shared and used. Yet sharing more data also poses risks and challenges to current governance arrangements.

The only way to build trust sustainably is to operate in a trustworthy way. Without adequate safeguards, the collection and use of personal data risks changing power relationships between the citizen and the state. Insights derived from big data and the matching of different data sets can also undermine individual privacy or personal autonomy. Trade-offs are required that reflect democratic values, wider public acceptability and a shared vision of a data-driven society. CDEI has a key role to play in exploring this challenge and setting out how it can be addressed. This report identifies barriers to data sharing, but focuses on building and sustaining the public trust that is vital if society is to maximise the benefits of data-driven technology.

There are many areas where the sharing of anonymised and identifiable personal data by the public sector already improves services, prevents harm, and benefits the public. Over the last 20 years, successive governments have adopted various measures to increase data sharing, including creating new legal gateways for data sharing. However, despite these efforts across government, and significant successes in areas like open data, data sharing remains challenging and resource-intensive. This report identifies a range of technical, legal and cultural barriers that can inhibit it.

Barriers to data sharing in the public sector

Technical barriers include limited adoption of common data standards and inconsistent security requirements across the public sector. Such inconsistency can prevent data sharing, or increase the cost and time for organisations to finalise data sharing agreements.

While there are often pre-existing legal gateways for data sharing, underpinned by data protection legislation, there is still considerable legal confusion among public sector bodies wishing to share data, which can cause them to start from scratch when determining legality and to commit significant resources to legal advice. It is not unusual for the development of data sharing agreements to delay the projects for which the data is intended. While legal scrutiny of data sharing arrangements is an important part of governance, improving the efficiency of these processes – without sacrificing their rigour – would allow data to be shared more quickly and at less expense.

Even when data sharing is legal, the permissive nature of many legal gateways means significant cultural and organisational barriers remain. Individual departments and agencies decide whether or not to share the data they hold, and may be overly risk averse. A department may not prioritise data sharing if it would have to bear costs to deliver benefits that accrue elsewhere (i.e. to those gaining access to the data). Departments sharing data may need to invest significant resources to do so, as well as consider potential reputational or legal risks. This can hold up progress towards common agreement on data sharing. In the absence of incentives, even relatively small obstacles may mean data sharing is not deemed worthwhile by those who hold the data – despite the fact that other parts of the public sector might benefit significantly….(More)”.

Open Government Playbook


Gov.UK: “The document provides guidance and advice to help policy officials follow open government principles when carrying out their work…

The Playbook was developed in response to growing demand from policy, communications, and digital professionals to integrate the principles of open government into their roles. The content of the Playbook was drafted using existing resources (see the ‘further reading’ section) and developed in consultation with open government experts from Involve and the Open Government Partnership….(More)”.

Ethical and Legal Aspects of Open Data Affecting Farmers


Report by Foteini Zampati et al: “Open Data offers great potential for innovation from which the agricultural sector can benefit decisively, thanks to a wide range of possibilities for further use. However, many inter-linked issues across the data value chain affect the ability of farmers, especially the poorest and most vulnerable, to access, use and harness the benefits of data and data-driven technologies.

There are technical challenges, as well as ethical and legal ones. Of these, the ethical and legal aspects of farmers accessing and using data, and of sharing farmers’ data, have been less explored.

We aimed to identify gaps and highlight the often-complex legal issues related to open data in the areas of law (e.g. data ownership, data rights), policies, codes of conduct, data protection, intellectual property rights, licensing contracts and personal privacy.

This report is an output of the Kampala INSPIRE Hackathon 2020. The Hackathon addressed key topics identified by the IST-Africa 2020 conference, such as: Agriculture, environmental sustainability, collaborative open innovation, and ICT-enabled entrepreneurship.

The goal of the event was to continue to build on the efforts of the 2019 Nairobi INSPIRE Hackathon, further strengthening relationships between various EU projects and African communities. It was a successful event, with more than 200 participants representing 26 African countries. The INSPIRE Hackathons are not a competition; rather, the main focus is building relationships, making rapid progress, and collecting ideas for future research and innovation….(More)”.

How to engage with policy makers: A guide for academics in the arts and humanities


Institute for Government: “The Arts and Humanities Research Council and the Institute for Government have been working in partnership for six years on the Engaging with Government programme – a three-day course for researchers in the arts and humanities. This programme helps academics develop the knowledge and skills they need to engage effectively with government and parliamentary bodies at all levels, along with the other organisations involved in the policy-making process. We, in turn, have learned a huge amount from our participants, who now form an active alumni network brimming with expertise about how to engage with policy in practice. This guide brings together some of that learning.

Arts and humanities researchers tend to have fewer formal and established routes into government than scientists. But they can, and do, engage productively in policy making. They contribute both expertise (advice based on knowledge of a field) and evidence (facts and information) and provide new ways of framing policy debates that draw on philosophical, cultural or historical perspectives.

As this guide shows, there are steps that academics can take to improve their engagement with public policy and to make it meaningful for their research. While these activities may involve an investment of time, they offer the opportunity to make a tangible difference, and are often a source of great satisfaction and inspiration for further work.

The first part of this guide describes the landscape of policy making in the UK and some of the common ways academics can engage with it.

Part two sets out six lessons from the Engaging with Government programme, illustrated with practical examples from our alumni and speaker network. These lessons are:

  • Understand the full range of individuals and groups involved in policy making – who are the key players and who do they talk to?
  • Be aware of the political context – how does your research fit in with current thinking on the issue?
  • Communicate in ways that policy makers find useful – consider your audience and be prepared to make practical recommendations.
  • Develop and maintain networks – seek to make connections with people who share your policy interest, both in person and online.
  • Remember that you are the expert – be prepared to share your general knowledge of a subject as well as your specific research.
  • Adopt a long-term perspective – you will need to be open-minded and patient to engage successfully….(More)”.

Indigenous Protocol and Artificial Intelligence


Indigenous Protocol and Artificial Intelligence Working Group: “This position paper on Indigenous Protocol (IP) and Artificial Intelligence (AI) is a starting place for those who want to design and create AI from an ethical position that centers Indigenous concerns. Each Indigenous community will have its own particular approach to the questions we raise in what follows. What we have written here is not a substitute for establishing and maintaining relationships of reciprocal care and support with specific Indigenous communities. Rather, this document offers a range of ideas to take into consideration when entering into conversations which prioritize Indigenous perspectives in the development of artificial intelligence.

The position paper is an attempt to capture multiple layers of a discussion that happened over 20 months, across 20 time zones, during two workshops, and between Indigenous people (and a few non-Indigenous folks) from diverse communities in Aotearoa, Australia, North America, and the Pacific.

Our aim, however, is not to provide a unified voice. Indigenous ways of knowing are rooted in distinct, sovereign territories across the planet. These extremely diverse landscapes and histories have influenced different communities and their discrete cultural protocols over time. A single ‘Indigenous perspective’ does not exist, as epistemologies are motivated and shaped by the grounding of specific communities in particular territories. Historically, scholarly traditions that homogenize diverse Indigenous cultural practices have resulted in ontological and epistemological violence, and a flattening of the rich texture and variability of Indigenous thought….(More)”.

The Data Delusion: Protecting Individual Data is Not Enough When the Harm is Collective


Essay by Martin Tisné: “On March 17, 2018, questions about data privacy exploded with the scandal surrounding the previously little-known consulting company Cambridge Analytica. Lawmakers are still grappling with updating laws to counter the harms of big data and AI. In the spring of 2020, the Covid-19 pandemic brought questions about adequate legal protections back into public debate, with urgent warnings about the privacy implications of contact tracing apps. But the surveillance consequences of the pandemic’s aftermath are much bigger than any app: transport, education, health systems and offices are being turned into vast surveillance networks. If we consider only individual trade-offs between privacy sacrifices and alleged health benefits, we will miss the point. The collective nature of big data means people are more affected by other people’s data than by data about themselves. Like climate change, the threat is societal and personal.

In the era of big data and AI, people can suffer because of how the sum of individual data is analysed and sorted into groups by algorithms. Novel forms of collective data-driven harm are appearing as a result: online housing, job and credit ads that discriminate on the basis of race and gender; women disqualified from jobs on the basis of gender; and foreign actors targeting right-leaning groups and pulling them towards the far right. Our public debate, governments, and laws are ill-equipped to deal with these collective, as opposed to individual, harms….(More)”.