Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World


Paper by Jennifer King and Caroline Meinhardt: “In this paper, we present a series of arguments and predictions about how existing and future privacy and data protection regulation will impact the development and deployment of AI systems.

➜ Data is the foundation of all AI systems. Going forward, AI development will continue to increase developers’ hunger for training data, fueling an even greater race for data acquisition than we have already seen in past decades.

➜ Largely unrestrained data collection poses unique risks to privacy that extend beyond the individual level—they aggregate to pose societal-level harms that cannot be addressed through the exercise of individual data rights alone.

➜ While existing and proposed privacy legislation, grounded in the globally accepted Fair Information Practices (FIPs), implicitly regulates AI development, it is not sufficient to address the data acquisition race or the resulting individual and systemic privacy harms.

➜ Even legislation that contains explicit provisions on algorithmic decision-making and other forms of AI does not provide the data governance measures needed to meaningfully regulate the data used in AI systems.

➜ We present three suggestions for how to mitigate the risks to data privacy posed by the development and adoption of AI:

1. Denormalize data collection by default by shifting away from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms.

2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.

3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences…(More)”.
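The paper's suggestions stop at the policy level, and the excerpt does not specify how a data intermediary or permissioning infrastructure would work in practice. As a loose, hypothetical illustration only, the sketch below shows the kind of check such infrastructure might automate, combining suggestion 3's permissioning with the opt-in default urged in suggestion 1 (all names and structures are invented for illustration):

```python
# Hypothetical sketch of a data-permissioning check that a data
# intermediary could automate. Not from the paper; illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """A person's standing data preferences, held by a data intermediary."""
    subject_id: str
    # purpose -> explicitly granted? Under opt-in, absence means "no".
    grants: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.grants[purpose] = True

    def revoke(self, purpose: str) -> None:
        self.grants[purpose] = False

def may_process(record: ConsentRecord, purpose: str) -> bool:
    # Opt-in by default: processing is allowed only if this purpose
    # was affirmatively granted and not since revoked.
    return record.grants.get(purpose, False)

alice = ConsentRecord(subject_id="alice")
alice.grant("medical-research")
assert may_process(alice, "medical-research")
assert not may_process(alice, "targeted-advertising")  # never granted
```

The design choice worth noticing is the default: where today's opt-out regimes treat silence as consent, an opt-in default treats the absence of a grant as a denial.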

Research Project Management and Leadership


Book by P. Alison Paprica: “The project management approaches used by millions of people internationally are often too detailed or constraining to be applied to research. In this handbook, project management expert P. Alison Paprica presents guidance specifically developed to help with the planning, management, and leadership of research.

Research Project Management and Leadership provides simplified versions of globally utilized project management tools, such as the work breakdown structure to visualize scope, and offers guidance on processes, including a five-step process to identify and respond to risks. The complementary leadership guidance in the handbook is presented in the form of interview write-ups with 19 Canadian and international research leaders, each of whom describes a situation where leadership skills were important, how they responded, and what they learned. The accessible language and practical guidance in the handbook make it a valuable resource for everyone from principal investigators leading multimillion-dollar projects to graduate students planning their thesis research. The book aims to help readers understand which management and leadership tools, processes, and practices are helpful in different circumstances, and how to implement them in research settings…(More)”.

i.AI Consultation Analyser


New Tool by AI.Gov.UK: “Public consultations are a critical part of the process of making laws, but analysing consultation responses is complex and very time consuming. Working with the No10 data science team (10DS), the Incubator for Artificial Intelligence (i.AI) is developing a tool to make the process of analysing public responses to government consultations faster and fairer.

The Analyser uses AI and data science techniques to automatically extract patterns and themes from the responses, and turns them into dashboards for policy makers.

The goal is for computers to do what they are best at: finding patterns and analysing large amounts of data. That means humans are free to do the work of understanding those patterns.
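i.AI has not published the Analyser's internals, so the following is a generic illustration rather than the tool's actual method: one plausible way to "extract patterns and themes" from free-text responses is off-the-shelf topic modelling, here TF-IDF features factorised with non-negative matrix factorisation (sample responses are invented):

```python
# Generic sketch of automated theme extraction from consultation
# responses, using TF-IDF + NMF. Illustrative only; not i.AI's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

responses = [
    "I agree with the proposal but worry about enforcement costs.",
    "Strongly disagree: the burden on small businesses is too high.",
    "Enforcement must be properly funded or the policy will fail.",
    "Small businesses need longer transition periods.",
]

# Turn responses into TF-IDF vectors, then factorise into themes.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)
nmf = NMF(n_components=2, random_state=0)
weights = nmf.fit_transform(X)  # response-to-theme weights

# Print the top terms characterising each extracted theme.
terms = vectorizer.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top = [terms[j] for j in component.argsort()[-3:][::-1]]
    print(f"Theme {i}: {', '.join(top)}")
```

Counting responses per dominant theme is then enough to drive the kind of dashboard shown below.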

[Screenshot: donut chart of respondents who agree or disagree, and bar chart showing the popularity of prevalent themes]

Government runs 700-800 consultations a year on matters of importance to the public. Some are very small, but a large consultation might attract hundreds of thousands of written responses.

A consultation attracting 30,000 responses requires a team of around 25 analysts for 3 months to analyse the data and write the report. And it’s not unheard of to get double that number.

If we can apply automation in a way that is fair, effective and accountable, we could save most of the estimated £80m that this analysis currently costs government each year…(More)”

Participatory democracy in the EU should be strengthened with a Standing Citizens’ Assembly


Article by James Mackay and Kalypso Nicolaïdis: “EU citizens have multiple participatory instruments at their disposal, from the right to petition the European Parliament (EP) to the European Citizens’ Initiative (ECI), from the European Commission’s public online consultations and Citizens’ Dialogues to the role of the European Ombudsman as an advocate for the public vis-à-vis the EU institutions.

While these mechanisms are broadly welcome, they have – unfortunately – remained too timid and largely ineffective in bolstering bottom-up participation. They tend to involve experts and organised interest groups rather than ordinary citizens. They don’t encourage debate on non-experts’ policy preferences and are too often deployed at the discretion of political elites to justify pre-existing policy decisions.

In short, they feel more like consultative mechanisms than significant democratic innovations. That’s why the EU should be bold and demonstrate its democratic leadership by institutionalising its newly created Citizens’ Panels into a Standing Citizens’ Assembly with rotating membership chosen by lot and renewed on a regular basis…(More)”.

Blockchain and public service delivery: a lifetime cross-referenced model for e-government


Paper by Maxat Kassen: “The article presents the results of field studies, analysing the perspectives of blockchain developers on decentralised service delivery and elaborating on unique algorithms for lifetime ledgers that reliably and safely record e-government transactions in an intrinsically cross-referenced manner. The paper proposes and elaborates new technological niches of service delivery and emerging models of related data management, such as the generation of unique lifetime personal data profiles, blockchain-driven cross-referencing of e-government metadata, the parallel maintenance of serviceable ledgers for data identifiers, and the phenomenon of blockchain ‘black holes’ to ensure reliable protection of important public, corporate and civic information…(More)”.
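The paper's algorithms are described only at a high level in this excerpt. As a loose, hypothetical illustration of what a "lifetime cross-referenced" ledger could mean, the sketch below chains each record twice: to the previous record in the global ledger and to the previous record for the same citizen, so that a citizen's full lifetime profile can be walked back from the latest entry (all names are invented, and this is not the paper's actual design):

```python
# Minimal sketch of a doubly-chained, append-only ledger: each entry
# references the previous global entry and the previous entry for the
# same citizen. Illustrative only; not Kassen's published algorithm.
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a ledger record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class LifetimeLedger:
    def __init__(self):
        self.entries = []            # global, ordered ledger
        self.last_for_citizen = {}   # citizen_id -> hash of latest entry

    def append(self, citizen_id: str, service: str, payload: dict) -> dict:
        entry = {
            "citizen_id": citizen_id,
            "service": service,
            "payload": payload,
            "timestamp": time.time(),
            # Cross-references: previous global entry, and previous entry
            # in this citizen's lifetime chain (None if this is the first).
            "prev_global": record_hash(self.entries[-1]) if self.entries else None,
            "prev_personal": self.last_for_citizen.get(citizen_id),
        }
        self.entries.append(entry)
        self.last_for_citizen[citizen_id] = record_hash(entry)
        return entry

    def lifetime_profile(self, citizen_id: str) -> list:
        """Walk the citizen's personal chain from newest to oldest."""
        by_hash = {record_hash(e): e for e in self.entries}
        out, h = [], self.last_for_citizen.get(citizen_id)
        while h is not None:
            entry = by_hash[h]
            out.append(entry)
            h = entry["prev_personal"]
        return out
```

Because each entry embeds the hashes it references, tampering with any past record breaks both chains, which is the basic property a reliable e-government transaction log would need.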

How Mental Health Apps Are Handling Personal Information


Article by Erika Solis: “…Before diving into the privacy policies of mental health apps, it’s necessary to distinguish between “personal information” and “sensitive information,” which are both collected by such apps. Personal information can be defined as information that is “used to distinguish or trace an individual’s identity.” Sensitive information, however, can be any data that, if lost, misused, or illegally modified, may negatively affect an individual’s privacy rights. While health information not under HIPAA has previously been treated as general personal information, states like Washington are implementing strong legislation that will treat a wide range of health data as sensitive, with attendant stricter guidelines.

Legislation addressing the treatment of personal information and sensitive information varies around the world. Regulations like the General Data Protection Regulation (GDPR) in the EU, for example, require all types of personal information to be treated as being of equal importance, with certain special categories, including health data, having slightly elevated levels of protection. Meanwhile, U.S. federal laws are limited in addressing applicable protections of information provided to a third party, so mental health app companies based in the United States can approach personal information in all sorts of ways. For instance, Mindspa, an app with chatbots that are only intended to be used when a user is experiencing an emergency, and Elomia, a mental health app that’s meant to be used at any time, don’t make distinctions between these contexts in their privacy policies. They also don’t distinguish between the potentially different levels of sensitivity associated with ordinary and crisis use.

Wysa, on the other hand, clearly indicates how it protects personal information. Making a distinction between personal and sensitive data, its privacy policy notes that all health-based information receives additional protection. Similarly, Limbic labels everything as personal information but notes that health, genetic, and biometric data fall within a “special category” that requires more explicit consent than other personal information before it can be used…(More)”.
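None of these apps publish their code, so the following is purely illustrative of how a developer might encode the personal-versus-sensitive distinction the article describes, with GDPR-style special-category fields gated behind explicit consent (field names are invented):

```python
# Illustrative sketch of a sensitivity-tiered data model; not taken
# from any of the apps discussed above.
from enum import Enum

class Sensitivity(Enum):
    PERSONAL = 1          # e.g. name, email
    SPECIAL_CATEGORY = 2  # e.g. health, genetic, biometric data

FIELD_SENSITIVITY = {
    "email": Sensitivity.PERSONAL,
    "mood_log": Sensitivity.SPECIAL_CATEGORY,
    "heart_rate": Sensitivity.SPECIAL_CATEGORY,
}

def can_store(field_name: str, explicit_consent: bool) -> bool:
    """Special-category fields require explicit consent; others do not."""
    if FIELD_SENSITIVITY[field_name] is Sensitivity.SPECIAL_CATEGORY:
        return explicit_consent
    return True

print(can_store("email", explicit_consent=False))     # True
print(can_store("mood_log", explicit_consent=False))  # False
```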

How Big Tech let down Navalny


Article by Ellery Roberts Biddle: “As if the world needed another reminder of the brutality of Vladimir Putin’s Russia, last Friday we learned of the untimely death of Alexei Navalny. I don’t know if he ever used the term, but Navalny was what Chinese bloggers might have called a true “netizen” — a person who used the internet to live out democratic values and systems that didn’t exist in their country.

Navalny’s work with the Anti-Corruption Foundation reached millions using major platforms like YouTube and LiveJournal. But they built plenty of their own technology too. One of their most famous innovations was “Smart Voting,” a system that could estimate which opposition candidates were most likely to beat out the ruling party in a given election. The strategy wasn’t to support a specific opposition party or candidate — it was simply to unseat members of the ruling party, United Russia. In regional races in 2020, it was credited with causing United Russia to lose its majority in state legislatures in Novosibirsk, Tambov and Tomsk.

The Smart Voting system was pretty simple — just before casting a ballot, any voter could check the website or the app to decide where to throw their support. But on the eve of national parliamentary elections in September 2021, Smart Voting suddenly vanished from the app stores for both Google and Apple. 
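Navalny's team never published the Smart Voting methodology, so the toy sketch below captures only the core idea described above: in each district, recommend the single non-ruling-party candidate with the best estimated chance (candidates and support estimates are invented):

```python
# Toy illustration of the Smart Voting idea: back the strongest
# opposition candidate per district. Not the actual methodology.
RULING_PARTY = "United Russia"

districts = {
    "District 1": [
        ("Candidate A", "United Russia", 0.41),
        ("Candidate B", "KPRF", 0.33),
        ("Candidate C", "Yabloko", 0.12),
    ],
    "District 2": [
        ("Candidate D", "United Russia", 0.38),
        ("Candidate E", "LDPR", 0.29),
        ("Candidate F", "KPRF", 0.27),
    ],
}

def smart_vote(candidates):
    """Pick the opposition candidate with the highest estimated support."""
    opposition = [c for c in candidates if c[1] != RULING_PARTY]
    return max(opposition, key=lambda c: c[2])

for district, candidates in districts.items():
    name, party, share = smart_vote(candidates)
    print(f"{district}: back {name} ({party}, est. {share:.0%})")
```

The strategic point is visible even in the toy version: the recommendation ignores the voter's own party preference entirely and concentrates opposition votes on whoever is best placed to unseat the incumbent party.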

After a Moscow court banned Navalny’s organization for being “extremist,” Russia’s internet regulator demanded that both Apple and Google remove Smart Voting from their app stores. The companies bowed to the Kremlin and complied. YouTube blocked select Navalny videos in Russia and Google, its parent company, even blocked some public Google Docs that the Navalny team published to promote names of alternative candidates in the election. 

We will never know whether Navalny’s innovative use of technology to stand up to the dictator would have worked. But Silicon Valley’s decision to side with Putin was an important part of why Navalny’s plan failed…(More)”.

The US Is Jeopardizing the Open Internet


Article by Natalie Dunleavy Campbell & Stan Adams: “Last October, the United States Trade Representative (USTR) abandoned its longstanding demand for World Trade Organization provisions to protect cross-border data flows, prevent forced data localization, safeguard source codes, and prohibit countries from discriminating against digital products based on nationality. It was a shocking shift: one that jeopardizes the very survival of the open internet, with all the knowledge-sharing, global collaboration, and cross-border commerce that it enables.

The USTR says that the change was necessary because of a mistaken belief that trade provisions could hinder the ability of the US Congress to respond to calls for regulation of Big Tech firms and artificial intelligence. But trade agreements already include exceptions for legitimate public-policy concerns, and Congress itself has produced research showing that trade deals cannot impede its policy aspirations. Simply put, the US – as with other countries involved in WTO deals – can regulate its digital sector without abandoning its critical role as a champion of the open internet.

The potential consequences of America’s policy shift are as far-reaching as they are dangerous. Fear of damaging trade ties with the US has long deterred other actors from imposing national borders on the internet. Now, those who have heard the siren song of supposed “digital sovereignty” as a means to ensure their laws are obeyed in the digital realm have less reason to resist it. The more digital walls come up, the less the walled-off portions resemble the internet.

Several countries are already trying to replicate China’s heavy-handed approach to data governance. Rwanda’s data-protection law, for instance, forces companies to store data within its borders unless otherwise permitted by its cybersecurity regulator – making personal data vulnerable to authorities known to use data from private messages to prosecute dissidents. At the same time, a growing number of democratic countries are considering regulations that, without strong safeguards for cross-border data flows, could have a similar effect of disrupting access to a truly open internet…(More)”.

Data as a catalyst for philanthropy


Article by Stefaan Verhulst: “…In what follows, we offer five thoughts on how to advance Data Driven Philanthropy. These are operational strategies, specific steps that philanthropic organisations can take in order to harness the potential of data for the public good. At its broadest level, then, this article is about data stewardship in the 21st century. We seek to define how philanthropic organisations can be responsible custodians of data assets, both theirs and those of society at large. Fulfilling this role of data stewardship is a critical mission for the philanthropic sector and one of the most important roles it can play in helping to ensure that our ongoing process of digital transformation is more fair, inclusive, and aligned with the broader public interest…(More)”.

Unlocking Technology for Peacebuilding: The Munich Security Conference’s Role in Empowering a Peacetech Movement


Article by Stefaan Verhulst and Artur Kluz: “This week’s annual Munich Security Conference is taking place amid a turbulent backdrop. The so-called “peace dividend” that followed the end of the Cold War has long since faded. From Ukraine to Sudan to the Middle East, we are living in an era marked by increasingly unstable geopolitics and renewed – and new forms of – violent conflict. Recently, the Uppsala Conflict Data Program, which has measured war since 1945, identified 2023 as the worst year on record since the Cold War. As the Foreword to the Munich Security Report, issued alongside the Conference, notes: “Unfortunately, this year’s report reflects a downward trend in world politics, marked by an increase in geopolitical tensions and economic uncertainty.”

As we enter deeper into this violent era, it is worth considering the role of technology. It is perhaps no coincidence that a moment of growing peril and division coincides with the increasing penetration of technologies such as smartphones and social media, or with the emergence of new technologies such as artificial intelligence (AI) and virtual reality. In addition, the actions of satellite operators and cross-border digital payment networks have been thrust into the limelight, with their roles in enabling or precipitating conflict attracting increasing scrutiny. Today, it appears increasingly clear that transnational tech actors–and technology itself–are playing a more significant role in geopolitical conflict than ever before. As the Munich Security Report notes, “Technology has gone from being a driver of global prosperity to being a central means of geopolitical competition.”

It doesn’t have to be this way. While much attention is paid to technology’s negative capabilities, this article argues that technology can also play a more positive role, through the contributions of what is sometimes referred to as Peacetech. Peacetech is an emerging field, encompassing technologies as varied as early warning systems, AI-driven predictions, and citizen journalism platforms. Broadly, its aims can be described as preventing conflict, mediating disputes, mitigating human suffering, and protecting human dignity and universal human rights. In the words of the United Nations Institute for Disarmament Research (UNIDIR), “Peacetech aims to leverage technology to drive peace while also developing strategies to prevent technology from being used to enable violence.”

This article is intended as a call to those attending the Munich Security Conference, a global geopolitical forum for peacebuilding, to prioritize Peacetech. Highlighting recent concerns over the role of technology in conflict – with a particular emphasis on the destructive potential of AI and satellite systems – we argue instead for technology’s positive potential to promote peace and mitigate conflict. In particular, we suggest the need for a realignment in how policymakers and other stakeholders approach and fund technology, to foster its peaceful rather than destructive potential. This realignment would bring out the best in technology; it would harness technology toward the greater public good at a time of rising geopolitical uncertainty and instability…(More)”.