Data Privacy Budget and Solutions Forecast


Survey by FTI Consulting: “…reported significant increases in spend on data privacy-related programs. Though respondents are increasing their emphasis on privacy compliance, the results showed that many are also willing to take risks in the interest of tapping into the value of their data. Still others believe that “good faith” efforts will improve their position with regulators. Key findings include:

  • 97 percent of organizations will increase their spend on data privacy in the coming year, with nearly one-third indicating plans to increase budgets by 90 percent or more.
  • 78 percent agreed with the statement: “The value of data is encouraging organizations to find ways to avoid complying fully with data privacy regulation.”
  • 87 percent of respondents believed that steps toward compliance will mitigate regulatory scrutiny. More than half strongly agreed with this idea.
  • 44 percent said they expect lack of awareness and training to be the key data privacy challenge of the coming year.             

In terms of solutions, respondents indicated a diverse array of techniques for the coming year, and only 6 percent said they had no plans for change. The top-rated solutions set for implementation over the next 12 months included establishing a clear, consistent set of data privacy standards, updating agreements and contracts with external parties, reviewing standard data privacy practices of supply chains and building privacy-by-design programs….(More)”.

Big data, privacy and COVID-19 – learning from humanitarian expertise in data protection


Andrej Zwitter & Oskar J. Gstrein at the Journal of International Humanitarian Action: “The use of location data to control the coronavirus pandemic can be fruitful and might improve the ability of governments and research institutions to combat the threat more quickly. It is important to note that location data is not the only useful data that can be used to curb the current crisis. Genetic data can be relevant for AI-enhanced searches for vaccines, and monitoring online communication on social media might be helpful to keep an eye on peace and security (Taulli n.d.). However, the use of such large amounts of data comes at a price for individual freedom and collective autonomy. The risks of the use of such data should ideally be mitigated through dedicated legal frameworks which describe the purpose and objectives of data use, its collection, analysis, storage and sharing, as well as the erasure of ‘raw’ data once insights have been extracted. In the absence of such clear and democratically legitimized norms, one can only resort to fundamental rights provisions such as Article 8 paragraph 2 of the ECHR, which reminds us that any infringement of rights such as privacy needs to be in accordance with law, necessary in a democratic society, pursuing a legitimate objective and proportionate in its application.

However, as shown above, legal frameworks including human rights standards are currently not capable of effectively ensuring data protection, since they focus too much on the individual as the point of departure. Hence, we submit that currently applicable guidelines and standards for responsible data use in the humanitarian sector should also be fully applicable to corporate, academic and state efforts currently undertaken to curb the COVID-19 crisis globally. Instead of ‘re-calibrating’ the expectations of individuals on their own privacy and collective autonomy, the requirements for the use of data should be broader and more comprehensive. Applicable principles and standards as developed by OCHA, the 510 project of the Dutch Red Cross, or by academic initiatives such as the Signal Code are valid minimum standards during a humanitarian crisis. Hence, they are also applicable minimum standards during the current pandemic.

Core findings that can be extracted from these guidelines and standards for practical implementation in data-driven responses to COVID-19 are:

  • data sensitivity is highly contextual; the same data can be sensitive in one context and innocuous in another. Location data during the current pandemic might be very useful for epidemiological analysis. However, if (ab-)used to re-calibrate political power relations, data can be open to misuse. Hence, any party supplying data or data analysis needs to check whether data and insights can be misused in the context in which they are presented.
  • privacy and data protection are important values; they do not disappear during a crisis. Nevertheless, they have to be weighed against respective benefits and risks.
  • data-breaches are inevitable; with time (t) approaching infinity, the chance of any system being hacked or becoming insecure approaches 100%. Hence, it is not a question of whether, but when. Therefore, organisations have to prepare sound data retention and deletion policies.
  • data ethics is an obligation to provide high quality analysis; using machine learning and big data might be appealing for the moment, but the quality of source data might be low, and results might be unreliable, or even harmful. Biases in incomplete datasets, algorithms and human users are abundant and widely discussed. We must not forget that in times of crisis, the risk of bias is more pronounced, and more problematic due to the vulnerability of data subjects and groups. Therefore, working to the highest standards of data processing and analysis is an ethical obligation.
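The breach-inevitability point in the list above can be sketched numerically. Under the simplifying, hypothetical assumption that each year carries a small independent probability p of a serious breach, the cumulative chance of at least one breach is 1 − (1 − p)^t, which tends to 1 as t grows; the parameter values below are illustrative, not from the source:

```python
def cumulative_breach_probability(p_per_year: float, years: int) -> float:
    """Probability of at least one breach within `years` years,
    assuming independent annual breach probability `p_per_year`."""
    return 1.0 - (1.0 - p_per_year) ** years

if __name__ == "__main__":
    # Even a modest 5% annual risk compounds toward certainty over time.
    for t in (1, 10, 50, 100):
        print(t, round(cumulative_breach_probability(0.05, t), 3))
```

The exact annual probability is unknowable in practice; the point is only that any nonzero rate compounds, which is why retention and deletion policies matter more than hoping a system stays unbreached.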

The adherence to these principles is particularly relevant in times of crisis such as now, where they mark the difference between societies that focus on control and repression on the one hand, and those that believe in freedom and autonomy on the other. Eventually, we will need to think about including data policies in legal frameworks for state-of-emergency regulations, and coordinate with corporate stakeholders as well as private organisations on how best to deal with such crises. Data-driven practices have to be used in a responsible manner. Furthermore, it will be important to observe whether data practices and surveillance assemblages introduced under current circumstances will be rolled back to the status quo ante when returning to normalcy. If not, our rights will become hollowed out, just waiting for the next crisis to eventually become irrelevant….(More)”.

Governing Privacy in the Datafied City


Paper by Ira Rubinstein and Bilyana Petkova: “Privacy — understood in terms of freedom from identification, surveillance and profiling — is a precondition of the diversity and tolerance that define the urban experience. But with “smart” technologies eroding the anonymity of city sidewalks and streets, and turning them into surveilled spaces, are cities the first to get caught in the line of fire? Alternatively, are cities the final bastions of privacy? Will the interaction of tech companies and city governments lead cities worldwide to converge around the privatization of public spaces and monetization of data with little to no privacy protections? Or will we see different city identities take root based on local resistance and legal action?

This Article delves into these questions from a federalist and localist angle. In contrast to other fields in which American cities lack the formal authority to govern, we show that cities still enjoy ample powers when it comes to privacy regulation. Fiscal concerns, rather than state or federal preemption, play a role in privacy regulation, and the question becomes one of how cities make use of existing powers. Populous cosmopolitan cities, with a sizeable market share and significant political and cultural clout, are in particularly noteworthy positions to take advantage of agglomeration effects and drive hard deals when interacting with private firms. Nevertheless, there are currently no privacy front runners or privacy laggards; instead, cities engage in “privacy activism” and “data stewardship.”

First, as privacy activists, U.S. cities use public interest litigation to defend their citizens’ personal information in high-profile political participation and consumer protection cases. Examples include legal challenges to the citizenship question in the 2020 Census, and to data incidents including Facebook’s third-party data-sharing practices and the Equifax data breach. We link the 2020 Census data wars to sanctuary cities’ battles with the federal administration to demonstrate that political dissent and cities’ social capital — diversity — are intrinsically linked to privacy. Regarding the string of data breach cases, cities expand their experimentation zone by litigating privacy interests against private parties.

Second, cities as data stewards use data to regulate their urban environment. As providers of municipal services, they collect, analyze and act on a broad range of data about local citizens or cut deals with tech companies to enhance transit, housing, utility, telecom, and environmental services by making them smart while requiring firms like Uber and Airbnb to share data with city officials. This has proven contentious at times but in both North American and European cities, open data and more cooperative forms of data sharing between the city, commercial actors, and the public have emerged, spearheaded by a transportation data trust in Seattle. This Article contrasts the Seattle approach with the governance and privacy deficiencies accompanying the privately-led Quayside smart city project in Toronto. Finally, this Article finds the data trust model of data sharing to hold promise, not least since the European rhetoric of exclusively city-owned data presented by Barcelona might prove difficult to realize in practice….(More)”.

An Artificial Revolution: On Power, Politics and AI


Book by Ivana Bartoletti: “AI has unparalleled transformative potential to reshape society but without legal scrutiny, international oversight and public debate, we are sleepwalking into a future written by algorithms which encode regressive biases into our daily lives. As governments and corporations worldwide embrace AI technologies in pursuit of efficiency and profit, we are at risk of losing our common humanity: an attack that is as insidious as it is pervasive.

Leading privacy expert Ivana Bartoletti exposes the reality behind the AI revolution, from the low-paid workers who train algorithms to recognise cancerous polyps, to the rise of data violence and the symbiotic relationship between AI and right-wing populism.

Impassioned and timely, An Artificial Revolution is an essential primer to understand the intersection of technology and geopolitical forces shaping the future of civilisation, and the political response that will be required to ensure the protection of democracy and human rights….(More)”.

Examining the Black Box: Tools for Assessing Algorithmic Systems


Report by the Ada Lovelace Institute and DataKind UK: “As algorithmic systems become more critical to decision making across many parts of society, there is increasing interest in how they can be scrutinised and assessed for societal impact, and regulatory and normative compliance.

This report is primarily aimed at policymakers, to inform more accurate and focused policy conversations. It may also be helpful to anyone who creates, commissions or interacts with an algorithmic system and wants to know what methods or approaches exist to assess and evaluate that system…

Clarifying terms and approaches

Through literature review and conversations with experts from a range of disciplines, we’ve identified four prominent approaches to assessing algorithms that are often referred to by just two terms: algorithm audit and algorithmic impact assessment. But there is not always agreement on what these terms mean among different communities: social scientists, computer scientists, policymakers and the general public have different interpretations and frames of reference.

While there is broad enthusiasm among policymakers for algorithm audits and impact assessments, there is often a lack of detail about the approaches being discussed. This stems both from the confusion of terms and from the differing maturity of the approaches the terms describe.

Clarifying which approach we’re referring to, as well as where further research is needed, will help policymakers and practitioners to do the more vital work of building evidence and methodology to take these approaches forward.

We focus on algorithm audit and algorithmic impact assessment. For each, we identify two key approaches the terms can be interpreted as:

  • Algorithm audit
    • Bias audit: a targeted, non-comprehensive approach focused on assessing algorithmic systems for bias
    • Regulatory inspection: a broad approach, focused on an algorithmic system’s compliance with regulation or norms, necessitating a number of different tools and methods; typically performed by regulators or auditing professionals
  • Algorithmic impact assessment
    • Algorithmic risk assessment: assessing possible societal impacts of an algorithmic system before the system is in use (with ongoing monitoring often advised)
    • Algorithmic impact evaluation: assessing possible societal impacts of an algorithmic system on the users or population it affects after it is in use…(More)”.

Responsible Data Toolkit


Andrew Young at The GovLab: “The GovLab and UNICEF, as part of the Responsible Data for Children initiative (RD4C), are pleased to share a set of user-friendly tools to support organizations and practitioners seeking to operationalize the RD4C Principles. These principles—Purpose-Driven, People-Centric, Participatory, Protective of Children’s Rights, Proportional, Professionally Accountable, and Prevention of Harms Across the Data Lifecycle—are especially important in the current moment, as actors around the world are taking a data-driven approach to the fight against COVID-19.

The initial components of the RD4C Toolkit are:

The RD4C Data Ecosystem Mapping Tool is intended to help users identify the systems generating data about children and the key components of those systems. After using this tool, users will be positioned to understand the breadth of data they generate and hold about children; assess data systems’ redundancies or gaps; identify opportunities for responsible data use; and achieve other insights.

The RD4C Decision Provenance Mapping methodology provides a way for actors designing or assessing data investments for children to identify key decision points and determine which internal and external parties influence those decision points. This distillation can help users to pinpoint any gaps and develop strategies for improving decision-making processes and advancing more professionally accountable data practices.

The RD4C Opportunity and Risk Diagnostic provides organizations with a way to take stock of the RD4C principles and how they might be realized as an organization reviews a data project or system. Its high-level questions and prompts are intended to help users identify areas in need of attention and to strategize next steps for ensuring more responsible handling of data for and about children across their organization.

Finally, the Data for Children Collaborative with UNICEF developed an Ethical Assessment that “forms part of [their] safe data ecosystem, alongside data management and data protection policies and practices.” The tool reflects the RD4C Principles and aims to “provide an opportunity for project teams to reflect on the material consequences of their actions, and how their work will have real impacts on children’s lives.”

RD4C launched in October 2019 with the release of the RD4C Synthesis Report, Selected Readings, and the RD4C Principles. Last month we published The RD4C Case Studies, which analyze data systems deployed in diverse country environments, with a focus on their alignment with the RD4C Principles. The case studies are: Romania’s The Aurora Project, Childline Kenya, and Afghanistan’s Nutrition Online Database.

To learn more about Responsible Data for Children, visit rd4c.org or contact rd4c [at] thegovlab.org. To join the RD4C conversation and be alerted to future releases, subscribe at this link.”

Digital tools against COVID-19: Framing the ethical challenges and how to address them


Paper by Urs Gasser et al: “Data collection and processing via digital public health technologies are being promoted worldwide by governments and private companies as strategic remedies for mitigating the COVID-19 pandemic and loosening lockdown measures. However, the ethical and legal boundaries of deploying digital tools for disease surveillance and control purposes are unclear, and a rapidly evolving debate has emerged globally around the promises and risks of mobilizing digital tools for public health. To help scientists and policymakers navigate technological and ethical uncertainty, we present a typology of the primary digital public health applications currently in use: proximity and contact tracing, symptom monitoring, quarantine control, and flow modeling. For each, we discuss context-specific risks, cross-sectional issues, and ethical concerns. Finally, in recognition of the need for practical guidance, we propose a navigation aid for policymakers made up of ten steps for the ethical use of digital public health tools….(More)”.

Can We Track COVID-19 and Protect Privacy at the Same Time?


Sue Halpern at the New Yorker: “…Location data are the bread and butter of “ad tech.” They let marketers know you recently shopped for running shoes, are trying to lose weight, and have an abiding affection for kettle corn. Apps on cell phones emit a constant trail of longitude and latitude readings, making it possible to follow consumers through time and space. Location data are often triangulated with other, seemingly innocuous slivers of personal information—so many, in fact, that a number of data brokers claim to have around five thousand data points on almost every American. It’s a lucrative business—by at least one estimate, the data-brokerage industry is worth two hundred billion dollars. Though the data are often anonymized, a number of studies have shown that they can be easily unmasked to reveal identities—names, addresses, phone numbers, and any number of intimacies.
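The ease of unmasking that such studies describe can be illustrated with a toy simulation. Everything below is synthetic and the parameters are made up for illustration, not drawn from the article: it shows only that when each “anonymous” trace is a set of (place, hour) observations, a handful of externally known points about a person is often enough to single out their trace among hundreds.

```python
import random

random.seed(0)
N_USERS, N_PLACES, N_HOURS, TRACE_LEN = 500, 50, 24, 40

# Each "anonymized" user is reduced to a set of (place, hour) observations.
traces = [
    {(random.randrange(N_PLACES), random.randrange(N_HOURS)) for _ in range(TRACE_LEN)}
    for _ in range(N_USERS)
]

def unique_fraction(k: int) -> float:
    """Fraction of users whose trace is the ONLY one matching k points
    an adversary happens to know about them (e.g. from a credit-card swipe)."""
    hits = 0
    for user, trace in enumerate(traces):
        known = set(random.sample(sorted(trace), k))
        matches = [u for u, t in enumerate(traces) if known <= t]
        hits += matches == [user]
    return hits / N_USERS

if __name__ == "__main__":
    for k in (1, 2, 3, 4):
        print(k, unique_fraction(k))
```

With these invented parameters, one known point matches many traces, but by three or four points nearly every user is uniquely pinned down, which is the intuition behind findings that a few spatio-temporal points suffice to re-identify most individuals in “anonymized” mobility data.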

As Buckee knew, public-health surveillance, which serves the community at large, has always bumped up against privacy, which protects the individual. But, in the past, public-health surveillance was typically conducted by contact tracing, with health-care workers privately interviewing individuals to determine their health status and trace their movements. It was labor-intensive, painstaking, memory-dependent work, and, because of that, it was inherently limited in scope and often incomplete or inefficient. (At the start of the pandemic, there were only twenty-two hundred contact tracers in the country.)

Digital technologies, which work at scale, instantly provide detailed information culled from security cameras, license-plate readers, biometric scans, drones, G.P.S. devices, cell-phone towers, Internet searches, and commercial transactions. They can be useful for public-health surveillance in the same way that they facilitate all kinds of spying by governments, businesses, and malign actors. South Korea, which reported its first covid-19 case a month after the United States, has achieved dramatically lower rates of infection and mortality by tracking citizens with the virus via their phones, car G.P.S. systems, credit-card transactions, and public cameras, in addition to a robust disease-testing program. Israel enlisted Shin Bet, its secret police, to repurpose its terrorist-tracking protocols.  China programmed government-installed cameras to point at infected people’s doorways to monitor their movements….(More)”.

EDPB Adopts Guidelines on the Processing of Health Data During COVID-19


Hunton Privacy Blog: “On April 21, 2020, the European Data Protection Board (“EDPB”) adopted Guidelines on the processing of health data for scientific purposes in the context of the COVID-19 pandemic. The aim of the Guidelines is to provide clarity on the most urgent matters relating to health data, such as legal basis for processing, the implementation of adequate safeguards and the exercise of data subject rights.

The Guidelines note that the General Data Protection Regulation (“GDPR”) provides a specific derogation to the prohibition on processing of sensitive data under Article 9, for scientific purposes. With respect to the legal basis for processing, the Guidelines state that consent may be relied on under both Article 6 and the derogation to the prohibition on processing under Article 9 in the context of COVID-19, as long as the requirements for explicit consent are met, and as long as there is no power imbalance that could pressure or disadvantage a reluctant data subject. Researchers should keep in mind that study participants must be able to withdraw their consent at any time. National legislation may also provide an appropriate legal basis for the processing of health data and a derogation to the Article 9 prohibition. Furthermore, national laws may restrict data subject rights, though these restrictions should apply only as is strictly necessary.

In the context of transfers to countries outside the European Economic Area that have not been deemed adequate by the European Commission, the Guidelines note that the “public interest” derogation to the general prohibition on such transfers may be relied on, as well as explicit consent. The Guidelines add, however, that these derogations should only be relied on as a temporary measure and not for repetitive transfers.

The Guidelines highlight the importance of complying with the GDPR’s data protection principles, particularly with respect to transparency. Ideally, where data has not been collected directly from the individual, notice of processing as part of a research project should be provided to the relevant data subject before the project commences, in order to allow the individual to exercise their rights under the GDPR. There may be instances where, considering the number of data subjects, the age of the data and the safeguards in place, it would be impossible or require disproportionate effort to provide notice; in such cases researchers may be able to rely on the exemptions set out under Article 14 of the GDPR.

The Guidelines also highlight that processing for scientific purposes is generally not considered incompatible with the purposes for which data is originally collected, assuming that the principles of data minimization, integrity, confidentiality and data protection by design and by default are complied with (See Guidelines)”.

How data privacy leader Apple found itself in a data ethics catastrophe


Article by Daniel Wu and Mike Loukides: “…Apple learned a critical lesson from this experience. User buy-in cannot end with compliance with rules. It requires ethics, constantly asking how to protect, fight for, and empower users, regardless of what the law says. These strategies contribute to perceptions of trust.

Trust has to be earned, is easily lost, and is difficult to regain….

In our more global, diverse, and rapidly-changing world, ethics may be embodied by the “platinum rule”: Do unto others as they would want done to them. One established field of ethics—bioethics—offers four principles that are related to the platinum rule: nonmaleficence, justice, autonomy, and beneficence.

For organizations that want to be guided by ethics, regardless of what the law says, these principles serve as essential tools for a purpose-driven mission: protecting (nonmaleficence), fighting for (justice), and empowering users and employees (autonomy and beneficence).

An ethics leader protects users and workers in its operations by using governance best practices. 

Before creating the product, it understands both the qualitative and quantitative contexts of key stakeholders, especially those who will be most impacted, identifying their needs and fears. When creating the product, it uses data protection by design, working with cross-functional roles like legal and privacy engineers to embed ethical principles into the lifecycle of the product and formalize data-sharing agreements. Before launching, it audits the product thoroughly and conducts scenario planning to understand potential ethical mishaps, such as perceived or real gender bias or human rights violations in its supply chain. After launching, its terms of service and collection methods are highly readable and enable even disaffected users to resolve issues delightfully.

Ethics leaders also fight for users and workers, who can be forgotten. These leaders may champion enforceable consumer protections in the first place, before a crisis erupts. With social movements, leaders fight powerful actors preying on vulnerable communities or the public at large—and critically examine and ameliorate their own participation in systemic violence. As a result, instead of making last-minute heroic efforts to change compromised operations, they have been iterating all along.

Finally, ethics leaders empower their users and workers. With diverse communities and employees, they co-create new products that help improve basic needs and enable more, including the vulnerable, to increase their autonomy and their economic mobility. These entrepreneurial efforts validate new revenue streams and relationships while incubating next-generation workers who self-govern and push the company’s mission forward. Employees voice their values and diversify their relationships. Alison Taylor, the Executive Director of Ethical Systems, argues that internal processes should “improve [workers’] reasoning and creativity, instead of short-circuiting them.” Enabling this is a culture of psychological safety and training to engage kindly with divergent ideas.

These purpose-led strategies boost employee performance and retention, drive deep customer loyalty, and carve legacies.

To be clear, Apple may be implementing at least some of these strategies already—but perhaps not uniformly or transparently. For instance, Apple has implemented some provisions of the European Union’s General Data Protection Regulation for all US residents—not just EU and CA residents—including the ability to access and edit data. This expensive move, which goes beyond strict legal requirements, was implemented even without public pressure.

But ethics strategies have major limitations leaders must address

As demonstrated by the waves of ethical “principles” released by Fortune 500 companies and commissions, ethics programs can be murky, dominated by a white, male, and Western interpretation.

Furthermore, focusing purely on ethics gives companies an easy way to “free ride” off social goodwill, but ultimately stay unaccountable, given the lack of external oversight over ethics programs. When companies substitute unaccountable data ethics principles for thoughtful engagement with the enforceable data regulation principles, users will be harmed.

Long-term, without the ability to wave a $100 million fine with clear-cut requirements and lawyers trained to advocate for them internally, ethics leaders may face barriers to buy-in. Unlike their sales, marketing, or compliance counterparts, ethics programs do not directly add revenue or reduce costs. In recessions, these “soft” programs may be the first on the chopping block.

As a result of these factors, we will likely see a surge in ethics-washing: well-intentioned companies that talk ethics, but don’t walk it. More will view these efforts as PR-driven ethics stunts, which don’t deeply engage with actual ethical issues. If harmful business models do not change, ethics leaders will be fighting a losing battle….(More)”.