EU Data Protection Rules and U.S. Implications


In Focus by the Congressional Research Service: “U.S. and European citizens are increasingly concerned about ensuring the protection of personal data, especially online. A string of high-profile data breaches at companies such as Facebook and Google has contributed to heightened public awareness. The European Union’s (EU) new General Data Protection Regulation (GDPR)—which took effect on May 25, 2018—has drawn the attention of U.S. businesses and other stakeholders, prompting debate on U.S. data privacy and protection policies.

Both the United States and the 28-member EU assert that they are committed to upholding individual privacy rights and ensuring the protection of personal data, including electronic data. However, data privacy and protection issues have long been sticking points in U.S.-EU economic and security relations, in part because of differences in U.S. and EU legal regimes and approaches to data privacy.

The GDPR highlights some of those differences and poses challenges for U.S. companies doing business in the EU. The United States does not broadly restrict cross-border data flows and has traditionally regulated privacy at a sectoral level to cover certain types of data. The EU considers the privacy of communications and the protection of personal data to be fundamental rights, which are codified in EU law. Europe’s history with fascist and totalitarian regimes informs the EU’s views on data protection and contributes to the demand for strict data privacy controls. The EU regards current U.S. data protection safeguards as inadequate; this has complicated the conclusion of U.S.-EU information-sharing agreements and raised concerns about U.S.-EU data flows….(More)”.

Data Trusts: Ethics, Architecture and Governance for Trustworthy Data Stewardship


Web Science Institute Paper by Kieron O’Hara: “In their report on the development of the UK AI industry, Wendy Hall and Jérôme Pesenti recommend the establishment of data trusts, “proven and trusted frameworks and agreements” that will “ensure exchanges [of data] are secure and mutually beneficial” by promoting trust in the use of data for AI. Hall and Pesenti leave the structure of data trusts open, and the purpose of this paper is to explore two questions: (a) what existing structures can data trusts exploit, and (b) what relationship do data trusts have to trusts as they are understood in law?

The paper defends the following thesis: A data trust works within the law to provide ethical, architectural and governance support for trustworthy data processing.

Data trusts are therefore both constraining and liberating. They constrain: they respect current law, so they cannot render currently illegal actions legal. They are intended to increase trust, and so they will typically act as further constraints on data processors, adding the constraints of trustworthiness to those of law. Yet they also liberate: if data processors are perceived as trustworthy, they will get improved access to data.

Most work on data trusts has up to now focused on gaining and supporting the trust of data subjects in data processing. However, all actors involved in AI – data consumers, data providers and data subjects – have trust issues which data trusts need to address.

Furthermore, it is not only personal data that creates trust issues; the same may be true of any dataset whose release might involve an organisation risking competitive advantage. The paper addresses four areas….(More)”.

Harnessing the Power of Open Data for Children and Families


Article by Kathryn L.S. Pettit and Rob Pitingolo: “Child advocacy organizations, such as members of the KIDS COUNT network, have proven the value of using data to advocate for policies and programs to improve the lives of children and families. These organizations use data to educate policymakers and the public about how children are faring in their communities. They understand the importance of high-quality information for policy and decisionmaking. And in the past decade, many state governments have embraced the open data movement. Their data portals promote government transparency and increase data access for a wide range of users inside and outside government.

At the request of the Annie E. Casey Foundation, which funds the KIDS COUNT network, the authors conducted research to explore how these state data efforts could bring greater benefits to local communities. Interviews with child advocates and open data providers confirmed the opportunity for child advocacy organizations and state governments to leverage open data to improve the lives of children and families. But accomplishing this goal will require new practices on both sides.

This brief first describes the current state of practice for child advocates using data and for state governments publishing open data. It then provides suggestions for what it would take from both sides to increase the use of open data to improve the lives of children and families. Child and family advocates will find five action steps in section 2. These steps encourage them to assess their data needs, build relationships with state data managers, and advocate for new data and preservation of existing data.
State agency staff will find five action steps in section 3. These steps describe how staff can engage diverse stakeholders, including agency staff beyond typical “data people” and data users outside government. Although this brief focuses on state-level institutions, local advocates and governments will find these lessons relevant. In fact, many of the lessons and best practices are based on pioneering efforts at the local level….(More)”.

Big Data and Dahl’s Challenge of Democratic Governance


Alex Ingrams in the Review of Policy Research: “Big data applications have been acclaimed as potentially transformative for the public sector. But, despite this acclaim, most theory of big data is narrowly focused around technocratic goals. The conceptual frameworks that situate big data within democratic governance systems recognizing the role of citizens are still missing. This paper explores the democratic governance impacts of big data in three policy areas using Robert Dahl’s dimensions of control and autonomy. Key impacts and potential tensions are highlighted. There is evidence of impacts on both dimensions, but the dimensions conflict as well as align in notable ways and focused policy efforts will be needed to find a balance….(More)”.

Blockchain and distributed ledger technologies in the humanitarian sector


Report by Giulio Coppi and Larissa Fast at ODI (Overseas Development Institute): “Blockchain and the wider category of distributed ledger technologies (DLTs) promise a more transparent, accountable, efficient and secure way of exchanging decentralised stores of information that are independently updated, automatically replicated and immutable. The key components of DLTs include shared recordkeeping, multi-party consensus, independent validation, tamper evidence and tamper resistance.
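The tamper-evidence property listed above can be illustrated with a minimal hash-chained ledger. This is a sketch of the underlying idea only, not any production DLT or anything from the ODI report; all names and records are illustrative:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash, chaining the ledger."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    """An append-only list of records, each bound to its predecessor by a hash."""
    def __init__(self):
        self.blocks = []  # list of (record, hash) pairs

    def append(self, record: dict):
        prev = self.blocks[-1][1] if self.blocks else "genesis"
        self.blocks.append((record, block_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier record breaks the chain."""
        prev = "genesis"
        for record, h in self.blocks:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = Ledger()
ledger.append({"tx": "donation", "amount": 100})
ledger.append({"tx": "transfer", "amount": 40})
assert ledger.verify()

# Tampering with an earlier record invalidates every subsequent hash check.
ledger.blocks[0] = ({"tx": "donation", "amount": 999}, ledger.blocks[0][1])
assert not ledger.verify()
```

Real DLTs add the other components the report names (multi-party consensus, independent validation, replication across nodes) on top of this basic chained-hash structure.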

Building on these claims, proponents suggest DLTs can address common problems of non-profit organisations and NGOs, such as transparency, efficiency, scale and sustainability. Current humanitarian uses of DLT, illustrated in this report, include financial inclusion, land titling, remittances, improving the transparency of donations, reducing fraud, tracking support to beneficiaries from multiple sources, transforming governance systems, micro-insurance, cross-border transfers, cash programming, grant management and organisational governance.

This report, commissioned by the Global Alliance for Humanitarian Innovation (GAHI), examines current DLT uses by the humanitarian sector to outline lessons for the project, policy and system levels. It offers recommendations to address the challenges that must be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts….(More)”.

Evolving Measurement for an Evolving Economy: Thoughts on 21st Century US Economic Statistics


Ron S. Jarmin at the Journal of Economic Perspectives: “The system of federal economic statistics developed in the 20th century has served the country well, but the current methods for collecting and disseminating these data products are unsustainable. These statistics are heavily reliant on sample surveys. Recently, however, response rates for both household and business surveys have declined, increasing costs and threatening quality. Existing statistical measures, many developed decades ago, may also miss important aspects of our rapidly evolving economy; moreover, they may not be sufficiently accurate, timely, or granular to meet the increasingly complex needs of data users. Meanwhile, the rapid proliferation of online data and more powerful computation make privacy and confidentiality protections more challenging. There is broad agreement on the need to transform government statistical agencies from the 20th century survey-centric model to a 21st century model that blends structured survey data with administrative and unstructured alternative digital data sources. In this essay, I describe some work underway that hints at what 21st century official economic measurement will look like and offer some preliminary comments on what is needed to get there….(More)”.

Privacy and Smart Cities: A Canadian Survey


Report by Sara Bannerman and Angela Orasch: “This report presents the findings of a national survey of Canadians about smart-city privacy conducted in October and November 2018. Our research questions were: How concerned are Canadians about smart-city privacy? How do these concerns intersect with age, gender, ethnicity, and location? Moreover, what are the expectations of Canadians with regard to their ability to control, use, or opt out of data collection in a smart-city context? What rights and privileges do Canadians feel are appropriate with regard to data self-determination, and what types of data are considered more sensitive than others?

What is a smart city?
A ‘smart city’ adopts digital and data-driven technologies in the planning, management and delivery of municipal services. Information and communications technologies (ICTs), data analytics, and the internet of things (IoT) are some of the main components of these technologies, joined by web design, online marketing campaigns and digital services. Such technologies can include smart utility and transportation infrastructure, smart cards, smart transit, camera and sensor networks, or data collection by businesses to provide customized advertisements or other services. Smart-city technologies “monitor, manage and regulate city flows and processes, often in real-time” (Kitchin 2014, 2).

In 2017, a framework agreement was established between Waterfront Toronto, the organization charged with revitalizing Toronto’s waterfront, and Sidewalk Labs, a subsidiary of Google’s parent company Alphabet, to develop a smart city on Toronto’s Eastern waterfront (Sidewalk Toronto 2018). This news was met with questions and concerns from experts in data privacy and the public at large regarding what was to be included in Sidewalk Labs’ smart-city vision. How would the overall governance structure function? How were the privacy rights of residents going to be protected, and what mechanisms, if any, would ensure that protection? The Toronto waterfront is just one of numerous examples of smart-city developments….(More)”.

Consumers kinda, sorta care about their data


Kim Hart at Axios: “A full 81% of consumers say that in the past year they’ve become more concerned with how companies are using their data, and 87% say they’ve come to believe companies that manage personal data should be more regulated, according to a survey out Monday by IBM’s Institute for Business Value.

Yes, but: They aren’t totally convinced they should care about how their data is being used, and many aren’t taking meaningful action after privacy breaches, according to the survey. Despite increasing data risks, 71% say it’s worth sacrificing privacy given the benefits of technology.

By the numbers:

  • 89% say technology companies need to be more transparent about their products
  • 75% say that in the past year they’ve become less likely to trust companies with their personal data
  • 88% say the emergence of technologies like AI increases the need for clear policies about the use of personal data

The other side: Despite increasing awareness of privacy and security breaches, most consumers aren’t taking consequential action to protect their personal data.

  • Fewer than half (45%) report that they’ve updated privacy settings, and only 16% stopped doing business with an entity due to data misuse….(More)”.

The Stanford Open Policing Project


About: “On a typical day in the United States, police officers make more than 50,000 traffic stops. Our team is gathering, analyzing, and releasing records from millions of traffic stops by law enforcement agencies across the country. Our goal is to help researchers, journalists, and policymakers investigate and improve interactions between police and the public.

Currently, a comprehensive, national repository detailing interactions between police and the public doesn’t exist. That’s why the Stanford Open Policing Project is collecting and standardizing data on vehicle and pedestrian stops from law enforcement departments across the country — and we’re making that information freely available. We’ve already gathered 130 million records from 31 state police agencies and have begun collecting data on stops from law enforcement agencies in major cities, as well.
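Standardizing records from dozens of agencies is, at its core, a schema-mapping exercise: each department publishes its own field names, which must be translated into one shared vocabulary before stops can be compared nationally. A minimal sketch of that step (the agency names, field names, and mappings here are hypothetical, not the project's actual schemas):

```python
# Hypothetical per-agency schemas; real departments publish very different layouts.
AGENCY_SCHEMAS = {
    "state_a": {"stop_date": "date", "subj_race": "subject_race", "out": "outcome"},
    "state_b": {"DATE": "date", "RACE": "subject_race", "RESULT": "outcome"},
}

def standardize(agency: str, raw_rows: list) -> list:
    """Map agency-specific field names onto one shared schema, dropping unmapped fields."""
    mapping = AGENCY_SCHEMAS[agency]
    return [{mapping[k]: v for k, v in row.items() if k in mapping}
            for row in raw_rows]

rows = standardize("state_b",
                   [{"DATE": "2017-03-01", "RACE": "white", "RESULT": "warning"}])
# rows[0] == {"date": "2017-03-01", "subject_race": "white", "outcome": "warning"}
```

Once every agency's records share one schema, analyses of stop rates and outcomes can run uniformly across the combined dataset.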

We, the Stanford Open Policing Project, are an interdisciplinary team of researchers and journalists at Stanford University. We are committed to combining the academic rigor of statistical analysis with the explanatory power of data journalism….(More)”.

Algorithmic fairness: A code-based primer for public-sector data scientists


Paper by Ken Steif and Sydney Goldstein: “As the number of government algorithms grows, so does the need to evaluate algorithmic fairness. This paper has three goals. First, we ground the notion of algorithmic fairness in the context of disparate impact, arguing that for an algorithm to be fair, its predictions must generalize across different protected groups. Next, two algorithmic use cases are presented with code examples for how to evaluate fairness. Finally, we promote the concept of an open source repository of government algorithmic “scorecards,” allowing stakeholders to compare across algorithms and use cases….(More)”.
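The paper's own code examples are in the paper itself; as a general illustration of the disparate-impact framing it uses, one common check compares positive-prediction rates across protected groups. This is a sketch only, with toy data and illustrative group labels, not the authors' implementation:

```python
from collections import defaultdict

def selection_rates(preds, groups):
    """Positive-prediction rate within each protected group."""
    pos, tot = defaultdict(int), defaultdict(int)
    for p, g in zip(preds, groups):
        tot[g] += 1
        pos[g] += int(p)
    return {g: pos[g] / tot[g] for g in tot}

def disparate_impact_ratio(preds, groups):
    """Min group selection rate over max; the 'four-fifths rule' flags ratios below 0.8."""
    rates = selection_rates(preds, groups)
    return min(rates.values()) / max(rates.values())

# Toy binary predictions for two groups (illustrative, not from the paper).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact_ratio(preds, groups)  # 0.25 / 0.75 ≈ 0.33 → flagged
```

A fuller evaluation, as the paper argues, would also compare error rates (not just selection rates) across groups, since an algorithm can select groups at equal rates while misclassifying one far more often.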