Governing Smart Data in the Public Interest: Lessons from Ontario’s Smart Metering Entity


Paper by Teresa Scassa and Merlynda Vilain: “The collection of vast quantities of personal data from embedded sensors is increasingly an aspect of urban life. This type of data collection is a feature of so-called smart cities, and it raises important questions about data governance. This is particularly the case where the data may be made available for reuse by others and for a variety of purposes.

This paper focuses on the governance of data captured through “smart” technologies and uses Ontario’s smart metering program as a case study. Ontario rolled out mandatory smart metering for electrical consumption in the early 2000s largely to meet energy conservation goals. In doing so, it designed a centralized data governance system overseen by the Smart Metering Entity to manage smart meter data and to protect consumer privacy. As interest in access to the data grew among third parties, and as new potential applications for the data emerged, the regulator sought to develop a model for data sharing that would protect privacy in relation to these new uses and that would avoid uses that might harm the public interest…(More)”.

How I Learned to Stop Worrying and Love the GDPR


Ariane Adam at DataStewards.net: “The General Data Protection Regulation (GDPR) was approved by the EU Parliament on 14 April 2016 and came into force on 25 May 2018….

The coming into force of this important regulation has created confusion and concern about penalties, particularly in the private sector…. There is also apprehension about how the GDPR will affect the opening and sharing of valuable databases. At a time when open data is increasingly shaping the choices we make, from finding the fastest route home to choosing the best medical or education provider, misinformation about data protection principles leads to concerns that ‘privacy’ will be used as a smokescreen to not publish important information. Allaying the concerns of private organisations and businesses in this area is particularly important, as the datasets that most matter, and that could have the most impact if they were open, often do not belong to governments.

Looking at the regulation and its effects about one year on, this paper advances a positive case for the GDPR and aims to demonstrate that a proper understanding of its underlying principles can not only assist in promoting consumer confidence and therefore business growth, but also enable organisations to safely open and share important and valuable datasets….(More)”.

Trusted data and the future of information sharing


MIT Technology Review: “Data in some form underpins almost every action or process in today’s modern world. Consider that even farming, the world’s oldest industry, is on the verge of a digital revolution, with AI, drones, sensors, and blockchain technology promising to boost efficiencies. The market value of an apple will increasingly reflect not only traditional farming inputs but also some value of modern data, such as weather patterns, soil acidity levels, and agri-supply-chain information. By 2022 more than 60% of global GDP will be digitized, according to IDC.

Governments seeking to foster growth in their digital economies need to be more active in encouraging safe data sharing between organizations. Tolerating the sharing of data and stepping in only where security breaches occur is no longer enough. Sharing data across different organizations enables the whole ecosystem to grow and can be a unique source of competitive advantage. But businesses need guidelines and support in how to do this effectively.   

This is how Singapore’s data-sharing worldview has evolved, according to Janil Puthucheary, senior minister of state for communications and information and transport, upon launching the city-state’s new Trusted Data Sharing Framework in June 2019.

The Framework, a product of consultations between Singapore’s Infocomm Media Development Authority (IMDA), its Personal Data Protection Commission (PDPC), and industry players, is intended to create a common data-sharing language for relevant stakeholders. Specifically, it addresses four common categories of concerns with data sharing: how to formulate an overall data-sharing strategy, legal and regulatory considerations, technical and organizational considerations, and the actual operationalizing of data sharing.

For instance, companies often have trouble assessing the value of their own data, a necessary first step before sharing should even be considered. The framework describes the three general approaches used: market-, cost-, and income-based. The legal and regulatory section details when businesses can, among other things, seek exemptions from Singapore’s Personal Data Protection Act.
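
As a rough, hypothetical illustration of the income-based approach mentioned above (our own sketch, not part of the IMDA framework), a dataset can be valued as the present value of the incremental cash flows it is expected to generate; all figures below are invented:

```python
# Hypothetical illustration of the income-based valuation approach (not from the framework).
# The dataset is valued as the discounted sum of the extra revenue it is expected to generate.
expected_annual_uplift = [120_000, 140_000, 150_000]  # assumed incremental revenue per year
discount_rate = 0.08                                  # assumed cost of capital

value = sum(cash / (1 + discount_rate) ** year
            for year, cash in enumerate(expected_annual_uplift, start=1))
print(f"Income-based estimate of the dataset's value: ${value:,.0f}")
```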

The technical and organizational chapter includes details on governance, infrastructure security, and risk management. Finally, the section on operational aspects of data sharing includes guidelines for when it is appropriate to use shared data for a secondary purpose or not….(More)”.

How credit unions could help people make the most of personal data


Dylan Walsh at MIT Sloan: “In May of 2018, the EU adopted the General Data Protection Regulation, referred to by The New York Times as “the world’s toughest rules to protect people’s online data.” Among its many safeguards, the GDPR gave individuals ownership of their personal data and thereby restricted its collection and use by businesses.

“That’s a good first start,” said Alex Pentland, a co-creator of the MIT Media Lab who played a foundational role in the development of the GDPR. “But ownership isn’t enough. Simply having the rights to your data doesn’t allow you to do much with it.” In response to this shortcoming, Pentland and his team have proposed the establishment of data cooperatives.

The idea is conceptually straightforward: Individuals would pool their personal data in a single institution — just as they pool money in banks — and that institution would both protect the data and put it to use. Pentland and his team suggest credit unions as one type of organization that could fill this role. And while companies would need to request permission to use consumer data, consumers themselves could request analytic insights from the cooperative. Lyft drivers, for instance, might compare their respective incomes across routes, and ride-share passengers could compare how much they pay relative to other cooperative members….
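
To make the cooperative’s role a little more concrete, here is a minimal, hypothetical sketch of the kind of member-facing query such an institution might answer: pooled trip records are aggregated by route and only group-level statistics are returned, with routes contributed by too few members suppressed. The data, field names, and threshold are illustrative assumptions, not a description of any existing cooperative:

```python
# Hypothetical sketch of a member-facing query a data cooperative might answer:
# pooled trip records are aggregated by route, and only group-level statistics are
# returned, with routes contributed by too few members suppressed. All data invented.
from collections import defaultdict
from statistics import median

pooled_trips = [
    {"member": "d1", "route": "airport",  "earnings": 34.0},
    {"member": "d2", "route": "airport",  "earnings": 29.5},
    {"member": "d3", "route": "airport",  "earnings": 31.0},
    {"member": "d1", "route": "downtown", "earnings": 18.0},
    {"member": "d4", "route": "downtown", "earnings": 22.5},
    {"member": "d5", "route": "downtown", "earnings": 20.0},
]

MIN_MEMBERS = 3  # illustrative threshold to avoid exposing any single member's records

earnings_by_route = defaultdict(list)
members_by_route = defaultdict(set)
for trip in pooled_trips:
    earnings_by_route[trip["route"]].append(trip["earnings"])
    members_by_route[trip["route"]].add(trip["member"])

for route, earnings in earnings_by_route.items():
    if len(members_by_route[route]) >= MIN_MEMBERS:
        print(f"{route}: median earnings ${median(earnings):.2f} "
              f"across {len(members_by_route[route])} drivers")
```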

Several states have now asked credit unions to look into the idea of data cooperatives, but the model has yet to gain a foothold. “Credit unions are conservative,” Pentland said. But assuming the idea gains traction, the infrastructure won’t be difficult to build. Technology exists to automatically record and organize all the data that we give to companies; and credit unions, which have 100 million members nationwide, possess charters readymade to take on data management….(More)”.

Mobile phone data’s potential for informing infrastructure planning in developing countries


Paper by Hadrien Salat, Zbigniew Smoreda, and Markus Schläpfer: “High-quality census data are not always available in developing countries. Instead, mobile phone data are becoming a go-to proxy to evaluate population density, activity and social characteristics. They offer additional advantages for infrastructure planning, such as being updated in real time, including mobility information and recording temporary visitors’ activity. We combine various data sets from Senegal to evaluate mobile phone data’s potential to replace insufficient census data for infrastructure planning in developing countries. As an applied case, we test their ability to accurately predict domestic electricity consumption. We show that, contrary to common belief, average mobile phone activity is not well correlated with population density. However, it can provide better electricity consumption estimates than basic census data. More importantly, we successfully use curve and network clustering techniques to enhance the accuracy of the predictions, to recover good population mapping potential, and to reduce the amount of data that needs to be collected for planning to substantially smaller samples….(More)”.
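
As a loose illustration of the curve-clustering idea in the abstract, the sketch below clusters synthetic daily activity curves and then regresses an invented consumption figure on the curves and their cluster memberships. It is not the authors’ pipeline or data, just a toy example under stated assumptions:

```python
# Toy illustration of curve clustering for consumption prediction (synthetic data only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-antenna activity: 500 areas, each with a 24-hour activity
# curve whose evening peak is shifted by a few hours from area to area.
hours = np.arange(24)
shifts = rng.integers(-6, 7, size=500)
profiles = np.stack([np.roll(np.exp(-0.5 * ((hours - 19) / 3.0) ** 2), s) for s in shifts])
profiles += rng.normal(0, 0.02, profiles.shape)
profiles = np.clip(profiles, 0, None)
profiles /= profiles.sum(axis=1, keepdims=True)  # keep the shape of activity, not its volume

# "Curve clustering" step: group areas by the shape of their daily activity curve.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles)

# Invented target: consumption driven partly by the share of activity in the evening hours.
evening_share = profiles[:, 18:22].sum(axis=1)
consumption = 50 + 400 * evening_share + rng.normal(0, 5, size=500)

# Predict consumption from the raw curves plus one-hot cluster membership.
X = np.hstack([profiles, np.eye(5)[labels]])
X_tr, X_te, y_tr, y_te = train_test_split(X, consumption, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(f"Held-out R^2: {model.score(X_te, y_te):.2f}")
```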

Sharing data can help prevent public health emergencies in Africa


Moses John Bockarie at The Conversation: “Global collaboration and sharing data on public health emergencies is important to fight the spread of infectious diseases. If scientists and health workers can openly share their data across regions and organisations, countries can be better prepared and respond faster to disease outbreaks.

This was the case with the 2014 Ebola outbreak in West Africa. Close to 100 scientists, clinicians, health workers and data analysts from around the world worked together to help contain the spread of the disease.

But there’s a lack of trust when it comes to sharing data in north-south collaborations. African researchers are suspicious that their northern partners could publish data without acknowledging the input from the less resourced southern institutions where the data was first generated. Until recently, the authorship of key scientific publications, based on collaborative work in Africa, was dominated by scientists from outside Africa.

The Global Research Collaboration for Infectious Disease Preparedness, an international network of major research funding organisations, recently published a roadmap to data sharing. This may go some way to address the data sharing challenges. Members of the network are expected to encourage their grantees to be inclusive and publish their results in open access journals. The network includes major funders of research in Africa like the European Commission, Bill & Melinda Gates Foundation and Wellcome Trust.

The roadmap provides a guide on how funders can accelerate research data sharing by the scientists they fund. It recommends that research funding institutions make real-time, external data sharing a requirement, and that research be embedded in a multi-disciplinary disease network to strengthen responses to public health emergencies.

In addition, funding should focus on strengthening institutions’ capacity on a number of fronts. This includes data management, improving data policies, building trust and aligning tools for data sharing.

Allowing researchers to freely access data generated by global academic counterparts is critical for rapidly informing disease control strategies in public health emergencies….(More)”.

Clinical Trial Data Transparency and GDPR Compliance: Implications for Data Sharing and Open Innovation


Paper by Timo Minssen, Rajam N. and Marcel Bogers: “Recent EU initiatives and legislation have considerably increased public access to clinical trial data (CTD). These developments are generally much welcomed for the enhancement of science, trust, and open innovation. However, they also raise many questions and concerns, not least at the interface between CTD transparency and other areas of evolving EU law on the protection of trade secrets, intellectual property rights and privacy.

This paper focuses on privacy issues and on the interrelation between developments in transparency and the EU’s new General Data Protection Regulation 2016/679 (GDPR). More specifically, this paper examines: (1) the genesis of EU transparency regulations, including the incidents, developments and policy concerns that have shaped them; (2) the features and implications of the GDPR which are relevant in the context of clinical trials; and (3) the risk for tensions between the GDPR and the policy goals of CTD transparency, including their implications for data sharing and open innovation. Ultimately, we stress that these and other related factors must be carefully considered and addressed to reap the full benefits of CTD transparency….(More)”.

We Need a Data-Rich Picture of What’s Killing the Planet


Clive Thompson at Wired: “…Marine litter isn’t the only hazard whose contours we can’t fully see. The United Nations has 93 indicators to measure the environmental dimensions of “sustainable development,” and amazingly, the UN found that we have little to no data on 68 percent of them—like how rapidly land is being degraded, the rate of ocean acidification, or the trade in poached wildlife. Sometimes this is because we haven’t collected it; in other cases some data exists but hasn’t been shared globally, or it’s in a myriad of incompatible formats. No matter what, we’re flying blind. “And you can’t manage something if you can’t measure it,” says David Jensen, the UN’s head of environmental peacebuilding.

In other words, if we’re going to help the planet heal and adapt, we need a data revolution. We need to build a “digital ecosystem for the environment,” as Jensen puts it.

The good news is that we’ve got the tools. If there’s one thing tech excels at (for good and ill), it’s surveillance, right? We live in a world filled with cameras and pocket computers, titanic cloud computing, and the eerily sharp insights of machine learning. And this stuff can be used for something truly worthwhile: studying the planet.

There are already some remarkable cases of tech helping to break through the fog. Consider Global Fishing Watch, a nonprofit that tracks the world’s fishing vessels, looking for overfishing. They use everything from GPS-like signals emitted by ships to satellite infrared imaging of ship lighting, plugged into neural networks. (It’s massive, cloud-scale data: over 60 million data points per day, making the AI more than 90 percent accurate at classifying what type of fishing activity a boat is engaged in.)

“If a vessel is spending its time in an area that has little tuna and a lot of sharks, that’s questionable,” says Brian Sullivan, cofounder of the project and a senior program manager at Google Earth Outreach. Crucially, Global Fishing Watch makes its data open to anyone, so now the National Geographic Society is using it to lobby for new marine preserves, and governments and nonprofits use it to target illicit fishing.
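
For a rough sense of what “classifying fishing activity from vessel tracks” can look like, here is a toy sketch that trains a small neural network on invented track-segment features (speed and course statistics). It is emphatically not Global Fishing Watch’s model, data, or scale, just an illustration of the general idea:

```python
# Toy activity-type classifier over invented AIS-like track features (not GFW's pipeline).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000

# Hypothetical per-segment features: mean speed (knots), speed variance, course changes/hour.
fishing = np.column_stack([rng.normal(3, 1, n), rng.normal(2.5, 0.8, n), rng.normal(12, 3, n)])
transit = np.column_stack([rng.normal(11, 2, n), rng.normal(0.8, 0.4, n), rng.normal(2, 1, n)])
X = np.vstack([fishing, transit])
y = np.array([1] * n + [0] * n)  # 1 = fishing-like behaviour, 0 = transiting

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=1)
clf.fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```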

If we want better environmental data, we’ll need for-profit companies with the expertise and high-end sensors to pitch in too. Planet, a firm with an array of 140 satellites, takes daily snapshots of the entire Earth. Customers like insurance and financial firms love that sort of data. (It helps them understand weather and climate risk.) But Planet also offers it to services like Global Forest Watch, which maps deforestation and makes the information available to anyone (like activists who help bust illegal loggers). Meanwhile, Google’s skill in cloud-based data crunching helps illuminate the state of surface water: Google digitized 30 years of measurements from around the globe—extracting some from ancient magnetic tapes—then created an easy-to-use online tool that lets resource-poor countries figure out where their water needs protecting….(More)”.

Access to Data in Connected Cars and the Recent Reform of the Motor Vehicle Type Approval Regulation


Paper by Wolfgang Kerber and Daniel Moeller: “The need for regulatory solutions for access to in-vehicle data and resources of connected cars is one of the big controversial and unsolved policy issues. Last year the EU revised the Motor Vehicle Type Approval Regulation, which already entailed a FRAND-like solution for access to repair and maintenance information (RMI) to protect competition on the automotive aftermarkets. However, the transition to connected cars changes the technological conditions for this regulatory solution significantly. This paper analyzes the reform of the type approval regulation and shows that the regulatory solutions for access to RMI are so far insufficient to deal with the challenges that come with increased connectivity, e.g. with regard to new remote diagnostic, repair and maintenance services. Therefore, an important result of the paper is that the transition to connected cars will require a further reform of the rules for regulated access to RMI (esp. with regard to data access, interoperability, and safety/security issues). However, our analysis also suggests that the basic approach of the current regulated access regime for RMI in the type approval regulation can also be a model for developing general solutions to the currently unsolved problems of access to in-vehicle data and resources in the ecosystem of connected driving….(More)”.

The Education Data Collaborative: A new kind of partnership.


About: “Whether we work within schools or as part of the broader ecosystem of parent-teacher associations, and philanthropic, nonprofit, and volunteer organizations, we need data to guide decisions about investing our time and resources.

This data is typically expensive to gather, often unvalidated (e.g. self-reported), and commonly available only to those who collect or report it. It can even be hard to ask for data when it’s not clear what’s available. At the same time, information – in the form of discrete research, report-card style PDFs, or static websites – is everywhere. The result is that many already resource-thin organizations that could be collaborating on strategies to help kids advance instead spend a lot of time in isolation collecting and searching for data.

In the past decade, we’ve seen solid progress in addressing part of the problem: the emergence of connected longitudinal data systems (LDS). These warehouses and linked databases contain data that can help us understand how students progress over time. No personally identifiable information (or PII) is shared, yet the data can reveal where interventions are most needed. Because these systems are typically designed for researchers and policy professionals, they are rarely accessible to the educators, parents, and partners – arts, sports, academic enrichment (e.g. STEM), mentoring, and family support programs – that play such important roles in helping young people learn and succeed…
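
One common way systems like an LDS can link records across years without sharing identifiers is keyed pseudonymization. The sketch below is a minimal illustration of that generic technique, not a description of any particular state system; the field names, key handling, and records are assumptions, and real deployments add further safeguards (key management, small-cell suppression, and so on):

```python
# Minimal sketch of keyed pseudonymization for longitudinal linking (a generic technique,
# not a description of any particular state LDS). The key stays with the data steward;
# analysts only ever see the derived tokens. Field names and records are invented.
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-steward-only"  # never shared with analysts

def pseudonymize(student_id: str) -> str:
    """Same student -> same token across years; not reversible without the key."""
    return hmac.new(SECRET_KEY, student_id.encode("utf-8"), hashlib.sha256).hexdigest()

records_2023 = [{"student_id": "A123", "grade": 8, "reading_score": 71}]
records_2024 = [{"student_id": "A123", "grade": 9, "reading_score": 78}]

linked = {}
for rec in records_2023 + records_2024:
    token = pseudonymize(rec.pop("student_id"))  # strip the direct identifier
    linked.setdefault(token, []).append(rec)

for token, history in linked.items():
    print(token[:12], history)  # a longitudinal history with no direct identifier attached
```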

“We need open tools for the ecosystem – parents, volunteers, non-profit organizations and the foundations and agencies that support them. These partners can realize significant benefit from the same kind of data policy makers and education leaders hold in their LDS.


That’s why we’re launching the Education Data Collaborative. Working together, we can build tools that help us use data to improve the design, efficacy, and impact of programs and interventions, and find new ways to work with public education systems to achieve great things for kids. …Data collaboratives, data trusts, and other kinds of multi-sector data partnerships are among the most important civic innovations to emerge in the past decade….(More)”