Using Data Sharing Agreements as Tools of Indigenous Data Governance: Current Uses and Future Options


Paper by Martinez, A. and Rainie, S. C.: “Indigenous communities and scholars have been influencing a shift in participation and inclusion in academic and agency research over the past two decades. As a response, Indigenous peoples are increasingly asking research questions and developing their own studies rooted in their cultural values. They use the study results to rebuild their communities and to protect their lands. This process of Indigenous-driven research has led to partnering with academic institutions, establishing research review boards, and entering into data sharing agreements to protect environmental data, community information, and local and traditional knowledges.

Data sharing agreements provide insight into how Indigenous nations are addressing the key areas of data collection, ownership, application, storage, and the potential for data reuse in the future. By understanding this mainstream data governance mechanism and how it has been applied and used in the past, we aim to describe how Indigenous nations and communities negotiate data protection and control with researchers.

The project described here reviewed publicly available data sharing agreements that focus on research with Indigenous nations and communities in the United States. We utilized qualitative analysis methods to identify specific areas of focus in the data sharing agreements, whether or not traditional or cultural values were included in the language of the data sharing agreements, and how the agreements defined data. The results detail how Indigenous peoples currently use data sharing agreements and potential areas of expansion for language to include in data sharing agreements as Indigenous peoples address the research needs of their communities and the protection of community and cultural data….(More)”.

State Capability, Policymaking and the Fourth Industrial Revolution


Demos Helsinki: “The world as we know it is built on the structures of the industrial era – and these structures are falling apart. Yet the vision of a new, sustainable and fair post-industrial society remains unclear. This discussion paper is the result of a collaboration between a group of organisations interested in the implications of rapid technological development for policymaking processes and the knowledge systems that inform policy decisions.

In the discussion paper, we set out to explore the main opportunities and concerns that the Fourth Industrial Revolution presents for policymaking and knowledge systems, particularly in middle-income countries. Overall, middle-income countries are home to five billion of the world’s seven billion people and 73 per cent of the world’s poor people; they represent about one-third of the global Gross Domestic Product (GDP) and are major engines of global growth (World Bank 2018).

The paper is co-produced with Capability (Finland), Demos Helsinki (Finland), HELVETAS Swiss Intercooperation (Switzerland), Politics & Ideas (global), Southern Voice (global), UNESCO Montevideo (Uruguay) and Using Evidence (Canada).

The guiding questions for this paper are:

– What are the critical elements of the Fourth Industrial Revolution?

– What does the literature say about the impact of this revolution on societies and economies, and in particular on middle-income countries?

– What are the implications of the Fourth Industrial Revolution for the achievement of the Sustainable Development Goals (SDGs) in middle-income countries?

– What does the literature say about the challenges for governance and the ways knowledge can inform policy during the Fourth Industrial Revolution?…(More)”.

Full discussion paper: “State Capability, Policymaking and the Fourth Industrial Revolution: Do Knowledge Systems Matter?”

The privacy threat posed by detailed census data


Gillian Tett at the Financial Times: “Wilbur Ross suffered the political equivalent of a small(ish) black eye last month: a federal judge blocked the US commerce secretary’s attempts to insert a question about citizenship into the 2020 census and accused him of committing “egregious” legal violations.

The Supreme Court has agreed to hear the administration’s appeal in April. But while this high-profile fight unfolds, there is a second, less noticed, census issue about data privacy emerging that could have big implications for businesses (and citizens). Last weekend John Abowd, the Census Bureau’s chief scientist, told an academic gathering that statisticians had uncovered shortcomings in the protection of personal data in past censuses. There is no public evidence that anyone has actually used these weaknesses to hack records, and Mr Abowd insisted that the bureau is using cutting-edge tools to fight back. But, if nothing else, this revelation shows the mounting problem around data privacy. Or, as Mr Abowd noted: “These developments are sobering to everyone.” These flaws are “not just a challenge for statistical agencies or internet giants,” he added, but affect any institution engaged in internet commerce and “bioinformatics”, as well as commercial lenders and non-profit survey groups. Bluntly, this includes most companies and banks.

The crucial problem revolves around what is known as “re-identification” risk. When companies and government institutions amass sensitive information about individuals, they typically protect privacy in two ways: they hide the full data set from outside eyes or they release it in an “anonymous” manner, stripped of identifying details. The Census Bureau does both: it is required by law to publish detailed data and protect confidentiality. Since 1990, it has tried to resolve these contradictory mandates by using “household-level swapping” — moving some households from one geographic location to another to generate enough uncertainty to prevent re-identification. This used to work. But today there are so many commercially available data sets and computers are so powerful that it is possible to re-identify “anonymous” data by combining data sets. …
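
To make the “re-identification” risk concrete, here is a minimal sketch (in Python, with entirely hypothetical records) of the kind of linkage attack described above: an “anonymous” release stripped of names is re-identified by matching it against a commercially available dataset on shared quasi-identifiers such as ZIP code, age, and sex.

```python
# Hypothetical illustration of a linkage attack: the "anonymous" release keeps
# a sensitive attribute but drops names; an auxiliary commercial dataset holds
# names plus the same quasi-identifiers (ZIP code, age, sex).

anonymous_release = [
    {"zip": "29601", "age": 34, "sex": "F", "income_bracket": "high"},
    {"zip": "29605", "age": 71, "sex": "M", "income_bracket": "low"},
]

commercial_dataset = [
    {"name": "J. Smith", "zip": "29601", "age": 34, "sex": "F"},
    {"name": "R. Jones", "zip": "29605", "age": 71, "sex": "M"},
]

def reidentify(release, auxiliary):
    """Match records on quasi-identifiers; a unique match reveals identity."""
    matches = []
    for record in release:
        candidates = [
            aux for aux in auxiliary
            if (aux["zip"], aux["age"], aux["sex"])
            == (record["zip"], record["age"], record["sex"])
        ]
        if len(candidates) == 1:  # uniqueness is what defeats the anonymization
            matches.append((candidates[0]["name"], record["income_bracket"]))
    return matches

print(reidentify(anonymous_release, commercial_dataset))
# [('J. Smith', 'high'), ('R. Jones', 'low')]
```

Household-level swapping is meant to break exactly these unique matches; the article’s point is that, with enough auxiliary data sets, it no longer reliably does.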

Thankfully, statisticians think there is a solution. The Census Bureau now plans to use a technique known as “differential privacy”, which would introduce “noise” into the public statistics, using complex algorithms. This technique is expected to create just enough statistical fog to protect personal confidentiality in published data — while also preserving information in an encrypted form that statisticians can later unscramble, as needed. Companies such as Google, Microsoft and Apple have already used variants of this technique for several years, seemingly successfully. However, nobody has employed this system on the scale that the Census Bureau needs — or in relation to such a high-stakes event. And the idea has sparked some controversy because some statisticians fear that even “differential privacy” tools can be hacked — and others fret it makes data too “noisy” to be useful….(More)”.
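
As a rough sketch of the core idea behind differential privacy — the Laplace mechanism applied to a counting query, not the Census Bureau’s far more elaborate production system — the following shows how calibrated noise is added to a published statistic (all figures hypothetical):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: number of households reported for one census block.
true_households = 412
print(dp_count(true_households, epsilon=0.5))  # e.g. 408.3 — noisy but usable
```

Smaller values of epsilon mean more noise and stronger privacy; the Bureau’s dilemma, as the article notes, is tuning that trade-off so that small-area statistics remain useful.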

A Parent-To-Parent Campaign To Get Vaccine Rates Up


Alex Olgin at NPR: “In 2017, Kim Nelson had just moved her family back to her hometown in South Carolina. Boxes were still scattered around the apartment, and while her two young daughters played, Nelson scrolled through a newspaper article on her phone. It said religious exemptions for vaccines had jumped nearly 70 percent in recent years in the Greenville area — the part of the state she had just moved to.

She remembers yelling to her husband in the other room, “David, you have to get in here! I can’t believe this.”

Up until that point, Nelson hadn’t run into mom friends who didn’t vaccinate….

Nelson started her own group, South Carolina Parents for Vaccines. She began posting scientific articles online. She started responding to private messages from concerned parents with specific questions. She also found that positive reinforcement was important and would roam around the mom groups, sprinkling affirmations.

“If someone posts, ‘My child got their two-months shots today,’ ” Nelson says, she’d quickly post a follow-up comment: “Great job, mom!”

Nelson was inspired by peer-focused groups around the country doing similar work. Groups with national reach like Voices for Vaccines and regional groups like Vax Northwest in Washington state take a similar approach, encouraging parents to get educated and share facts about vaccines with other parents….

Public health specialists are raising concerns about the need to improve vaccination rates. But efforts to reach vaccine-hesitant parents often fail. When presented with facts about vaccine safety, parents often remain entrenched in a decision not to vaccinate.

Pediatricians could play a role — and many do — but they’re not compensated to have lengthy discussions with parents, and some of them find it a frustrating task. That has left an opening for alternative approaches, like Nelson’s.

Nelson thought it would be best to zero in on moms who were still on the fence about vaccines.

“It’s easier to pull a hesitant parent over than it is somebody who is firmly anti-vax,” Nelson says. She explains that parents who oppose vaccination often feel so strongly about it that they won’t engage in a discussion. “They feel validated by that choice — it’s part of community, it’s part of their identity.”…(More)”.

Data Fiduciary


/ˈdeɪtə fəˈduʃiˌɛri/

A person or a business that manages individual data in a trustworthy manner. Also ‘information fiduciary’, ‘data trust’, or ‘data steward’.

‘Fiduciary’ is an old concept in the legal world. Its Latin origin is fidere, which means to trust. In the legal context, a fiduciary is usually a person who is trusted to make decisions on how to manage an asset or information, within constraints given by another person who owns that asset or information. Examples of a fiduciary relationship include homeowner and property manager, patient and doctor, or client and attorney. The latter has the ability to make decisions about the entrusted asset that fall within the conditions agreed upon by the former.

Jack M. Balkin and Jonathan Zittrain made the case for an “information fiduciary”, pointing out the urgency of adopting fiduciary practices in the data space. In The Atlantic, they wrote:

“The information age has created new kinds of entities that have many of the trappings of fiduciaries—huge online businesses, like Facebook, Google, and Uber, that collect, analyze, and use our personal information—sometimes in our interests and sometimes not. Like older fiduciaries, these businesses have become virtually indispensable. Like older fiduciaries, these companies collect a lot of personal information that could be used to our detriment. And like older fiduciaries, these businesses enjoy a much greater ability to monitor our activities than we have to monitor theirs. As a result, many people who need these services often shrug their shoulders and decide to trust them. But the important question is whether these businesses, like older fiduciaries, have legal obligations to be trustworthy. The answer is that they should.”

Recent controversy involving Facebook data and Cambridge Analytica provides another reason why companies collecting data from users need to act as fiduciaries. Within this framework, individuals would have a say over how and where their data can be used.

Another call for a form of data fiduciary comes from Google’s Sidewalk Labs project in Canada. After collecting data to inform urban planning in the Quayside area in Toronto, Sidewalk Labs announced that they would not be claiming ownership over the data that they collected and that the data should be “under the control of an independent Civic Data Trust.”

In a blog post, Sidewalk Labs wrote that:

“Sidewalk Labs believes an independent Civic Data Trust should become the steward of urban data collected in the physical environment. This Trust would approve and control the collection of, and manage access to, urban data originating in Quayside. The Civic Data Trust would be guided by a charter ensuring that urban data is collected and used in a way that is beneficial to the community, protects privacy, and spurs innovation and investment.”

Realizing the potential of creating new public value through an exchange of data, or data collaboratives, the GovLab “is advancing the concept and practice of Data Stewardship to promote responsible data leadership that can address the challenges of the 21st century.” A Data Steward mirrors some of the responsibilities of a data fiduciary, in that they are “responsible for determining what, when, how and with whom to share private data for public good.”

Balkin and Zittrain suggest that there is an asymmetrical power relationship between companies that collect user-generated data and the users themselves, in that these companies are becoming indispensable and gaining more control over an individual’s data. However, these companies are currently not legally obligated to be trustworthy, meaning that there is no legal consequence when they use this data in a way that breaches privacy or is not in the customers’ best interest.

Under a data fiduciary framework, those who are trusted with data take on legal rights and responsibilities regarding its use. In a case where a breach of trust happens, the trustee will have to face legal consequences.


Nudging Citizens through Technology in Smart Cities


Sofia Ranchordas in the International Review of Law, Computers & Technology: “In the last decade, several smart cities throughout the world have started employing Internet of Things, big data, and algorithms to nudge citizens to save more water and energy, live healthily, use public transportation, and participate more actively in local affairs. Thus far, the potential and implications of data-driven nudges and behavioral insights in smart cities have remained an overlooked subject in the legal literature. Nevertheless, combining technology with behavioral insights may allow smart cities to nudge citizens more systematically and help these urban centers achieve their sustainability goals and promote civic engagement. For example, in Boston, real-time feedback on driving has increased road safety and in Eindhoven, light sensors have been used to successfully reduce nightlife crime and disturbance. While nudging tends to be well-intended, data-driven nudges raise a number of legal and ethical issues. This article offers a novel and interdisciplinary perspective on nudging which delves into the legal, ethical, and trust implications of collecting and processing large amounts of personal and impersonal data to influence citizens’ behavior in smart cities….(More)”.

Twentieth Century Town Halls: Architecture of Democracy


Book by Jon Stewart: “This is the first book to examine the development of the town hall during the twentieth century and the way in which these civic buildings have responded to the dramatic political, social and architectural changes which took place during the period. Following an overview of the history of the town hall as a building type, it examines the key themes, variations and lessons which emerged during the twentieth century. This is followed by 20 case studies from around the world which include plans, sections and full-colour illustrations. Each of the case studies examines the town hall’s procurement, the selection of its architect and the building design, and critically analyses its success and contribution to the type’s development. The case studies include:

Copenhagen Town Hall, Denmark, Martin Nyrop

Stockholm City Hall, Sweden, Ragnar Ostberg

Hilversum Town Hall, the Netherlands, Willem M. Dudok

Walthamstow Town Hall, Britain, Philip Dalton Hepworth

Oslo Town Hall, Norway, Arnstein Arneberg and Magnus Poulsson

Casa del Fascio, Como, Italy, Giuseppe Terragni

Aarhus Town Hall, Denmark, Arne Jacobsen with Eric Moller

Saynatsalo Town Hall, Finland, Alvar Aalto

Kurashiki City Hall, Japan, Kenzo Tange

Toronto City Hall, Canada, Viljo Revell

Boston City Hall, USA, Kallmann, McKinnell and Knowles

Dallas City Hall, USA, I. M. Pei

Mississauga City Hall, Canada, Ed Jones and Michael Kirkland

Borgoricco Town Hall, Italy, Aldo Rossi

Reykjavik City Hall, Iceland, Studio Granda

Valdelaguna Town Hall, Spain, Victor Lopez Cotelo and Carlos Puente Fernandez

The Hague City Hall, the Netherlands, Richard Meier

Iragna Town Hall, Switzerland, Raffaele Cavadini

Murcia City Hall, Spain, Jose Rafael Moneo

London City Hall, UK, Norman Foster…(More)”.

Weather Service prepares to launch prediction model many forecasters don’t trust


Jason Samenow in the Washington Post: “In a month, the National Weather Service plans to launch its “next generation” weather prediction model with the aim of “better, more timely forecasts.” But many meteorologists familiar with the model fear it is unreliable.

The introduction of a model that forecasters lack confidence in matters, considering the enormous impact that weather has on the economy, valued at around $485 billion annually.

The Weather Service announced Wednesday that the model, known as the GFS-FV3 (FV3 stands for Finite-Volume Cubed-Sphere dynamical core), is “tentatively” set to become the United States’ primary forecast model on March 20, pending tests. It is an update to the current version of the GFS (Global Forecast System), popularly known as the American model, which has existed in various forms for more than 30 years….

A concern is that if forecasters cannot rely on the FV3, they will be left to rely only on the European model for their predictions without a credible alternative for comparisons. And they’ll also have to pay large fees for the European model data. Whereas model data from the Weather Service is free, the European Centre for Medium-Range Weather Forecasts, which produces the European model, charges for access.

But there is an alternative perspective, which is that forecasters will just need to adjust to the new model and learn to account for its biases. That is, a little short-term pain is worth the long-term potential benefits as the model improves….

The Weather Service’s parent agency, the National Oceanic and Atmospheric Administration, recently entered an agreement with the National Center for Atmospheric Research to increase collaboration between forecasters and researchers in improving forecast modeling.

In addition, President Trump recently signed into law the Weather Research and Forecast Innovation Act Reauthorization, which establishes the NOAA Earth Prediction Innovation Center, aimed at further enhancing prediction capabilities. But even while NOAA develops relationships and infrastructure to improve the Weather Service’s modeling, the question remains whether the FV3 can meet the forecasting needs of the moment. Until the problems identified are addressed, its introduction could represent a step back in U.S. weather prediction despite a well-intended effort to leap forward….(More).

Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice


Paper by Rashida Richardson, Jason Schultz, and Kate Crawford: “Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used and even occasionally access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.

In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court-monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data used to train or implement these systems across thirteen case studies. We highlight three of these: (1) Chicago, an example of where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices suggests an extremely high risk that dirty data was or will be used in any predictive policing application, and (3) Maricopa County, where, despite extensive evidence of dirty policing practices, a lack of transparency and public accountability surrounding predictive policing inhibits the public from assessing the risks of dirty data within such systems. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. Thus, for any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with skepticism, and mechanisms for the public to examine and reject such systems are imperative….(More)”.
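
One stylized way to see the feedback-loop risk the authors describe is a toy simulation (in Python, with hypothetical numbers not drawn from the paper’s case studies): if patrols are allocated according to where incidents were previously recorded, and incidents are recorded mainly where patrols already are, an initial skew in the data compounds even when true underlying activity is identical across districts.

```python
# Toy sketch of a data feedback loop (hypothetical numbers): two districts with
# identical true activity, but a historical skew in *recorded* incidents.
recorded = {"district_A": 60.0, "district_B": 40.0}
TRUE_RATE = 100  # true incidents per district per year, identical by design

for year in range(1, 6):
    # "Predictive" allocation: patrol share grows superlinearly with recorded
    # history, because top-ranked hotspots receive extra targeted patrols.
    weights = {d: v ** 2 for d, v in recorded.items()}
    total = sum(weights.values())
    share = {d: w / total for d, w in weights.items()}
    # Incidents are recorded mostly where officers are present, so new data
    # reflects patrol placement rather than true differences in activity.
    new_records = {d: TRUE_RATE * share[d] for d in recorded}
    recorded = {d: recorded[d] + new_records[d] for d in recorded}
    print(year, {d: round(s, 2) for d, s in share.items()})
# district_A's patrol share climbs toward 1.0 despite identical true rates.
```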

Should Libraries Be the Keepers of Their Cities’ Public Data?


Linda Poon at CityLab: “In recent years, dozens of U.S. cities have released pools of public data. It’s an effort to improve transparency and drive innovation, and done well, it can succeed at both: Governments, nonprofits, and app developers alike have eagerly gobbled up that data, hoping to improve everything from road conditions to air quality to food delivery.

But what often gets lost in the conversation is the idea of how public data should be collected, managed, and disseminated so that it serves everyone—rather than just a few residents—and so that people’s privacy and data rights are protected. That’s where librarians come in.

“As far as how private and public data should be handled, there isn’t really a strong model out there,” says Curtis Rogers, communications director for the Urban Libraries Council (ULC), an association of leading libraries across North America. “So to have the library as the local institution that is the most trusted, and to give them that responsibility, is a whole new paradigm for how data could be handled in a local government.”

In fact, librarians have long been advocates of digital inclusion and literacy. That’s why, last month, ULC launched a new initiative to give public libraries a leading role in a future with artificial intelligence. They kicked it off with a working group meeting in Washington, D.C., where representatives from libraries in cities like Baltimore, Toronto, Toledo, and Milwaukee met to exchange ideas on how to achieve that through education and by taking on a larger role in data governance.

It’s a broad initiative, and Rogers says they are still in the beginning stages of determining what that role will ultimately look like. But the group will discuss how data should be organized and managed, hash out the potential risks of artificial intelligence, and eventually develop a field-wide framework for how libraries can help drive equitable public data policies in cities.

Already, individual libraries are involved with their city’s data. Chattanooga Public Library (which wasn’t part of the working group, but is a member of ULC) began hosting the city’s open data portal in 2014, turning a traditionally print-centered institution into a community data hub. Since then, the portal has added more than 280 data sets and garnered hundreds of thousands of page views, according to a report for the 2018 fiscal year….

The Toronto Public Library is also in a unique position because it may soon sit inside one of North America’s “smartest” cities. Last month, the city’s board of trade published a 17-page report titled “BiblioTech,” calling for the library to oversee data governance for all smart city projects.

It’s a grand example of just how big the potential is for public libraries. Ryan says the proposal remains just that at the moment, and there are no details yet on what such a model would even look like. She adds that they were not involved in drafting the proposal, and were only asked to provide feedback. But the library is willing to entertain the idea.

Such ambitions would be a large undertaking in the U.S., however, especially for smaller libraries that are already understaffed and under-resourced. According to ULC’s survey of its members, only 23 percent of respondents said they have a staff person designated as the AI lead. A little over a quarter said they even have AI-related educational programming, and just 15 percent report being part of any local or national initiative.

Debbie Rabina, a professor of library science at Pratt Institute in New York, also cautions that putting libraries in charge of data governance has to be carefully thought out. It’s one thing for libraries to teach data literacy and privacy, and to help cities disseminate data. But to go further than that—to have libraries collecting and owning data and to have them assessing who can and can’t use the data—can lead to ethical conflicts and unintended consequences that could erode the public’s trust….(More)”.