Two Laws On Expertise That Make Government Dumber


Beth Noveck in Forbes: “With the announcement of Microsoft’s acquisition of LinkedIn last week comes the prospect of new tech products that can help us visualize more than ever before what we know and can do. But the buzz about what this might mean for our ability to find a job in the 21st century (and for privacy) obscures a tantalizing possibility for improving government.

Imagine if the Department of Health and Human Services needed to craft a new policy on hospitals. With better tools for automating the identification of expertise from our calendar, email, and document data (Microsoft), our education history and credentials (LinkedIn), and skills acquired from training (Lynda), it might become possible to match the demand for know-how about healthcare to the supply of those people who have worked in the sector, have degrees in public health, or who have demonstrated passion and know-how through their volunteer experience.
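
If such matching were built, the core computation could be as simple as scoring the overlap between a policy need and people’s profiles. Here is a minimal sketch in Python, assuming entirely hypothetical profile fields and weights (none of this reflects an actual Microsoft or LinkedIn API):

```python
# A minimal sketch of expertise matching. The profile fields, weights, and
# data below are illustrative assumptions, not any real product's schema.

def expertise_score(profile, need):
    """Score how well a person's profile matches a policy need."""
    score = 0
    score += 3 * len(set(profile["sectors"]) & set(need["sectors"]))      # work history
    score += 2 * len(set(profile["degrees"]) & set(need["degrees"]))      # credentials
    score += 1 * len(set(profile["volunteering"]) & set(need["topics"]))  # demonstrated interest
    return score

need = {"sectors": {"healthcare"}, "degrees": {"public health"}, "topics": {"hospitals"}}
profiles = [
    {"name": "A", "sectors": {"healthcare"}, "degrees": {"public health"}, "volunteering": {"hospitals"}},
    {"name": "B", "sectors": {"finance"}, "degrees": {"economics"}, "volunteering": set()},
]
ranked = sorted(profiles, key=lambda p: expertise_score(p, need), reverse=True)
print([p["name"] for p in ranked])  # best-matched people for the hospital policy task first
```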

The technological possibility of matching people to public opportunities to participate in the life of our democracy in ways that relate to our competencies and interests is impeded, however, by two decades-old statutes that prohibit the federal government from taking advantage of the possibilities of technology to tap into the expertise of the American people to solve our hardest problems.

The Federal Advisory Committee Act of 1972 (FACA) and the Paperwork Reduction Act of 1980 (PRA) entrench the committee and consultation practices of an era before the Internet. They make it illegal for wider networks of more diverse people with innovative ideas to convene to help solve public problems, and they need to be updated for the 21st century….(More)”

Big health data: the need to earn public trust


Tjeerd-Pieter van Staa et al. in the BMJ: “Better use of large-scale health data has the potential to benefit patient care, public health, and research. The handling of such data, however, raises concerns about patient privacy, even when the risks of disclosure are extremely small.

The problems are illustrated by recent English initiatives trying to aggregate and improve the accessibility of routinely collected healthcare and related records, sometimes loosely referred to as “big data.” One such initiative, care.data, was set to link and provide access to health and social care information from different settings, including primary care, to facilitate the planning and provision of healthcare and to advance health science.1 Data were to be extracted from all primary care practices in England. A related initiative, the Clinical Practice Research Datalink (CPRD), evolved from the General Practice Research Database (GPRD). CPRD was intended to build on GPRD by linking patients’ primary care records to hospital data, around 50 disease registries and clinical audits, genetic information from UK Biobank, and even the loyalty cards of a large supermarket chain, creating an integrated data repository and linked services for all of England that could be sold to universities, drug companies, and non-healthcare industries. Care.data has now been abandoned and CPRD has stalled. The flawed implementation of care.data plus earlier examples of data mismanagement have made privacy issues a mainstream public concern. We look at what went wrong and how future initiatives might gain public support….(More)”
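
One technical building block such linkage initiatives depend on is joining records from different settings without exposing raw identifiers. Below is a minimal sketch of keyed pseudonymization, assuming a hypothetical secret key held by a trusted third party; CPRD’s actual linkage pipeline is considerably more elaborate:

```python
# A minimal sketch of privacy-aware record linkage: records are joined on a
# keyed pseudonym rather than on raw identifiers. The key, identifiers, and
# records below are invented for illustration.
import hashlib, hmac

SECRET_KEY = b"held-by-a-trusted-third-party"  # never shared with data users

def pseudonym(nhs_number: str) -> str:
    """Derive a stable pseudonym from a patient identifier."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

primary_care = {pseudonym("943-476-5919"): {"diagnosis": "type 2 diabetes"}}
hospital     = {pseudonym("943-476-5919"): {"admission": "2014-03-02"}}

# Linkage happens on pseudonyms, so analysts never see the identifier itself.
linked = {pid: {**primary_care[pid], **hospital[pid]}
          for pid in primary_care.keys() & hospital.keys()}
print(linked)
```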

The big health data sale


Philip Hunter at the EMBO Journal: “Personal health and medical data are a valuable commodity for a number of sectors, from public health agencies to academic researchers to pharmaceutical companies. Moreover, “big data” companies are increasingly interested in tapping into this resource. One such firm is Google, whose subsidiary DeepMind was granted access to medical records on 1.6 million patients who had been treated at some time by three major hospitals in London, UK, in order to develop a diagnostic app. The public discussion it raised was just another sign of the long-running tensions between drug companies, privacy advocates, regulators, legislators, insurers and patients about privacy, consent, rights of access and ownership of medical data that is generated in pharmacies, hospitals and doctors’ surgeries. In addition, the rapid growth of eHealth will generate even more health data from mobile phones, portable diagnostic devices and other sources.

These developments are driving efforts to create a legal framework for protecting confidentiality, controlling communication and governing access rights to data. Existing data protection and human rights laws are being modified to account for personal medical and health data in parallel to the campaign for greater transparency and access to clinical trial data. Healthcare agencies in particular will have to revise their procedures for handling medical or research data that is associated with patients.

Google’s foray into medical data demonstrates the key role of health agencies, in this case the Royal Free NHS Trust, which operates the three London hospitals that granted DeepMind access to patient data. Royal Free approached DeepMind with a request to develop an app for detecting acute kidney injury, which, according to the Trust, affects more than one in six inpatients….(More)”

What Governments Can Learn From Airbnb And the Sharing Economy


In Fortune: “….Despite some regulators’ fears, the sharing economy may not result in the decline of regulation but rather in its opposite, providing a basis upon which society can develop more rational, ethical, and participatory models of regulation. But what regulation looks like, as well as who actually creates and enforces the regulation, is also bound to change.

There are three emerging models – peer regulation, self-regulatory organizations, and data-driven delegation – that promise a regulatory future for the sharing economy best aligned with society’s interests. In the adapted book excerpt that follows, I explain how the third of these approaches, delegating enforcement of regulations to companies that store critical data on consumers, can help mitigate some of the biases Airbnb guests may face, and why this is a superior alternative to the “open data” approach of transferring consumer information to city and state regulators.

Consider a different problem — of collecting hotel occupancy taxes from hundreds of thousands of Airbnb hosts rather than from a handful of corporate hotel chains. The delegation of tax collection to Airbnb, something a growing number of cities are experimenting with, has a number of advantages. It is likely to yield higher tax revenues and greater compliance than a system where hosts are required to register directly with the government, something occasional hosts seem reluctant to do. It also sidesteps privacy concerns resulting from mandates that digital platforms like Airbnb turn over detailed user data to the government. There is also significant opportunity for the platform to build credibility as it starts to take on quasi-governmental roles like this.

There is yet another advantage, and the one I believe will be the most significant in the long run. It asks a platform to leverage its data to ensure compliance with a set of laws in a manner geared towards delegating responsibility to the platform. You might say that the task in question here — computing tax owed, collecting, and remitting it — is technologically trivial. True. But I like this structure because of the potential it represents. It could be a precursor for much more exciting delegated possibilities.
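
To make the delegation concrete, here is a minimal sketch of that “trivial” task: the platform computes tax per booking and remits one aggregate sum per city, so the government never sees per-host booking records. The tax rates and bookings are invented for illustration:

```python
# A minimal sketch of delegated occupancy-tax collection. Rates, cities, and
# bookings are hypothetical; real tax rules are more involved.

TAX_RATES = {"San Francisco": 0.14, "Portland": 0.115}  # assumed rates, not actual law

bookings = [
    {"city": "San Francisco", "nightly_rate": 120.0, "nights": 3},
    {"city": "San Francisco", "nightly_rate": 95.0,  "nights": 2},
    {"city": "Portland",      "nightly_rate": 80.0,  "nights": 4},
]

remittance = {}
for b in bookings:
    tax = b["nightly_rate"] * b["nights"] * TAX_RATES[b["city"]]
    remittance[b["city"]] = remittance.get(b["city"], 0.0) + tax

# The city receives aggregate revenue, not per-host booking records.
for city, total in remittance.items():
    print(f"{city}: ${total:.2f}")
```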

For a couple of decades now, companies of different kinds have been mining the large sets of “data trails” customers provide through their digital interactions. This generates insights of business and social importance. One such effort we are all familiar with is credit card fraud detection. When an unusual pattern of activity is detected, you get a call from your bank’s security team. Sometimes your card is blocked temporarily. The enthusiasm of these digital security systems is sometimes a nuisance, but it stems from your credit card company using sophisticated machine learning techniques to identify patterns that prior experience has told it are associated with a stolen card. It saves billions of dollars in taxpayer and corporate funds by detecting and blocking fraudulent activity swiftly.
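
As a toy illustration of this kind of pattern detection, the sketch below flags a charge that is a statistical outlier against a card’s history or that comes from an unfamiliar city. Real systems use learned models over far richer features; the data and threshold here are invented:

```python
# A toy illustration of pattern-based fraud flagging against per-card history.
from statistics import mean, stdev

history = [12.5, 30.0, 22.4, 18.0, 25.5, 14.2, 27.9, 19.6]  # past charges ($)

def looks_fraudulent(amount, usual_cities, city, history, threshold=3.0):
    """Flag a charge that is a statistical outlier or from an unseen city."""
    z = abs(amount - mean(history)) / stdev(history)
    return z > threshold or city not in usual_cities

print(looks_fraudulent(23.0, {"Boston"}, "Boston", history))   # False: routine charge
print(looks_fraudulent(1850.0, {"Boston"}, "Lagos", history))  # True: blocked pending a call
```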

A more recent visible example of the power of mining large data sets of customer interaction came in 2008, when Google engineers announced that they could predict flu outbreaks using data collected from Google searches and track their spread in real time, providing information well ahead of what was available from the Centers for Disease Control and Prevention’s (CDC) own tracking systems. The Google system’s performance deteriorated after a couple of years, but its impact on public perception of what might be possible using “big data” was immense.
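
The underlying idea can be shown in a few lines: fit a simple linear model from search volume to the CDC’s reported illness rate, then “nowcast” the current week from search data alone. The numbers below are invented, and Google’s actual model aggregated many query terms:

```python
# A stripped-down version of the Google Flu Trends idea: regress the CDC's
# influenza-like-illness (ILI) rate on flu-related search volume, then
# nowcast a week the CDC has not yet reported. All data is invented.

search_volume = [1.2, 2.5, 3.1, 4.0, 5.2, 6.8]   # normalized weekly query volume
cdc_ili_rate  = [0.9, 1.6, 2.0, 2.7, 3.4, 4.5]   # reported ILI rate (%)

n = len(search_volume)
mx = sum(search_volume) / n
my = sum(cdc_ili_rate) / n
slope = sum((x - mx) * (y - my) for x, y in zip(search_volume, cdc_ili_rate)) \
        / sum((x - mx) ** 2 for x in search_volume)
intercept = my - slope * mx

this_week_volume = 7.5  # available immediately, unlike official reports
print(f"nowcast ILI rate: {slope * this_week_volume + intercept:.2f}%")
```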

It seems highly unlikely that such a system would have emerged if Google had been asked to hand over anonymized search data to the CDC. In fact, there would probably have been widespread public backlash on privacy grounds. Besides, this capability emerged organically from within Google partly because the company has one of the highest concentrations of computer science and machine learning talent in the world.

Similar approaches hold great promise as a regulatory approach for sharing economy platforms. Consider the issue of discriminatory practices. There has long been anecdotal evidence that some yellow cabs in New York discriminate against some nonwhite passengers. There have been similar concerns that such behavior may start to manifest on ridesharing platforms and in other peer-to-peer markets for accommodation and labor services.

For example, a 2014 study by Benjamin Edelman and Michael Luca of Harvard suggested that African American hosts might have lower pricing power than white hosts on Airbnb. While the study did not conclusively establish that the difference is due to guests discriminating against African American hosts, a follow-up study suggested that guests with “distinctively African American names” were less likely to receive favorable responses for their requests to Airbnb hosts. This research raises a red flag about the need for vigilance as the lines between personal and professional blur.

One solution would be to apply machine-learning techniques to identify patterns associated with discriminatory behavior. No doubt, many platforms are already using such systems….(More)”
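
A first screening step along these lines might not even need machine learning: a platform could simply test whether acceptance rates differ across guest groups by more than chance would allow. A minimal sketch with invented counts, using a two-proportion z-test:

```python
# A minimal sketch of screening platform data for the kind of disparity the
# Edelman-Luca studies describe. The counts are invented; a significant gap
# would not prove discrimination, only flag where to look more closely.
from math import sqrt, erf

accepted = {"group_a": 420, "group_b": 310}   # requests accepted by hosts
requests = {"group_a": 1000, "group_b": 1000}  # requests sent

p1 = accepted["group_a"] / requests["group_a"]
p2 = accepted["group_b"] / requests["group_b"]
p = (accepted["group_a"] + accepted["group_b"]) / (requests["group_a"] + requests["group_b"])
se = sqrt(p * (1 - p) * (1 / requests["group_a"] + 1 / requests["group_b"]))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal approximation

print(f"acceptance gap: {p1 - p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```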

There aren’t any rules on how social scientists use private data. Here’s why we need them.


 at SSRC: “The politics of social science access to data are shifting rapidly in the United States as in other developed countries. It used to be that states were the most important source of data on their citizens, economy, and society. States needed to collect and aggregate large amounts of information for their own purposes. They gathered this directly—e.g., through censuses of individuals and firms—and also constructed relevant indicators. Sometimes state agencies helped to fund social science projects in data gathering, such as the National Science Foundation’s funding of the American National Election Survey over decades. While scholars such as James Scott and John Brewer disagreed about the benefits of state data gathering, they recognized the state’s primary role.

In this world, the politics of access to data were often the politics of engaging with the state. Sometimes the state was reluctant to provide information, either for ethical reasons (e.g. the privacy of its citizens) or self-interest. However, democratic states did typically provide access to standard statistical series and the like, and where they did not, scholars could bring pressure to bear on them. This led to well-understood rules about the common availability of standard data for many research questions and built the foundations for standard academic practices. It was relatively easy for scholars to criticize each other’s work when they were drawing on common sources. This had costs—scholars tended to ask the kinds of questions that readily available data allowed them to ask—but also significant benefits. In particular, it made research more easily reproducible.

We are now moving to a very different world. On the one hand, open data initiatives in government are making more data available than in the past (albeit often without much in the way of background resources or documentation). On the other, for many research purposes, large firms such as Google or Facebook (or even Apple) have much better data than the government. The new universe of private data is reshaping social science research in some ways that are still poorly understood. Here are some of the issues that we need to think about:…(More)”

Bridging data gaps for policymaking: crowdsourcing and big data for development


 for the DevPolicyBlog: “…By far the biggest innovation in data collection is the ability to access and analyse (in a meaningful way) user-generated data. This is data that is generated from forums, blogs, and social networking sites, where users purposefully contribute information and content in a public way, but also from everyday activities that inadvertently or passively provide data to those that are able to collect it.

User-generated data can help identify user views and behaviour to inform policy in a timely way, rather than relying solely on traditional data collection techniques (census, household surveys, stakeholder forums, focus groups, etc.), which are often cumbersome, very costly, untimely, and in many cases require some form of approval or support from government.

It might seem at first that user-generated data has limited usefulness in a development context, since generating it usually depends on the internet, and internet availability remains limited in many places. However, U-Report is one example of accessing user-generated data independently of the internet.

U-Report was initiated by UNICEF Uganda in 2011 and is a free SMS-based platform where Ugandans are able to register as “U-Reporters” and on a weekly basis give their views on topical issues (mostly related to health, education, and access to social services) or participate in opinion polls. As an example, Figure 1 shows the results from a U-Report poll on whether polio vaccinators came to U-Reporter houses to immunise all children under 5 in Uganda, broken down by district. Presently, there are more than 300,000 U-Reporters in Uganda and more than one million U-Reporters across the 24 countries that now have U-Report. As an indication of its potential impact on policymaking, UNICEF claims that every Member of Parliament in Uganda is signed up to receive U-Report statistics.

Figure 1: U-Report Uganda poll results
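
Here is a minimal sketch of how raw SMS replies could be tallied into district-level results like those behind Figure 1; the message format, districts, and replies are invented:

```python
# A minimal sketch of aggregating SMS poll replies by district.
# All replies below are hypothetical.

replies = [
    ("Gulu", "YES"), ("Gulu", "NO"), ("Gulu", "YES"),
    ("Kampala", "YES"), ("Kampala", "YES"), ("Lira", "NO"),
]

tallies = {}
for district, answer in replies:
    counts = tallies.setdefault(district, {"YES": 0, "NO": 0})
    counts[answer] += 1

for district, counts in sorted(tallies.items()):
    total = counts["YES"] + counts["NO"]
    print(f"{district}: {counts['YES'] / total:.0%} report vaccinators visited")
```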

U-Report and other platforms such as Ushahidi (which supports, for example, I PAID A BRIBE, Watertracker, election monitoring, and crowdmapping) facilitate crowdsourcing of data where users contribute data for a specific purpose. In contrast, “big data” is a broader concept because the purpose of using the data is generally independent of the reasons why the data was generated in the first place.

Big data for development is a new phrase that we will probably hear a lot more (see here [pdf] and here). The United Nations Global Pulse, for example, supports a number of innovation labs which work on projects that aim to discover new ways in which data can help better decision-making. Many forms of “big data” are unstructured (free-form and text-based rather than table- or spreadsheet-based) and so a number of analytical techniques are required to make sense of the data before it can be used.

Measures of Twitter activity, for example, can be a real-time indicator of food price crises in Indonesia [pdf] (see Figure 2 below which shows the relationship between food-related tweet volume and food inflation: note that the large volume of tweets in the grey highlighted area is associated with policy debate on cutting the fuel subsidy rate) or provide a better understanding of the drivers of immunisation awareness. In these examples, researchers “text-mine” Twitter feeds by extracting tweets related to topics of interest and categorising text based on measures of sentiment (positive, negative, anger, joy, confusion, etc.) to better understand opinions and how they relate to the topic of interest. For example, Figure 3 shows the sentiment of tweets related to vaccination in Kenya over time and the dates of important vaccination related events.

Figure 2: Plot of monthly food-related tweet volume and official food price statistics

Figure 3: Sentiment of vaccine-related tweets in Kenya
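
The text-mining step described above can be caricatured in a few lines: filter tweets by topic keywords, then score them against a sentiment lexicon. The studies cited use far larger lexicons or trained classifiers; the tweets, keywords, and lexicon here are invented:

```python
# A minimal sketch of topic filtering plus lexicon-based sentiment scoring.
# Keyword lists and tweets are illustrative assumptions.

FOOD_TERMS = {"food", "rice", "price", "harga"}          # topic filter
LEXICON = {"cheap": 1, "good": 1, "expensive": -1, "angry": -1}

tweets = [
    "rice price so expensive this month, angry",
    "food prices good and cheap at the market",
    "watching football tonight",
]

for text in tweets:
    words = set(text.lower().split())
    if not words & FOOD_TERMS:
        continue  # off-topic: excluded from the food-price series
    score = sum(LEXICON.get(w, 0) for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label}: {text}")
```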

Another big data example is the use of mobile phone data to monitor the movement of populations in Senegal in 2013. The data can help identify changes in the mobility patterns of vulnerable population groups and thereby provide an early warning system to inform humanitarian response efforts.
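
A minimal sketch of the idea behind such mobility analysis: infer each user’s most-frequent cell tower per day from anonymized call-detail records and count day-over-day changes. The records are invented, and real analyses take far more care with anonymization and coverage bias:

```python
# A minimal sketch of movement detection from anonymized call-detail records.
# The (user, day, tower) tuples below are hypothetical.
from collections import Counter

cdr = [
    ("u1", 1, "dakar_3"), ("u1", 1, "dakar_3"), ("u1", 2, "thies_1"),
    ("u2", 1, "dakar_7"), ("u2", 2, "dakar_7"),
]

def home_tower(user, day):
    """Return the user's most-frequent tower on a given day."""
    towers = [t for (u, d, t) in cdr if u == user and d == day]
    return Counter(towers).most_common(1)[0][0] if towers else None

users = {u for (u, _, _) in cdr}
moved = sum(1 for u in users if home_tower(u, 1) != home_tower(u, 2))
print(f"{moved}/{len(users)} users changed location between day 1 and day 2")
```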

The development of mobile banking, too, offers the potential to generate a staggering amount of data relevant for development research and for informing policy decisions. However, it also highlights the public-good nature of data collected by public and private sector institutions and the reliance researchers have on those institutions for access to the data. Building trust and a reputation for being able to manage privacy and commercial issues will be a major challenge for researchers in this regard….(More)”

Priorities for the National Privacy Research Strategy


James Kurose and Keith Marzullo at the White House: “Vast improvements in computing and communications are creating new opportunities for improving life and health, eliminating barriers to education and employment, and enabling advances in many sectors of the economy. The promise of these new applications frequently comes from their ability to create, collect, process, and archive information on a massive scale.

However, the rapid increase in the quantity of personal information that is being collected and retained, combined with our increased ability to analyze and combine it with other information, is creating concerns about privacy. When information about people and their activities can be collected, analyzed, and repurposed in so many ways, it can create new opportunities for crime, discrimination, inadvertent disclosure, embarrassment, and harassment.

This Administration has been a strong champion of initiatives to improve the state of privacy, such as the “Consumer Privacy Bill of Rights” proposal and the creation of the Federal Privacy Council. Similarly, the White House report Big Data: Seizing Opportunities, Preserving Values highlights the need for large-scale privacy research, stating: “We should dramatically increase investment for research and development in privacy-enhancing technologies, encouraging cross-cutting research that involves not only computer science and mathematics, but also social science, communications and legal disciplines.”

Today, we are pleased to release the National Privacy Research Strategy. Research agencies across government participated in the development of the strategy, reviewing existing Federal research activities in privacy-enhancing technologies, soliciting inputs from the private sector, and identifying priorities for privacy research funded by the Federal Government. The National Privacy Research Strategy calls for research along a continuum of challenges, from how people understand privacy in different situations and how their privacy needs can be formally specified, to how these needs can be addressed, to how to mitigate and remediate the effects when privacy expectations are violated. This strategy proposes the following priorities for privacy research:

  • Foster a multidisciplinary approach to privacy research and solutions;
  • Understand and measure privacy desires and impacts;
  • Develop system design methods that incorporate privacy desires, requirements, and controls;
  • Increase transparency of data collection, sharing, use, and retention;
  • Assure that information flows and use are consistent with privacy rules;
  • Develop approaches for remediation and recovery; and
  • Reduce privacy risks of analytical algorithms.

With this strategy, our goal is to produce knowledge and technology that will enable individuals, commercial entities, and the Federal Government to benefit from technological advancements and data use while proactively identifying and mitigating privacy risks. Following the release of this strategy, we are also launching a Federal Privacy R&D Interagency Working Group, which will lead the coordination of the Federal Government’s privacy research efforts. Among the group’s first public activities will be to host a workshop to discuss the strategic plan and explore directions of follow-on research. It is our hope that this strategy will also inspire parallel efforts in the private sector….(More)”

Reforms to improve U.S. government accountability


Alexander B. Howard and Patrice McDermott in Science: “Five decades after the United States first enacted the Freedom of Information Act (FOIA), Congress has voted to make the first major reforms to the statute since 2007. President Lyndon Johnson signed the first FOIA on 4 July 1966, enshrining in law the public’s right to access information from executive branch government agencies. Scientists and others around the world can use the FOIA to learn what the U.S. government has done in its policies and practices. The proposed reforms should be a net benefit to public understanding of the scientific process and knowledge, by increasing scientists’ access to archival materials and reducing the likelihood of science and scientists being suppressed by official secrecy or bureaucracy.

Although the FOIA has been important for accountability, reform is sorely needed. An analysis of the 15 federal government agencies that received the most FOIA requests found poor to abysmal compliance rates (1, 2). In 2016, the Associated Press found that the Obama Administration had set a new record for unfulfilled FOIA requests (3). Although that has to be considered in the context of a rise in request volume without commensurate increases in resources to address them, researchers have found that most agencies simply ignore routine requests for travel schedules (4). An audit of 165 federal government agencies found that only 40% complied with the E-FOIA Act of 1996; just 67 of them had online libraries that were regularly updated with a substantial number of documents released under FOIA (5).

In the face of growing concerns about compliance, FOIA reform was one of the few recent instances of bicameral bipartisanship in Congress, with the House and Senate each passing bills this spring with broad support. Now that Congress has moved to send the Senate bill on to the president to sign into law, implementation of specific provisions will bear close scrutiny, including the potential impact of disclosure upon scientists who work in or with government agencies (6). The proposed revisions to the FOIA statute would improve how government discloses information to the public, while leaving intact exemptions for privacy, proprietary information, deliberative documents, and national security.

Features of Reforms

One of the major reforms in the House and Senate bills was to codify the “presumption of openness” outlined by President Obama the day after he took office in January 2009 when he declared that FOIA should be administered with a clear presumption: In the face of doubt, “openness” would prevail. This presumption of openness was affirmed by U.S. Attorney General Holder in March 2009. Although these declarations have had limited effect in the agencies (as described above), codifying these reforms into law is crucial not only to ensure that this remains executive branch policy after this president leaves office but also to provide requesters with legal force beyond an executive order….(More)”

Privacy concerns in smart cities


Liesbet van Zoonen in Government Information Quarterly: “In this paper a framework is constructed to hypothesize if and how smart city technologies and urban big data produce privacy concerns among the people in these cities (as inhabitants, workers, visitors, and otherwise). The framework is built on two recurring dimensions in research about people’s concerns about privacy: one dimension represents that people perceive particular data as more personal and sensitive than others; the other represents that people’s privacy concerns differ according to the purpose for which data is collected, with the contrast between service and surveillance purposes most paramount. These two dimensions produce a 2 × 2 framework that hypothesizes which technologies and data applications in smart cities are likely to raise people’s privacy concerns, ranging from hardly any concern (impersonal data, service purpose) to controversy (personal data, surveillance purpose). Specific examples from the city of Rotterdam are used to further explore and illustrate the academic and practical usefulness of the framework. It is argued that the general hypothesis of the framework offers clear directions for further empirical research and theory building about privacy concerns in smart cities, and that it provides a sensitizing instrument for local governments to identify the absence, presence, or emergence of privacy concerns among their citizens….(More)”
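
The 2 × 2 framework lends itself to a simple lookup. The sketch below is one reading of it: the two corner labels come from the abstract, while the intermediate cell labels and the example applications are illustrative assumptions, not taken from the paper:

```python
# One reading of van Zoonen's 2x2 framework as a lookup table: data
# sensitivity crossed with collection purpose yields a hypothesized level of
# concern. The "some concern" cells and example applications are assumptions.

CONCERN = {
    ("impersonal", "service"):      "hardly any concern",
    ("impersonal", "surveillance"): "some concern",
    ("personal",   "service"):      "some concern",
    ("personal",   "surveillance"): "controversy",
}

applications = [
    ("smart waste-bin fill sensors", "impersonal", "service"),
    ("facial recognition in public squares", "personal", "surveillance"),
]

for name, data_kind, purpose in applications:
    print(f"{name}: {CONCERN[(data_kind, purpose)]}")
```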

Crowdsourcing privacy policy analysis: Potential, challenges and best practices


Paper: “Privacy policies are supposed to provide transparency about a service’s data practices and help consumers make informed choices about which services to entrust with their personal information. In practice, those privacy policies are typically long and complex documents that are largely ignored by consumers. Even for regulators and data protection authorities, privacy policies are difficult to assess at scale. Crowdsourcing offers the potential to scale the analysis of privacy policies with microtasks, for instance by assessing how specific data practices are addressed in privacy policies or extracting information about data practices of interest, which can then facilitate further analysis or be provided to users in more effective notice formats. Crowdsourcing the analysis of complex privacy policy documents to non-expert crowdworkers poses particular challenges. We discuss best practices, lessons learned and research challenges for crowdsourcing privacy policy analysis….(More)”
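
One common microtask pattern in this setting is redundant annotation with majority-vote aggregation, flagging low-agreement policy segments for expert review. A minimal sketch, with invented segments, answers, and agreement threshold:

```python
# A minimal sketch of aggregating crowdworker answers about privacy policy
# segments by majority vote. Segments, answers, and the 0.8 threshold are
# illustrative assumptions, not the paper's protocol.
from collections import Counter

question = "Does this segment say data may be shared with third parties?"
annotations = {
    "segment_12": ["yes", "yes", "yes", "no", "yes"],
    "segment_13": ["no", "yes", "unclear", "no", "yes"],
}

for segment, answers in annotations.items():
    label, votes = Counter(answers).most_common(1)[0]
    agreement = votes / len(answers)
    flag = "" if agreement >= 0.8 else "  <- low agreement, route to expert"
    print(f"{segment}: {label} ({agreement:.0%}){flag}")
```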