New format of congressional documents creates transparency, opportunity


Colby Hochmuth in FedScoop:  “Today, in a major win for the open data community, the Government Printing Office partnered with the Library of Congress to make summaries for House bills available in XML format. Bill summaries can now be downloaded in bulk from GPO’s Federal Digital System or FDsys. This format allows the data to be repurposed and reused by third-party providers, whether for mobile apps, analytical purposes or data mashups.

“This is the result of an effective ongoing collaboration between the Library and GPO to provide legislative information in modern, widely used formats,” said Librarian of Congress James Billington. “We are pleased the popular bill summaries, which provide objective descriptions of complex legislative text, are now available through the Federal Digital System.”
The movement for more transparent government data gained traction with the House Appropriations Committee and its support of the bulk-data task force established by the House.
House bills, the Federal Register, the Code of Federal Regulations and various executive branch documents are already available in downloadable XML format, but this latest development is different. These bill summaries are prepared by LOC’s Congressional Research Service and describe the key provisions of a piece of legislation, as well as the potential implications the legislation may have on current federal programs and laws.”
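Because the summaries are published as bulk XML, a third-party developer can repurpose them with very little code. A minimal Python sketch of parsing such a file (the element names below are illustrative assumptions, not the actual FDsys/CRS schema):

```python
import xml.etree.ElementTree as ET

# Hypothetical bill-summary record, standing in for a file downloaded
# in bulk from FDsys. The tag names are assumptions for illustration.
sample = """<billSummaries congress="113">
  <item>
    <billNumber>HR 1234</billNumber>
    <title>Example Data Transparency Act</title>
    <summaryText>Directs the agency to publish data in open formats.</summaryText>
  </item>
</billSummaries>"""

def parse_summaries(xml_text):
    """Return one dict per bill summary item in the XML document."""
    root = ET.fromstring(xml_text)
    records = []
    for item in root.findall("item"):
        records.append({
            "bill": item.findtext("billNumber"),
            "title": item.findtext("title"),
            "summary": item.findtext("summaryText"),
        })
    return records

records = parse_summaries(sample)
print(records[0]["bill"])  # HR 1234
```

From here the records could feed a mobile app, a search index, or any of the mashups the article anticipates.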

Open data: Strategies for impact


Harvey Lewis at Open Government Partnership Blog: “When someone used to talk about “data for good”, chances are they were wondering whether the open data stream they relied on was still going to be available in the future. Similarly, “good with data” meant that experienced data scientists were being sought for a deeply technical project. Both interpretations reflect a state of being rather than of doing: data being around for good; people being good with data.
Important though these considerations are, they miss what should be an obvious and more profound alternative.
Right now, organisations like DataKind™ and Periscopic, and many other entrepreneurs, innovators and established social enterprises that use open data, see things differently. They are using these straplines to shake up the status quo, to demonstrate that data-driven businesses can do well by doing good.
And it’s the confluence of the many national and international open data initiatives, and the growing number of technically able, socially responsible organisations that provide the opportunity for social as well as economic growth. The World Wide Web Foundation now estimates that there are over 370 open data initiatives around the world. Collectively, and through portals such as Quandl and datacatalogs.org, these initiatives have made a staggering quantity of data available – in excess of eight million data sets. In addition, several successful and data-rich companies are entering into a new spirit of philanthropy – by donating their data for the public good. There’s no doubt that opening up data signals a new willingness by governments and businesses all over the world to engage with their citizens and customers in a new and more transparent way.
The challenge, though, is ensuring that these popular national and international open data initiatives are cohesive and impactful. And that the plans drawn up by public sector bodies to release specific data sets are based on the potential the data has to achieve a beneficial outcome, not – or, at least, not solely – based on the cost or ease of publication. Despite the best of intentions, only a relatively small proportion of open data sets now available has the latent potential to create significant economic or social impact. In our push to open up data and government, it seems that we may have fallen into the trap of believing the ends are the same as the means; that effect is the same as cause…”

How Open Data Are Turned into Services?


New Paper by Muriel Foulonneau, Sébastien Martin, Slim Turki: “The Open Data movement has mainly been a data provision movement. The release of Open Data is usually motivated by (i) government transparency (citizen access to government data), (ii) the development of services by third parties for the benefit of citizens and companies (typically smart city approach), or (iii) the development of new services that stimulate the economy. The success of the Open Data movement and its return on investment should therefore be assessed among other criteria by the number and impact of the services created based on those data. In this paper, we study the development of services based on open data and means to make the data opening process more effective.”

Give the Data to the People


Harlan Krumholz in the New York Times: “LAST week, Johnson & Johnson announced that it was making all of its clinical trial data available to scientists around the world. It has hired my group, Yale University Open Data Access Project, or YODA, to fully oversee the release of the data. Everything in the company’s clinical research vaults, including unpublished raw data, will be available for independent review.

This is an extraordinary donation to society, and a reversal of the industry’s traditional tendency to treat data as an asset that would lose value if exposed to public scrutiny.

Today, more than half of the clinical trials in the United States, including many sponsored by academic and governmental institutions, are not published within two years of their completion. Often they are never published at all. The unreported results, not surprisingly, are often those in which a drug failed to perform better than a placebo. As a result, evidence-based medicine is, at best, based on only some of the evidence. One of the most troubling implications is that full information on a drug’s effects may never be discovered or released.

Even when studies are published, the actual data are usually not made available. End users of research — patients, doctors and policy makers — are implicitly told by a single group of researchers to “take our word for it.” They are often forced to accept the report without the prospect of other independent scientists’ reproducing the findings — a violation of a central tenet of the scientific method.

To be fair, the decision to share data is not easy. Companies worry that their competitors will benefit, that lawyers will take advantage, that incompetent scientists will misconstrue the data and come to mistaken conclusions. Researchers feel ownership of the data and may be reluctant to have others use it. So Johnson & Johnson, as well as companies like GlaxoSmithKline and Medtronic that have made more cautious moves toward transparency, deserve much credit. The more we share data, however, the more we find that many of these problems fail to materialize….

This program doesn’t mean that just anyone can gain access to the data without disclosing how they intend to use it. We require those who want the data to submit a proposal and identify their research team, funding and any conflicts of interest. They have to complete a short course on responsible conduct and sign an agreement that restricts them to their proposed research question. Most important, they must agree to share whatever they find. And we exclude applicants who seek data for commercial or legal purposes. Our intent is not to be tough gatekeepers, but to ensure that the data are used in a transparent way and contribute to overall scientific knowledge.

There are many benefits to this kind of sharing. It honors the contributions of the subjects and scientists who participated in the research. It is proof that an organization, whether it is part of industry or academia, wants to play a role as a good global citizen. It demonstrates that the organization has nothing to hide. And it enables scientists to use the data to learn new ways to help patients. Such an approach can even teach a company like Johnson & Johnson something it didn’t know about its own products.

For the good of society, this is a breakthrough that should be replicated throughout the research world.”

Why SayIt is (partly) a statement about the future of Open Data


Tom Steinberg from mySociety: “This is where SayIt comes in, as an example of a relatively low-cost approach to making sure that the next generation of government IT systems do produce Open Data.
SayIt is a newly launched open source tool for publishing transcripts of trials, debates, interviews and so on. It publishes them online in a way that matches modern expectations about how stuff should work on the web – responsive, searchable and so on. It’s being built as a Poplus Component, which means it’s part of an international network of groups collaborating on shared technologies. Here’s JK Rowling being interviewed, published via SayIt.
But how does this little tool relate to the business of getting governments to release more Open Data? Well, SayIt isn’t just about publishing data, it’s about making it too – in a few months we’ll be sharing an authoring interface for making new transcripts from whatever source a user has access to.
We hope that having iterated and improved this authoring interface, SayIt can become the tool of choice for public sector transcribers, replacing whatever tool they use today (almost certainly Word). Then, if they use SayIt instead of Word to make a transcript, it will produce new, instantly-online Open Data every time they use it….
But we can’t expect the public sector to use a tool like SayIt to make new Open Data unless it is cheaper, better and less burdensome than whatever they’re using now. We can’t – quite simply – expect to sell government procurement officers a new product mainly on the virtues of Open Data.  This means the tough task of persuading government employees that there is a new tool that is head-and-shoulders better than Excel or Word for certain purposes: formidable, familiar products that are much better than their critics like to let on.
So in order for SayIt to replace the current tools used by any current transcriber, it’s going to have to be really, really good. And really trustworthy. And it’s going to have to be well marketed. And that’s why we’ve chosen to build SayIt as an international, open source collaboration – as a Poplus Component. Because we think that without the billions of dollars it takes to compete with Microsoft, our best hope is to develop very narrow tools that do 0.01% of what Word does, but which do that one thing really really well. And our key strategic advantage, other than the trust that comes with Open Source and Open Standards, is the energy of the global civic hacking and government IT reform sector. SayIt is far more likely to succeed if it has ideas and inputs from contributors from around the world.

Regardless of whether or not SayIt ever succeeds in penetrating inside governments, this post is about an idea that such an approach represents. The idea is that people can advance the Open Data agenda not just by lobbying, but also by building and popularising tools that mean that data is born open in the first place. I hope this post will encourage more people to work on such tools, either on your own, or via collaborations like Poplus.”

Selected Readings on Personal Data: Security and Use


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of personal data was originally published in 2014.

Advances in technology have greatly increased the potential for policymakers to utilize the personal data of large populations for the public good. However, the proliferation of vast stores of useful data has also given rise to a variety of legislative, political, and ethical concerns surrounding the privacy and security of citizens’ personal information, both in terms of collection and usage. Challenges regarding the governance and regulation of personal data must be addressed in order to assuage individuals’ concerns regarding the privacy, security, and use of their personal information.

Selected Reading List (in alphabetical order)

Annotated Selected Reading List (in alphabetical order)

Cavoukian, Ann. “Personal Data Ecosystem (PDE) – A Privacy by Design Approach to an Individual’s Pursuit of Radical Control.” Privacy by Design, October 15, 2013. https://bit.ly/2S00Yfu.

  • In this paper, Cavoukian describes the Personal Data Ecosystem (PDE), an “emerging landscape of companies and organizations that believe individuals should be in control of their personal data, and make available a growing number of tools and technologies to enable this control.” She argues that, “The right to privacy is highly compatible with the notion of PDE because it enables the individual to have a much greater degree of control – “Radical Control” – over their personal information than is currently possible today.”
  • To ensure that the PDE reaches its privacy-protection potential, Cavoukian argues that it must practice The 7 Foundational Principles of Privacy by Design:
    • Proactive not Reactive; Preventative not Remedial
    • Privacy as the Default Setting
    • Privacy Embedded into Design
    • Full Functionality – Positive-Sum, not Zero-Sum
    • End-to-End Security – Full Lifecycle Protection
    • Visibility and Transparency – Keep it Open
    • Respect for User Privacy – Keep it User-Centric

Kirkham, T., S. Winfield, S. Ravet, and S. Kellomaki. “A Personal Data Store for an Internet of Subjects.” In 2011 International Conference on Information Society (i-Society), 92–97. http://bit.ly/1alIGuT.

  • This paper examines various factors involved in the governance of personal data online, and argues for a shift from “current service-oriented applications where often the service provider is in control of the person’s data” to a person-centric architecture where the user is at the center of personal data control.
  • The paper delves into an “Internet of Subjects” concept of Personal Data Stores, and focuses on implementation of such a concept on personal data that can be characterized as either “By Me” or “About Me.”
  • The paper also presents examples of how a Personal Data Store model could allow users to both protect and present their personal data to external applications, affording them greater control.

OECD. The 2013 OECD Privacy Guidelines. 2013. http://bit.ly/166TxHy.

  • This report is indicative of the “important role in promoting respect for privacy as a fundamental value and a condition for the free flow of personal data across borders” played by the OECD for decades. The guidelines – revised in 2013 for the first time since being drafted in 1980 – are seen as “[t]he cornerstone of OECD work on privacy.”
  • The OECD framework is built around eight basic principles for personal data privacy and security:
    • Collection Limitation
    • Data Quality
    • Purpose Specification
    • Use Limitation
    • Security Safeguards
    • Openness
    • Individual Participation
    • Accountability

Ohm, Paul. “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization.” UCLA Law Review 57, 1701 (2010). http://bit.ly/18Q5Mta.

  • This article explores the implications of the “astonishing ease” with which scientists have demonstrated the ability to “reidentify” or “deanonymize” supposedly anonymous personal information.
  • Rather than focusing exclusively on whether personal data is “anonymized,” Ohm offers five factors for governments and other data-handling bodies to use for assessing the risk of privacy harm: data-handling techniques, private versus public release, quantity, motive and trust.

Polonetsky, Jules and Omer Tene. “Privacy in the Age of Big Data: A Time for Big Decisions.” Stanford Law Review Online 64 (February 2, 2012): 63. http://bit.ly/1aeSbtG.

  • In this article, Tene and Polonetsky argue that, “The principles of privacy and data protection must be balanced against additional societal values such as public health, national security and law enforcement, environmental protection, and economic efficiency. A coherent framework would be based on a risk matrix, taking into account the value of different uses of data against the potential risks to individual autonomy and privacy.”
  • To achieve this balance, the authors believe that, “policymakers must address some of the most fundamental concepts of privacy law, including the definition of ‘personally identifiable information,’ the role of consent, and the principles of purpose limitation and data minimization.”

Shilton, Katie, Jeff Burke, Deborah Estrin, Ramesh Govindan, Mark Hansen, Jerry Kang, and Min Mun. “Designing the Personal Data Stream: Enabling Participatory Privacy in Mobile Personal Sensing.” TPRC, 2009. http://bit.ly/18gh8SN.

  • This article argues that the Codes of Fair Information Practice, which have served as a model for data privacy for decades, do not take into account a world of distributed data collection, nor the realities of data mining and easy, almost uncontrolled, dissemination.
  • The authors suggest “expanding the Codes of Fair Information Practice to protect privacy in this new data reality. An adapted understanding of the Codes of Fair Information Practice can promote individuals’ engagement with their own data, and apply not only to governments and corporations, but software developers creating the data collection programs of the 21st century.”
  • In order to achieve this change in approach, the paper discusses three foundational design principles: primacy of participants, data legibility, and engagement of participants throughout the data life cycle.

Big Data, Privacy, and the Public Good


Forthcoming book and website by Julia Lane, Victoria Stodden, Stefan Bender, and Helen Nissenbaum (editors): “The overarching goal of the book is to identify ways in which vast new sets of data on human beings can be collected, integrated, and analysed to improve evidence based decision making while protecting confidentiality. …
Massive amounts of new data on human beings can now be accessed and analyzed. Much has been made of the many uses of such data for pragmatic purposes, including selling goods and services, winning political campaigns, and identifying possible terrorists. Yet “big data” can also be harnessed to serve the public good: scientists can use new forms of data to do research that improves the lives of human beings; federal, state and local governments can use data to improve services and reduce taxpayer costs; and public organizations can use information to advocate for public causes.
Much has also been made of the privacy and confidentiality issues associated with access. A survey of statisticians at the 2013 Joint Statistical Meeting found that the majority thought consumers should worry about privacy issues, and that an ethical framework should be in place to guide data scientists.  Yet there are many unanswered questions. What are the ethical and legal requirements for scientists and government officials seeking to serve the public good without harming individual citizens?  What are the rules of engagement?  What are the best ways to provide access while protecting confidentiality? Are there reasonable mechanisms to compensate citizens for privacy loss?
The goal of this book is to answer some of these questions.  The book’s authors paint an intellectual landscape that includes the legal, economic and statistical context necessary to frame the many privacy issues, including the value to the public of data access.   The authors also identify core practical approaches that use new technologies to simultaneously maximize the utility of data access while minimizing information risk. As is appropriate for such a new and evolving field, each chapter also identifies important questions that require future research.
The work in this book is also intended to be accessible to an audience broader than the academy. In addition to informing the public, we hope that the book will be useful to people trying to provide data access while protecting confidentiality in their roles as data custodians for federal, state and local agencies, or as decision makers on institutional review boards.”
 

Visual Insights: A Practical Guide to Making Sense of Data


New book by Katy Börner and David E. Polley: “In the age of Big Data, the tools of information visualization offer us a macroscope to help us make sense of the avalanche of data available on every subject. This book offers a gentle introduction to the design of insightful information visualizations. It is the only book on the subject that teaches nonprogrammers how to use open code and open data to design insightful visualizations. Readers will learn to apply advanced data mining and visualization techniques to make sense of temporal, geospatial, topical, and network data.

The book, developed for use in an information visualization MOOC, covers data analysis algorithms that enable extraction of patterns and trends in data, with chapters devoted to “when” (temporal data), “where” (geospatial data), “what” (topical data), and “with whom” (networks and trees); and to systems that drive research and development. Examples of projects undertaken for clients include an interactive visualization of the success of game player activity in World of Warcraft; a visualization of 311 number adoption that shows the diffusion of non-emergency calls in the United States; a return on investment study for two decades of HIV/AIDS research funding by NIAID; and a map showing the impact of the HiveNYC Learning Network.
Visual Insights will be an essential resource on basic information visualization techniques for scholars in many fields, students, designers, or anyone who works with data.”

Also check out the Information Visualization MOOC at http://ivmooc.cns.iu.edu/
 

It’s the Neoliberalism, Stupid: Why instrumentalist arguments for Open Access, Open Data, and Open Science are not enough.


Eric Kansa at LSE Blog: “…However, I’m increasingly convinced that advocating for openness in research (or government) isn’t nearly enough. There’s been too much of an instrumentalist justification for open data and open access. Many advocates talk about how it will cut costs and speed up research and innovation. They also argue that it will make research more “reproducible” and transparent so interpretations can be better vetted by the wider community. Advocates for openness, particularly in open government, also talk about the wonderful commercial opportunities that will come from freeing research…
These are all very big policy issues, but they need to be asked if the Open Movement really stands for reform and not just a further expansion and entrenchment of Neoliberalism. I’m using the term “Neoliberalism” because it resonates as a convenient label for describing how and why so many things seem to suck in Academia. Exploding student debt, vanishing job security, increasing compensation for top administrators, expanding bureaucracy and committee work, corporate management methodologies (Taylorism), and intensified competition for ever-shrinking public funding all fall under the general rubric of Neoliberalism. Neoliberal universities primarily serve the needs of commerce. They need to churn out technically skilled human resources (made desperate for any work by high loads of debt) and easily monetized technical advancements….
“Big Data,” “Data Science,” and “Open Data” are now hot topics at universities. Investments are flowing into dedicated centers and programs to establish institutional leadership in all things related to data. I welcome the new Data Science effort at UC Berkeley to explore how to make research data professionalism fit into the academic reward systems. That sounds great! But will these new data professionals have any real autonomy in shaping how they conduct their research and build their careers? Or will they simply be part of an expanding class of harried and contingent employees, hired and fired through the whims of creative destruction fueled by the latest corporate-academic hype-cycle?
Researchers, including #AltAcs and “data professionals”, need a large measure of freedom. Miriam Posner’s discussion about the career and autonomy limits of Alt-academic-hood helps highlight these issues. Unfortunately, there’s only one area where innovation and failure seem survivable, and that’s the world of the start-up. I’ve noticed how the “Entrepreneurial Spirit” gets celebrated lots in this space. I’m guilty of basking in it myself (10 years as a quasi-independent #altAc in a nonprofit I co-founded!).
But in the current Neoliberal setting, being an entrepreneur requires a singular focus on monetizing innovation. PeerJ and Figshare are nice, since they have business models that are less “evil” than Elsevier’s. But we need to stop fooling ourselves that the only institutions and programs that we can and should sustain are the ones that can turn a profit. For every PeerJ or Figshare (and these are ultimately just as dependent on continued public financing of research as any grant-driven project), we also need more innovative organizations like the Internet Archive, wholly dedicated to the public good and not the relentless pressure to commoditize everything (especially their patrons’ privacy). We need to be much more critical about the kinds of programs, organizations, and financing strategies we (as a society) can support. I raised the political economy of sustainability issue at a recent ThatCamp and hope to see more discussion.
In reality, so much of the Academy’s dysfunctions are driven by our new Gilded Age’s artificial scarcity of money. With wealth concentrated in so few hands, it is very hard to finance risk taking and entrepreneurialism in the scholarly community, especially to finance any form of entrepreneurialism that does not turn a profit in a year or two.
Open Access and Open Data would make so much more of a difference if we had the same kind of dynamism in the academic and nonprofit sector as we have in the for-profit start-up sector. After all, Open Access and Open Data can be key enablers to allow much broader participation in research and education. However, broader participation still needs to be financed: you cannot eat an open access publication. We cannot gloss over this key issue.
We need more diverse institutional forms so that researchers can find (or found) the kinds of organizations that best channel their passions into contributions that enrich us all. We need more diverse sources of financing (new foundations, better financed Kickstarters) to connect innovative ideas with the capital needed to see them implemented. Such institutional reforms will make life in the research community much more livable, creative, and dynamic. It would give researchers more options for diverse and varied career trajectories (for-profit or not-for-profit) suited to their interests and contributions.
Making the case to reinvest in the public good will require a long, hard slog. It will be much harder than the campaign for Open Access and Open Data because it will mean contesting Neoliberal ideologies and constituencies that are deeply entrenched in our institutions. However, the constituencies harmed by Neoliberalism, particularly the student community now burdened by over $1 trillion in debt and the middle class more generally, are much larger and very much aware that something is badly amiss. As we celebrate the impressive strides made by the Open Movement in the past year, it’s time we broaden our goals to tackle the needs for wider reform in the financing and organization of research and education.
This post originally appeared on Digging Digitally and is reposted under a CC-BY license.”

Big Data’s Dangerous New Era of Discrimination


Michael Schrage in HBR blog: “Congratulations. You bought into Big Data and it’s paying off Big Time. You slice, dice, parse and process every screen-stroke, clickstream, Like, tweet and touch point that matters to your enterprise. You now know exactly who your best — and worst — customers, clients, employees and partners are. Knowledge is power. But what kind of power does all that knowledge buy?
Big Data creates Big Dilemmas. Greater knowledge of customers creates new potential and power to discriminate. Big Data — and its associated analytics — dramatically increase both the dimensionality and degrees of freedom for detailed discrimination. So where, in your corporate culture and strategy, does value-added personalization and segmentation end and harmful discrimination begin?
Let’s say, for example, that your segmentation data tells you the following:
Your most profitable customers by far are single women between the ages of 34 and 55, closely followed by “happily married” women with at least one child. Divorced women are slightly more profitable than “never marrieds.” Gay males — single and in relationships — are also disproportionately profitable. The “sweet spot” is urban and 28 to 50. These segments collectively account for roughly two-thirds of your profitability. (Unexpected factoid: Your most profitable customers are overwhelmingly Amazon Prime subscribers. What might that mean?)
Going more granular, as Big Data does, offers even sharper ethno-geographic insight into customer behavior and influence:

  • Single Asian, Hispanic, and African-American women with urban post codes are most likely to complain about product and service quality to the company. Asian and Hispanic complainers happy with resolution/refund tend to be in the top quintile of profitability. African-American women do not.
  • Suburban Caucasian mothers are most likely to use social media to share their complaints, followed closely by Asian and Hispanic mothers. But if resolved early, they’ll promote the firm’s responsiveness online.
  • Gay urban males receiving special discounts and promotions are the most effective at driving traffic to your sites.

My point here is that these data are explicit, compelling and undeniable. But how should sophisticated marketers and merchandisers use them?
Campaigns, promotions and loyalty programs targeting women and gay males seem obvious. But should Asian, Hispanic and white females enjoy preferential treatment over African-American women when resolving complaints? After all, they tend to be both more profitable and measurably more willing to effectively use social media. Does it make more marketing sense encouraging African-American female customers to become more social media savvy? Or are resources better invested in getting more from one’s best customers? Similarly, how much effort and ingenuity should go into making more gay male customers better social media evangelists? What kinds of offers and promotions could go viral on their networks?…
Of course, the difference between price discrimination and discrimination positively correlated with gender, ethnicity, geography, class, personality and/or technological fluency is vanishingly small. Indeed, the entire epistemological underpinning of Big Data for business is that it cost-effectively makes informed segmentation and personalization possible…..
But the main source of concern won’t be privacy, per se — it will be whether and how companies and organizations like your own use Big Data analytics to justify their segmentation/personalization/discrimination strategies. The more effective Big Data analytics are in profitably segmenting and serving customers, the more likely those algorithms will be audited by regulators or litigators.
Tomorrow’s Big Data challenge isn’t technical; it’s whether managements have algorithms and analytics that are both fairly transparent and transparently fair. Big Data champions and practitioners had better be discriminating about how discriminating they want to be.”