Cass R. Sunstein and Lucia A. Reisch in the Oxford Research Encyclopedia of Climate Science (Forthcoming): “Careful attention to choice architecture promises to open up new possibilities for reducing greenhouse gas emissions – possibilities that go well beyond, and that may supplement or complement, the standard tools of economic incentives, mandates, and bans. How, for example, do consumers choose between climate-friendly products or services and alternatives that are potentially damaging to the climate but less expensive? The answer may well depend on the default rule. Indeed, climate-friendly default rules may well be a more effective tool for altering outcomes than large economic incentives. The underlying reasons include the power of suggestion; inertia and procrastination; and loss aversion. If well-chosen, climate-friendly defaults are likely to have large effects in reducing the economic and environmental harms associated with various products and activities. In deciding whether to establish climate-friendly defaults, choice architects (subject to legal constraints) should consider both consumer welfare and a wide range of other costs and benefits. Sometimes that assessment will argue strongly in favor of climate-friendly defaults, particularly when both economic and environmental considerations point in their direction. Notably, surveys in the United States and Europe show that majorities in many nations are in favor of climate-friendly defaults….(More)”
Finding Pathways to More Equitable and Meaningful Public-Scientist Partnerships
For many, citizen science is exciting because of the possibility for more diverse, equitable partnerships in scientific research with outcomes considered meaningful and useful by all, including public participants. This was the focus of a symposium we organized at the 2015 conference of the Citizen Science Association. Here we synthesize points made by symposium participants and our own reflections.
Professional science has a participation problem that is part of a larger equity problem in society. Inequity in science has negative consequences including a failure to address the needs and goals arising from diverse human and social experiences, for example, lack of attention to issues such as environmental contamination that disproportionately impact under-represented populations, and a failure to recognize the pervasive effects of structural racism. Inequity also encourages mistrust of science and scientists. A perception that science is practiced for the sole benefit of dominant social groups is reinforced when investigations of urgent community concerns such as hydraulic fracturing are questioned as being biased endeavors.
Defined broadly, citizen science can challenge and change this inequity and mistrust, but only if it reflects the diversity of publics, and if it doesn’t reinforce existing inequities in science and society. Key will be the way that science is portrayed: Acknowledging the presence of bias in all scientific research and the tools available for minimizing this, and demonstrating the utility of science for local problem solving and policy change. Symposium participants called for reflexive research, mutual learning, and other methods for supporting more equitable engagement in practice and in the activities of the Citizen Science Association…(More)”.
Is artificial intelligence key to dengue prevention?
BreakDengue: “Dengue fever outbreaks are increasing in both frequency and magnitude. Not only that, the number of countries that could potentially be affected by the disease is growing all the time.
This growth has led to renewed efforts to address the disease, and a pioneering Malaysian researcher was recently recognized for his efforts to harness the power of big data and artificial intelligence to accurately predict dengue outbreaks.
Dr. Dhesi Baha Raja received the Pistoia Alliance Life Science Award at King’s College London in April of this year, for developing a disease prediction platform that employs technology and data to give people prior warning of when disease outbreaks occur. The medical doctor and epidemiologist has spent years working to develop AIME (Artificial Intelligence in Medical Epidemiology)…
It relies on a complex algorithm, which analyzes a wide range of data collected by local government and satellite image recognition systems. Over 20 variables, such as weather, wind speed, wind direction, thunderstorms, solar radiation, and rainfall schedules, are included and analyzed, along with population models and geographical terrain. The ultimate result of this intersection between epidemiology, public health, and technology is a map that clearly illustrates the probability and location of the next dengue outbreak.
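To make the idea concrete, here is a deliberately minimal sketch of how environmental variables might be combined into a per-area risk score and mapped to hotspot cells. This is an illustrative toy, not AIME's actual (proprietary) algorithm; the feature names, weights, and threshold are all assumptions.

```python
# Toy risk-scoring sketch - NOT AIME's real model. Feature names,
# weights, and the alert threshold below are illustrative assumptions.

def dengue_risk(features, weights):
    """Weighted sum of normalized (0-1) features, clipped to [0, 1]."""
    score = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return max(0.0, min(1.0, score))

# Hypothetical weights for a handful of the 20+ variables described above.
WEIGHTS = {
    "rainfall": 0.35,     # standing water breeds mosquitoes
    "temperature": 0.25,
    "humidity": 0.20,
    "past_cases": 0.20,   # recent case density near this cell
}

def outbreak_map(grid, threshold=0.6):
    """Return the grid cells (e.g., 400 m squares) whose risk exceeds a threshold."""
    return {cell: dengue_risk(feats, WEIGHTS)
            for cell, feats in grid.items()
            if dengue_risk(feats, WEIGHTS) >= threshold}

grid = {
    "cell_a": {"rainfall": 0.9, "temperature": 0.8, "humidity": 0.7, "past_cases": 0.6},
    "cell_b": {"rainfall": 0.1, "temperature": 0.4, "humidity": 0.3, "past_cases": 0.0},
}
hotspots = outbreak_map(grid)  # only high-risk cells survive the threshold
```

A real system would learn the weights from historical outbreak data rather than fix them by hand, and would validate predictions against subsequently reported cases, which is presumably where the accuracy figures quoted above come from.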
The ground-breaking platform can predict dengue fever outbreaks two to three months in advance, with an accuracy approaching 88.7 per cent and within a 400m radius. Dr. Dhesi has just returned from Rio de Janeiro, where the platform was employed in a bid to fight dengue ahead of this summer’s Olympics. In Brazil, its perceived accuracy was around 84 per cent, whereas in Malaysia it was over 88 per cent – giving it an average accuracy of 86.37 per cent.
The web-based application has been tested in two states within Malaysia, Kuala Lumpur and Selangor, and the first ever mobile app is due to be deployed across Malaysia soon. Once its capability is adequately tested there, it will be rolled out globally. Dr. Dhesi’s team is working closely with mobile digital service provider Webe on this.
Making the app free to download will ensure the service becomes accessible to all, Dr. Dhesi explains.
“With the web-based application, this could only be used by public health officials and agencies. We recognized the need for us to democratize this health service to the community, and the only way to do this is to provide the community with the mobile app.”
This will also enable the gathering of even greater knowledge on the possibility of dengue outbreaks in high-risk areas, as well as monitoring the changing risks as people move to different areas, he adds….(More)”
Open access: All human knowledge is there—so why can’t everybody access it?
Glyn Moody at ArsTechnica: “In 1836, Anthony Panizzi, who later became principal librarian of the British Museum, gave evidence before a parliamentary select committee. At that time, he was only first assistant librarian, but even then he had an ambitious vision for what would one day become the British Library. He told the committee:
I want a poor student to have the same means of indulging his learned curiosity, of following his rational pursuits, of consulting the same authorities, of fathoming the most intricate inquiry as the richest man in the kingdom, as far as books go, and I contend that the government is bound to give him the most liberal and unlimited assistance in this respect.
He went some way to achieving that goal of providing general access to human knowledge. In 1856, after 20 years of labour as Keeper of Printed Books, he had helped boost the British Museum’s collection to over half a million books, making it the largest library in the world at the time. But there was a serious problem: to enjoy the benefits of those volumes, visitors needed to go to the British Museum in London.
Imagine, for a moment, if it were possible to provide access not just to those books, but to all knowledge for everyone, everywhere—the ultimate realisation of Panizzi’s dream. In fact, we don’t have to imagine: it is possible today, thanks to the combined technologies of digital texts and the Internet. The former means that we can make as many copies of a work as we want, for vanishingly small cost; the latter provides a way to provide those copies to anyone with an Internet connection. The global rise of low-cost smartphones means that group will soon include even the poorest members of society in every country.
That is to say, we have the technical means to share all knowledge, and yet we are nowhere near providing everyone with the ability to indulge their learned curiosity as Panizzi wanted it.
What’s stopping us? That’s the central question that the “open access” movement has been asking, and trying to answer, for the last two decades. Although tremendous progress has been made, with more knowledge freely available now than ever before, there are signs that open access is at a critical point in its development, which could determine whether it will ever succeed in realising Panizzi’s plan.
Table of Contents
- The arcana of academic publishing
- What about us?
- In the beginning was arXiv
- Scholarly skywriting
- Opening up the Americas
- Public Library of Science
- Open access is born
- CERN’s SCOAP
- PLoS ONE
- Gold open access
- Hybrid problems
- Green open access
- The empire strikes back
- Diamond open access
- From Aaron Swartz…
- …to Sci-Hub”
Code and the City
Code and the City explores the extent and depth of the ways in which software mediates how people work, consume, communicate, travel and play. The reach of these systems is set to become even more pervasive through efforts to create smart cities: cities that employ ICTs to underpin and drive their economy and governance. Yet, despite the roll-out of software-enabled systems across all aspects of city life, the relationship between code and the city has barely been explored from a critical social science perspective. This collection of essays seeks to fill that gap, and offers an interdisciplinary examination of the relationship between software and contemporary urbanism.
This book will be of interest to those researching or studying smart cities and urban infrastructure….(More)”.
Selected Readings on Data Collaboratives
By Neil Britto, David Sangokoya, Iryna Susha, Stefaan Verhulst and Andrew Young
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data collaboratives was originally published in 2017.
The term data collaborative refers to a new form of collaboration, beyond the public-private partnership model, in which participants from different sectors (including private companies, research institutions, and government agencies) can exchange data to help solve public problems. Several of society’s greatest challenges — from addressing climate change to public health to job creation to improving the lives of children — require greater access to data, more collaboration between public- and private-sector entities, and an increased ability to analyze datasets. In the coming months and years, data collaboratives will be essential vehicles for harnessing the vast stores of privately held data toward the public good.
Selected Reading List (in alphabetical order)
- G. Agaba, et al – Big Data and Positive Social Change in the Developing World: A White Paper for Practitioners and Researchers – a white paper describing the potential of big data, and corporate data in particular, to positively benefit development efforts.
- C. Ansell and A. Gash – Collaborative Governance in Theory and Practice – a journal article describing the emerging practice of public-private partnerships, particularly those built around data sharing.
- Amparo Ballivian and Bill Hoffman – Public-Private Partnerships for Data: Issues Paper for Data Revolution Consultation – an issues paper prepared by the World Bank on financing and sustaining the post-2015 “data revolution” movement through data public-private partnerships.
- Matthew Brack and Tito Castillo – Data Sharing for Public Health: Key Lessons from Other Sectors – a Chatham House report describing the need for data sharing and collaboration for global public health emergencies and potential lessons learned from the commercial sector.
- Yves-Alexandre de Montjoye, Jake Kendall, and Cameron F. Kerry – Enabling Humanitarian Use of Mobile Phone Data – an issues paper from the Brookings Institution on leveraging the benefits of mobile phone data for humanitarian use while minimizing risks to privacy.
- Silja M. Eckartz, Wout J. Hofman, Anne Fleur Van Veenstra – A Decision Model for Data Sharing – a paper proposing a decision model for data sharing arrangements aimed at addressing identified risks and challenges.
- Harlan M. Krumholz et al. – Sea Change in Open Science and Data Sharing Leadership by Industry – a review of industry-led efforts and cross-sector collaborations to share data from clinical trials to inform clinical practice.
- Institute of Medicine (IOM) – Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk – a consensus, peer-reviewed IOM report recommending how to promote responsible clinical trial data sharing and minimize the risks and challenges of sharing.
- Gideon Mann – Private Data and the Public Good – the transcript of a keynote talk on the potential of leveraging corporate data to help solve public problems.
- D. Pastor Escuredo, A. Morales-Guzmán, et al – Flooding through the Lens of Mobile Phone Activity – an analysis of aggregated and anonymized call detail records (CDRs), conducted in collaboration with the UN, the Government of Mexico, academia, and Telefonica, that suggests high potential for using shared telecom data to improve early warning and emergency management mechanisms.
- M. Perkmann and H. Schildt – Open Data Partnerships Between Firms and Universities: The Role of Boundary Organizations – a paper highlighting the advantages of third-party organizations enabling data sharing between industry and academia to uncover new insights to benefit the public good.
- Matt Stempeck – Sharing Data Is A Form Of Corporate Philanthropy – a Harvard Business Review article on data philanthropy, the practice of companies donating data for public good, and its benefits and challenges.
- N. Robin, T. Klein, J. Jütting – Public-Private Partnerships for Statistics: Lessons Learned, Future Steps – a working paper describing how privately held data sources could fill current gaps in the efforts of National Statistics Offices.
- Elizabeth Stuart, Emma Samman, William Avis, and Tom Berliner – The data revolution: finding the missing millions – the Overseas Development Institute’s annual report focused on solutions toward a sustainable data revolution.
- L. Taylor and R. Schroeder – Is Bigger Better? The Emergence of Big Data as a Tool for International Development Policy – a paper describing how data – such as privately held mobile phone data – could improve development policy.
- Willem G. van Panhuis, Proma Paul, Claudia Emerson, John Grefenstette, Richard Wilder, Abraham J. Herbst, David Heymann, and Donald S. Burke – A systematic review of barriers to data sharing in public health – a literature review of potential barriers to public health data sharing.
- Stefaan Verhulst and David Sangokoya – Mapping the Next Frontier of Open Data: Corporate Data Sharing – an essay describing an emerging taxonomy of corporate data sharing for the public good, a trend in which companies share anonymized and aggregated data with third-party users to support data-driven policymaking.
- Stefaan Verhulst and David Sangokoya – Data Collaboratives: Exchanging Data to Improve People’s Lives – an essay on leveraging the potential of data to solve complex public problems through data collaboratives and four critical accelerators towards responsible data sharing and collaboration.
- Stefaan Verhulst, Iryna Susha, Alexander Kostura – Data Collaboratives: Matching Supply of (Corporate) Data to Solve Public Problems – a report describing emerging practice, opportunities, and challenges in data collaboratives as identified at the International Data Responsibility Conference.
- F. Welle Donker, B. van Loenen, A. K. Bregt – Open Data and Beyond – a case study examining the opening of private data by Dutch energy network administrator Liander.
- World Economic Forum – Data-Driven Development: Pathways for Progress – an overview report from the World Economic Forum on the existing data deficit and the value and impact of big data for sustainable development.
Annotated Selected Readings List (in alphabetical order)
Agaba, G., Akindès, F., Bengtsson, L., Cowls, J., Ganesh, M., Hoffman, N., . . . Meissner, F. “Big Data and Positive Social Change in the Developing World: A White Paper for Practitioners and Researchers.” 2014. http://bit.ly/25RRC6N.
- This white paper, produced by “a group of activists, researchers and data experts,” explores the potential of big data to improve development outcomes and spur positive social change in low- and middle-income countries. Using examples, the authors discuss four areas in which the use of big data can impact development efforts:
- Advocating and facilitating by “open[ing] up new public spaces for discussion and awareness building”;
- Describing and predicting through the detection of “new correlations and the surfac[ing] of new questions”;
- Facilitating information exchange through “multiple feedback loops which feed into both research and action”; and
- Promoting accountability and transparency, especially as a byproduct of crowdsourcing efforts aimed at “aggregat[ing] and analyz[ing] information in real time.”
- The authors argue that in order to maximize the potential of big data’s use in development, “there is a case to be made for building a data commons for private/public data, and for setting up new and more appropriate ethical guidelines.”
- They also identify a number of challenges, especially when leveraging data made accessible from a number of sources, including private sector entities, such as:
- Lack of general data literacy;
- Lack of open learning environments and repositories;
- Lack of resources, capacity and access;
- Challenges of sensitivity and risk perception with regard to using data;
- Storage and computing capacity; and
- Externally validating data sources for comparison and verification.
Ansell, C. and Gash, A. “Collaborative Governance in Theory and Practice.” Journal of Public Administration Research and Theory 18 (4), 2008. http://bit.ly/1RZgsI5.
- This article describes collaborative arrangements that include public and private organizations working together and proposes a model for understanding an emergent form of public-private interaction informed by 137 diverse cases of collaborative governance.
- The article suggests factors significant to successful partnering processes and outcomes include:
- Shared understanding of challenges,
- Trust building processes,
- The importance of recognizing seemingly modest progress, and
- Strong indicators of commitment to the partnership’s aspirations and process.
- The authors provide a “contingency theory model” that specifies relationships between different variables that influence outcomes of collaborative governance initiatives. Three “core contingencies” for successful collaborative governance initiatives identified by the authors are:
- Time (e.g., decision-making time afforded to the collaboration);
- Interdependence (e.g., a high degree of interdependence can mitigate negative effects of low trust); and
- Trust (e.g., a higher level of trust indicates a higher probability of success).
Ballivian A, Hoffman W. “Public-Private Partnerships for Data: Issues Paper for Data Revolution Consultation.” World Bank, 2015. Available from: http://bit.ly/1ENvmRJ
- This World Bank report provides a background document on forming public-private partnerships for data in order to inform the UN’s Independent Expert Advisory Group (IEAG) on sustaining a “data revolution” in sustainable development.
- The report highlights the critical position of private companies within the data value chain and reflects on key elements of a sustainable data PPP: “common objectives across all impacted stakeholders, alignment of incentives, and sharing of risks.” In addition, the report describes the risks and incentives of public and private actors, and the principles needed to “build[ing] the legal, cultural, technological and economic infrastructures to enable the balancing of competing interests.” These principles include understanding; experimentation; adaptability; balance; persuasion and compulsion; risk management; and governance.
- Examples of data collaboratives cited in the report include HP Earth Insights, Orange Data for Development Challenges, Amazon Web Services, IBM Smart Cities Initiative, and the Governance Lab’s Open Data 500.
Brack, Matthew, and Tito Castillo. “Data Sharing for Public Health: Key Lessons from Other Sectors.” Chatham House, Centre on Global Health Security. April 2015. Available from: http://bit.ly/1DHFGVl
- The Chatham House report provides an overview on public health surveillance data sharing, highlighting the benefits and challenges of shared health data and the complexity in adapting technical solutions from other sectors for public health.
- The report describes data sharing processes from several perspectives, including in-depth case studies of actual data sharing in practice at the individual, organizational and sector levels. Among the key lessons for public health data sharing, the report strongly highlights the need to harness momentum for action and maintain collaborative engagement: “Successful data sharing communities are highly collaborative. Collaboration holds the key to producing and abiding by community standards, and building and maintaining productive networks, and is by definition the essence of data sharing itself. Time should be invested in establishing and sustaining collaboration with all stakeholders concerned with public health surveillance data sharing.”
- Examples of data collaboratives include H3Africa (a collaboration between NIH and Wellcome Trust) and NHS England’s care.data programme.
de Montjoye, Yves-Alexandre, Jake Kendall, and Cameron F. Kerry. “Enabling Humanitarian Use of Mobile Phone Data.” The Brookings Institution, Issues in Technology Innovation. November 2014. Available from: http://brook.gs/1JxVpxp
- Using Ebola as a case study, the authors describe the value of using private telecom data for uncovering “valuable insights into understanding the spread of infectious diseases as well as strategies into micro-target outreach and driving uptake of health-seeking behavior.”
- The authors highlight the absence of a common legal and standards framework for “sharing mobile phone data in privacy-conscientious ways” and recommend “engaging companies, NGOs, researchers, privacy experts, and governments to agree on a set of best practices for new privacy-conscientious metadata sharing models.”
Eckartz, Silja M., Hofman, Wout J., Van Veenstra, Anne Fleur. “A decision model for data sharing.” Vol. 8653 LNCS. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2014. http://bit.ly/21cGWfw.
- This paper proposes a decision model for data sharing of public and private data based on literature review and three case studies in the logistics sector.
- The authors identify five categories of the barriers to data sharing and offer a decision model for identifying potential interventions to overcome each barrier:
- Ownership. Possible interventions likely require improving trust among those who own the data through, for example, involvement and support from higher management.
- Privacy. Interventions include “anonymization by filtering of sensitive information and aggregation of data,” and access control mechanisms built around identity management and regulated access.
- Economic. Interventions include a model where data is shared only with a few trusted organizations, and yield management mechanisms to ensure negative financial consequences are avoided.
- Data quality. Interventions include identifying additional data sources that could improve the completeness of datasets, and efforts to improve metadata.
- Technical. Interventions include making data available in structured formats and publishing data according to widely agreed upon data standards.
Hoffman, Sharona and Podgurski, Andy. “The Use and Misuse of Biomedical Data: Is Bigger Really Better?” American Journal of Law & Medicine 497, 2013. http://bit.ly/1syMS7J.
- This journal article explores the benefits and, in particular, the risks related to large-scale biomedical databases bringing together health information from a diversity of sources across sectors. Some data collaboratives examined in the piece include:
- MedMining – a company that extracts EHR data, de-identifies it, and offers it to researchers. The data sets that MedMining delivers to its customers include ‘lab results, vital signs, medications, procedures, diagnoses, lifestyle data, and detailed costs’ from inpatient and outpatient facilities.
- Explorys has formed a large healthcare database derived from financial, administrative, and medical records. It has partnered with major healthcare organizations such as the Cleveland Clinic Foundation and Summa Health System to aggregate and standardize health information from ten million patients and over thirty billion clinical events.
- Hoffman and Podgurski note that biomedical databases have many potential uses, with those likely to benefit including: “researchers, regulators, public health officials, commercial entities, lawyers,” as well as “healthcare providers who conduct quality assessment and improvement activities,” regulatory monitoring entities like the FDA, and “litigants in tort cases to develop evidence concerning causation and harm.”
- They argue, however, that risks arise because:
- The data contained in biomedical databases is surprisingly likely to be incorrect or incomplete;
- Systemic biases, arising from both the nature of the data and the preconceptions of investigators, are serious threats to the validity of research results, especially in answering causal questions; and
- Data mining of biomedical databases makes it easier for individuals with political, social, or economic agendas to generate ostensibly scientific but misleading research findings for the purpose of manipulating public opinion and swaying policymakers.
Krumholz, Harlan M., et al. “Sea Change in Open Science and Data Sharing Leadership by Industry.” Circulation: Cardiovascular Quality and Outcomes 7.4. 2014. 499-504. http://1.usa.gov/1J6q7KJ
- This article provides a comprehensive overview of industry-led efforts and cross-sector collaborations in data sharing by pharmaceutical companies to inform clinical practice.
- The article details the types of data being shared and the early activities of GlaxoSmithKline (“in coordination with other companies such as Roche and ViiV”); Medtronic and the Yale University Open Data Access Project; and Janssen Pharmaceuticals (Johnson & Johnson). The article also describes the range of involvement in data sharing among pharmaceutical companies including Pfizer, Novartis, Bayer, AbbVie, Eli Lilly, AstraZeneca, and Bristol-Myers Squibb.
Mann, Gideon. “Private Data and the Public Good.” Medium. May 17, 2016. http://bit.ly/1OgOY68.
- This Medium post from Gideon Mann, the Head of Data Science at Bloomberg, shares his prepared remarks given at a lecture at the City College of New York. Mann argues for the potential benefits of increasing access to private sector data, both to improve research and academic inquiry and also to help solve practical, real-world problems. He also describes a number of initiatives underway at Bloomberg along these lines.
- Mann argues that data generated at private companies “could enable amazing discoveries and research,” but is often inaccessible to those who could put it to those uses. Beyond research, he notes that corporate data could, for instance, benefit:
- Public health – including suicide prevention, addiction counseling and mental health monitoring.
- Legal and ethical questions – especially as they relate to “the role algorithms have in decisions about our lives,” such as credit checks and resume screening.
- Mann recognizes the privacy challenges inherent in private sector data sharing, but argues that it is a common misconception that the only two choices are “complete privacy or complete disclosure.” He believes that flexible frameworks for differential privacy could open up new opportunities for responsibly leveraging data collaboratives.
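Since the annotation above invokes differential privacy as a middle ground between complete privacy and complete disclosure, a brief sketch of its best-known building block, the Laplace mechanism, may help. This is a generic textbook construction, not a description of any framework Bloomberg actually uses; the counts and the epsilon value are illustrative.

```python
# Minimal sketch of the Laplace mechanism for epsilon-differential privacy.
# Illustrative only: epsilon and the query below are assumptions, and this
# does not depict any specific company's data-sharing framework.
import math
import random

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy.

    sensitivity = 1 because adding or removing one individual changes
    a counting query's answer by at most 1. Smaller epsilon means
    stronger privacy but noisier answers.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

noisy = private_count(1042, epsilon=0.5)  # noisy, but useful in aggregate
```

The point of the construction is exactly Mann's: an analyst learns accurate aggregate statistics while any single individual's contribution is hidden in the noise.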
Pastor Escuredo, D., Morales-Guzmán, A. et al, “Flooding through the Lens of Mobile Phone Activity.” IEEE Global Humanitarian Technology Conference, GHTC 2014. Available from: http://bit.ly/1OzK2bK
- This report describes how mobile data can be used to understand the impact of disasters and improve disaster management. The study was conducted in the Mexican state of Tabasco in 2009 as a multidisciplinary, multi-stakeholder consortium involving the UN World Food Programme (WFP), Telefonica Research, Technical University of Madrid (UPM), Digital Strategy Coordination Office of the President of Mexico, and UN Global Pulse.
- Telefonica Research, a division of the major Latin American telecommunications company, provided call detail records covering flood-affected areas for nine months. This data was combined with “remote sensing data (satellite images), rainfall data, census and civil protection data.” The results of the data demonstrated that “analysing mobile activity during floods could be used to potentially locate damaged areas, efficiently assess needs and allocate resources (for example, sending supplies to affected areas).”
- In addition to the results, the study highlighted “the value of a public-private partnership on using mobile data to accurately indicate flooding impacts in Tabasco, thus improving early warning and crisis management.”
Perkmann, M. and Schildt, H. “Open data partnerships between firms and universities: The role of boundary organizations.” Research Policy, 44(5), 2015. http://bit.ly/25RRJ2c.
- This paper discusses the concept of a “boundary organization” in relation to industry-academic partnerships driven by data. Boundary organizations perform mediated revealing, allowing firms to disclose their research problems to a broad audience of innovators and simultaneously minimize the risk that this information would be adversely used by competitors.
- The authors identify two especially important challenges for private firms to enter open data or participate in data collaboratives with the academic research community that could be addressed through more involvement from boundary organizations:
- First is a challenge of maintaining competitive advantage. The authors note that, “the more a firm attempts to align the efforts in an open data research programme with its R&D priorities, the more it will have to reveal about the problems it is addressing within its proprietary R&D.”
- The second involves the misalignment of incentives between the private and academic fields. Perkmann and Schildt argue that a firm seeking to build collaborations around its opened data “will have to provide suitable incentives that are aligned with academic scientists’ desire to be rewarded for their work within their respective communities.”
Robin, N., Klein, T., & Jütting, J. “Public-Private Partnerships for Statistics: Lessons Learned, Future Steps.” OECD. 2016. http://bit.ly/24FLYlD.
- This working paper acknowledges the growing body of work on how different types of data (e.g., telecom data, social media, sensors, and geospatial data) can address data gaps relevant to National Statistical Offices (NSOs).
- Four models of public-private interaction for statistics are described: in-house production of statistics by a data provider for a national statistics office (NSO); transfer of datasets from private entities to NSOs; transfer of data to a third-party provider who manages the NSO’s and private entity’s data; and outsourcing of NSO functions.
- The paper highlights challenges to public-private partnerships involving data (e.g., technical challenges, data confidentiality, risks, and limited incentives for participation); suggests that deliberate, highly structured approaches to such partnerships require enforceable contracts; and emphasizes the trade-off between the specificity and accessibility of data, as well as the importance of pricing mechanisms that reflect the capacity and capability of national statistical offices.
- Case studies referenced in the paper include:
- The in-house analysis of call detail records by mobile network operator (MNO) Telefonica;
- A third-party data provider and steward of travel statistics (Positium);
- The Data for Development (D4D) challenge organized by MNO Orange; and
- Statistics Netherlands’ use of social media to predict consumer confidence.
Stuart, Elizabeth, Samman, Emma, Avis, William, Berliner, Tom. “The data revolution: finding the missing millions.” Overseas Development Institute, 2015. Available from: http://bit.ly/1bPKOjw
- The authors of this report highlight the need for good quality, relevant, accessible and timely data for governments to extend services into underrepresented communities and implement policies towards a sustainable “data revolution.”
- The solutions proposed in the report focus on capacity-building activities of national statistical offices (NSOs), alternative sources of data (including shared corporate data) to address gaps, and the building of strong data management systems.
Taylor, L., & Schroeder, R. “Is bigger better? The emergence of big data as a tool for international development policy.” GeoJournal, 80(4). 2015. 503-518. http://bit.ly/1RZgSy4.
- This journal article describes how privately held data – namely “digital traces” of consumer activity – “are becoming seen by policymakers and researchers as a potential solution to the lack of reliable statistical data on lower-income countries.”
- They focus especially on three categories of data collaborative use cases:
- Mobile data as a predictive tool for issues such as human mobility and economic activity;
- Use of mobile data to inform humanitarian response to crises; and
- Use of born-digital web data as a tool for predicting economic trends, and the implications these have for low- and middle-income countries (LMICs).
- They note, however, that a number of challenges and drawbacks exist for these types of use cases, including:
- Access to private data sources often must be negotiated or bought, “which potentially means substituting negotiations with corporations for those with national statistical offices;”
- The meaning of such data is not always simple or stable, and local knowledge is needed to understand how people are using the technologies in question;
- Bias in proprietary data can be hard to understand and quantify;
- Lack of privacy frameworks; and
- Power asymmetries, wherein “LMIC citizens are unwittingly placed in a panopticon staffed by international researchers, with no way out and no legal recourse.”
van Panhuis, Willem G., Proma Paul, Claudia Emerson, John Grefenstette, Richard Wilder, Abraham J. Herbst, David Heymann, and Donald S. Burke. “A systematic review of barriers to data sharing in public health.” BMC public health 14, no. 1 (2014): 1144. Available from: http://bit.ly/1JOBruO
- The authors of this report provide a “systematic literature review of potential barriers to public health data sharing.” These twenty potential barriers are classified into six categories: “technical, motivational, economic, political, legal and ethical.” In this taxonomy, “the first three categories are deeply rooted in well-known challenges of health information systems for which structural solutions have yet to be found; the last three have solutions that lie in an international dialogue aimed at generating consensus on policies and instruments for data sharing.”
- The authors suggest the need for a “systematic framework of barriers to data sharing in public health” in order to accelerate access and use of data for public good.
Verhulst, Stefaan and Sangokoya, David. “Mapping the Next Frontier of Open Data: Corporate Data Sharing.” In: Gasser, Urs and Zittrain, Jonathan and Faris, Robert and Heacock Jones, Rebekah, “Internet Monitor 2014: Reflections on the Digital World: Platforms, Policy, Privacy, and Public Discourse (December 15, 2014).” Berkman Center Research Publication No. 2014-17. http://bit.ly/1GC12a2
- This essay describes a taxonomy of current corporate data sharing practices for public good: research partnerships; prizes and challenges; trusted intermediaries; application programming interfaces (APIs); intelligence products; and corporate data cooperatives or pooling.
- Examples of data collaboratives include: the Yelp Dataset Challenge, the Digital Ecologies Research Partnership, the BBVA Innova Challenge, Telecom Italia’s Big Data Challenge, NIH’s Accelerating Medicines Partnership and the White House’s Climate Data Partnerships.
- The authors highlight important questions to consider towards a more comprehensive mapping of these activities.
Verhulst, Stefaan and Sangokoya, David, 2015. “Data Collaboratives: Exchanging Data to Improve People’s Lives.” Medium. Available from: http://bit.ly/1JOBDdy
- The essay refers to data collaboratives as a new form of collaboration involving participants from different sectors exchanging data to help solve public problems. These forms of collaborations can improve people’s lives through data-driven decision-making; information exchange and coordination; and shared standards and frameworks for multi-actor, multi-sector participation.
- The essay cites four activities that are critical to accelerating data collaboratives: documenting value and measuring impact; matching public demand and corporate supply of data in a trusted way; training and convening data providers and users; experimenting and scaling existing initiatives.
- Examples of data collaboratives include NIH’s Precision Medicine Initiative; the Mobile Data, Environmental Extremes and Population (MDEEP) Project; and Twitter-MIT’s Laboratory for Social Machines.
Verhulst, Stefaan, Susha, Iryna, Kostura, Alexander. “Data Collaboratives: matching Supply of (Corporate) Data to Solve Public Problems.” Medium. February 24, 2016. http://bit.ly/1ZEp2Sr.
- This piece articulates a set of key lessons learned during a session at the International Data Responsibility Conference focused on identifying emerging practices, opportunities and challenges confronting data collaboratives.
- The authors list a number of privately held data sources that could create positive public impacts if made more accessible in a collaborative manner, including:
- Data for early warning systems to help mitigate the effects of natural disasters;
- Data to help understand human behavior as it relates to nutrition and livelihoods in developing countries;
- Data to monitor compliance with weapons treaties; and
- Data to more accurately measure progress related to the UN Sustainable Development Goals.
- To the end of identifying and expanding on emerging practice in the space, the authors describe a number of current data collaborative experiments, including:
- Trusted Intermediaries: Statistics Netherlands partnered with Vodafone to analyze mobile call data records in order to better understand mobility patterns and inform urban planning.
- Prizes and Challenges: Orange Telecom, which has been a leader in this type of Data Collaboration, provided several examples of the company’s initiatives, such as the use of call data records to track the spread of malaria as well as their experience with Challenge 4 Development.
- Research partnerships: The Data for Climate Action project is an ongoing large-scale initiative incentivizing companies to share their data to help researchers answer particular scientific questions related to climate change and adaptation.
- Sharing intelligence products: JPMorgan Chase shares macroeconomic insights gained by leveraging its data through the newly established JPMorgan Chase Institute.
- In order to capitalize on the opportunities provided by data collaboratives, a number of needs were identified:
- A responsible data framework;
- Increased insight into different business models that may facilitate the sharing of data;
- Capacity to tap into the potential value of data;
- Transparent stock of available data supply; and
- Mapping emerging practices and models of sharing.
Vogel, N., Theisen, C., Leidig, J. P., Scripps, J., Graham, D. H., & Wolffe, G. “Mining mobile datasets to enable the fine-grained stochastic simulation of Ebola diffusion.” Paper presented at the Procedia Computer Science. 2015. http://bit.ly/1TZDroF.
- The paper presents a research study conducted on the basis of the mobile call records shared with researchers in the framework of the Data for Development Challenge by the mobile operator Orange.
- The study discusses the data analysis approach in relation to developing a simulation of Ebola diffusion built around “the interactions of multi-scale models, including viral loads (at the cellular level), disease progression (at the individual person level), disease propagation (at the workplace and family level), societal changes in migration and travel movements (at the population level), and mitigating interventions (at the abstract government policy level).”
- The authors argue that the use of their population, mobility, and simulation models provide more accurate simulation details in comparison to high-level analytical predictions and that the D4D mobile datasets provide high-resolution information useful for modeling developing regions and hard to reach locations.
Welle Donker, F., van Loenen, B., & Bregt, A. K. “Open Data and Beyond.” ISPRS International Journal of Geo-Information, 5(4). 2016. http://bit.ly/22YtugY.
- This research develops a monitoring framework to assess the effects of open (private) data, using a case study of Liander, a Dutch energy network administrator.
- Focusing on the potential impacts of open private energy data – beyond ‘smart disclosure’ where citizens are given information only about their own energy usage – the authors identify three attainable strategic goals:
- Continuously optimize performance on services, security of supply, and costs;
- Improve management of energy flows and insight into energy consumption;
- Help customers save energy and switch over to renewable energy sources.
- The authors propose a seven-step framework for assessing the impacts of Liander data, in particular, and open private data more generally:
- Develop a performance framework describing what the program is about, including the organization’s mission and strategic goals;
- Identify the most important elements, or key performance areas which are most critical to understanding and assessing your program’s success;
- Select the most appropriate performance measures;
- Determine the gaps between what information you need and what is available;
- Develop and implement a measurement strategy to address the gaps;
- Develop a performance report which highlights what you have accomplished and what you have learned;
- Learn from your experiences and refine your approach as required.
- While the authors note that the true impacts of this open private data will likely not come into view in the short term, they argue that, “Liander has successfully demonstrated that private energy companies can release open data, and has successfully championed the other Dutch network administrators to follow suit.”
World Economic Forum, 2015. “Data-driven development: pathways for progress.” Geneva: World Economic Forum. http://bit.ly/1JOBS8u
- This report captures an overview of the existing data deficit and the value and impact of big data for sustainable development.
- The authors of the report focus on four main priorities towards a sustainable data revolution: commercial incentives and trusted agreements with public- and private-sector actors; the development of shared policy frameworks, legal protections and impact assessments; capacity building activities at the institutional, community, local and individual level; and lastly, recognizing individuals as both producers and consumers of data.
Three Things Great Data Storytellers Do Differently
Jake Porway at Stanford Social Innovation Review: “…At DataKind, we use data science and algorithms in the service of humanity, and we believe that communicating about our work using data for social impact is just as important as the work itself. There’s nothing worse than findings gathering dust in an unread report.
We also believe our projects should always start with a question. It’s clear from the questions above and others that the art of data storytelling needs some demystifying. But rather than answering each question individually, I’d like to pose a broader question that can help us get at some of the essentials: What do great data storytellers do differently and what can we learn from them?
1. They answer the most important question: So what?
Knowing how to compel your audience with data is more of an art than a science. Most people still have negative associations with numbers and statistics—unpleasant memories of boring math classes, intimidating technical concepts, or dry accounting. That’s a shame, because the message behind the numbers can be so enriching and enlightening.
The solution? Help your audience understand the “so what,” not the numbers. Ask: Why should someone care about your findings? How does this information impact them? My strong opinion is that most people actually don’t want to look at data. They need to trust that your methods are sound and that you’re reasoning from data, but ultimately they just want to know what it all means for them and what they should do next.
A great example of going straight to the “so what” is this beautiful, interactive visualization by Periscopic about gun deaths. It uses data sparingly but still evokes a very clear anti-gun message….
2. They inspire us to ask more questions.
The best data visualization helps people investigate a topic further, instead of drawing a conclusion for them or persuading them to believe something new.
For example, the nonprofit DC Action for Children was interested in leveraging publicly available data from government agencies and the US Census, as well as DC Action for Children’s own databases, to help policymakers, parents, and community members understand the conditions influencing child well-being in Washington, DC. We helped create a tool that could bring together data in a multitude of forms, and present it in a way that allowed people to delve into the topic themselves and uncover surprising truths, such as the fact that one out of every three kids in DC lives in a neighborhood without a grocery store….
3. They use rigorous analysis instead of just putting numbers on a page.
Data visualization isn’t an end goal; it’s a process. It’s often the final step in a long manufacturing chain, along which we poke, prod, and mold data to create that pretty graph.
Years ago, the New York City Department of Parks & Recreation (NYC Parks) approached us—armed with data about every single tree in the city, including when it was planted and how it was pruned—and wanted to know: Does pruning trees in one year reduce the number of hazardous tree conditions in the following year? This is one of the first things our volunteer data scientists came up with:
This is a visualization of tree density in New York—and it was met with oohs and aahs. It was interactive! You could see where different types of trees lived! It was engaging! But another finding that came out of this work arguably had a greater impact. Brian D’Alessandro, one of our volunteer data scientists, used statistical modeling to help NYC Parks calculate a number: 22 percent. It turns out that if you prune trees in New York, there are 22 percent fewer emergencies on those blocks than on the blocks where you didn’t prune. This number is helping the city become more effective by understanding how to best allocate its resources, and now other urban forestry programs are asking New York how they can do the same thing. There was no sexy visualization, no interactivity—just a rigorous statistical model of the world that’s shaping how cities protect their citizens….(More)”
The trouble with Big Data? It is called the “recency bias”.
One of the problems with such a rate of information increase is that the present moment will always loom far larger than even the recent past. Imagine looking back over a photo album representing the first 18 years of your life, from birth to adulthood. Let’s say that you have two photos for your first two years. Assuming a rate of information increase matching that of the world’s data, you will have an impressive 2,000 photos representing the years six to eight; 200,000 for the years 10 to 12; and a staggering 200,000,000 for the years 16 to 18. That’s more than three photographs for every single second of those final two years.
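As a quick sanity check on the album analogy, the quoted figures follow from one assumption: the photo count increases tenfold every two-year period, starting from 2 photos for years 0–2. A short script (the growth rate and the 2-year second count are assumptions drawn from the passage, not part of the original article) reproduces the numbers:

```python
# Photo-album analogy for exponential data growth: 2 photos in the first
# two-year period, multiplied tenfold each subsequent two-year period.
GROWTH_PER_PERIOD = 10
SECONDS_PER_TWO_YEARS = 2 * 365 * 24 * 60 * 60  # ignoring leap days

def photos_in_period(period_index: int) -> int:
    """Photos taken during a given two-year period (0 = years 0-2)."""
    return 2 * GROWTH_PER_PERIOD ** period_index

for start_year in (0, 6, 10, 16):
    count = photos_in_period(start_year // 2)
    print(f"years {start_year}-{start_year + 2}: {count:,} photos "
          f"({count / SECONDS_PER_TWO_YEARS:.2f} per second)")
```

The final period yields 200,000,000 photos, or roughly 3.2 per second — matching the article’s “more than three photographs for every single second of those final two years.”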
The moment you start looking backwards to seek the longer view, you have far too much of the recent stuff and far too little of the old
This isn’t a perfect analogy with global data, of course. For a start, much of the world’s data increase is due to more sources of information being created by more people, along with far larger and more detailed formats. But the point about proportionality stands. If you were to look back over a record like the one above, or try to analyse it, the more distant past would shrivel into meaningless insignificance. How could it not, with so many times less information available?
Here’s the problem with much of the big data currently being gathered and analysed. The moment you start looking backwards to seek the longer view, you have far too much of the recent stuff and far too little of the old. Short-sightedness is built into the structure, in the form of an overwhelming tendency to over-estimate short-term trends at the expense of history.
To understand why this matters, consider the findings from social science about ‘recency bias’, which describes the tendency to assume that future events will closely resemble recent experience. It’s a version of what is also known as the availability heuristic: the tendency to base your thinking disproportionately on whatever comes most easily to mind. It’s also a universal psychological attribute. If the last few years have seen exceptionally cold summers where you live, for example, you might be tempted to state that summers are getting colder – or that your local climate may be cooling. In fact, you shouldn’t read anything whatsoever into the data. You would need to take a far, far longer view to learn anything meaningful about climate trends. In the short term, you’d be best not speculating at all – but who among us can manage that?
Short-term analyses aren’t only invalid – they’re actively unhelpful and misleading
The same tends to be true of most complex phenomena in real life: stock markets, economies, the success or failure of companies, war and peace, relationships, the rise and fall of empires. Short-term analyses aren’t only invalid – they’re actively unhelpful and misleading. Just look at the legions of economists who lined up to pronounce events like the 2009 financial crisis unthinkable right until it happened. The very notion that valid predictions could be made on that kind of scale was itself part of the problem.
It’s also worth remembering that novelty tends to be a dominant consideration when deciding what data to keep or delete. Out with the old and in with the new: that’s the digital trend in a world where search algorithms are intrinsically biased towards freshness, and where so-called link rot infests everything from Supreme Court decisions to entire social media services. A bias towards the present is structurally engrained in almost all the technology surrounding us, not least thanks to our habit of ditching most of our once-shiny machines after about five years.
What to do? This isn’t just a question of being better at preserving old data – although this wouldn’t be a bad idea, given just how little is currently able to last decades rather than years. More importantly, it’s about determining what is worth preserving in the first place – and what it means to meaningfully cull information in the name of knowledge.
What’s needed is something that I like to think of as “intelligent forgetting”: teaching our tools to become better at letting go of the immediate past in order to keep its larger continuities in view. It’s an act of curation akin to organising a photograph album – albeit with more maths….(More)”
White House Challenges Artificial Intelligence Experts to Reduce Incarceration Rates
Jason Shueh at GovTech: “The U.S. spends $270 billion on incarceration each year, has a prison population of about 2.2 million and an incarceration rate that’s spiked 220 percent since the 1980s. But with the advent of data science, White House officials are asking experts for help.
On Tuesday, June 7, the White House Office of Science and Technology Policy’s Lynn Overmann, who also leads the White House Police Data Initiative, stressed the severity of the nation’s incarceration crisis while asking a crowd of data scientists and artificial intelligence specialists for aid.
“We have built a system that is too large, and too unfair and too costly — in every sense of the word — and we need to start to change it,” Overmann said, speaking at a Computing Community Consortium public workshop.
She argued that the U.S., a country that has the highest number of incarcerated citizens in the world, is in need of systematic reforms, both with data tools to process alleged offenders and at the policy level to ensure fair and measured sentences. As a longtime counselor, advisor and analyst for the Justice Department and at the city and state levels, Overmann said she has studied and witnessed an alarming number of issues in terms of bias and unwarranted punishments.
For instance, she said that statistically, while drug use is about equal between African-Americans and Caucasians, African-Americans are more likely to be arrested and convicted. They also receive longer prison sentences compared to Caucasian inmates convicted of the same crimes….
Data and digital tools can help curb such pitfalls by increasing efficiency, transparency and accountability, she said.
“We think these types of data exchanges [between officials and technologists] can actually be hugely impactful if we can figure out how to take this information and operationalize it for the folks who run these systems,” Overmann noted.
The opportunities to apply artificial intelligence and data analytics, she said, might include using it to improve questions on parole screenings, using it to analyze police body camera footage, and applying it to criminal justice data for legislators and policy workers….
If the private sector is any indication, artificial intelligence and machine learning techniques could be used to interpret this new and vast supply of law enforcement data. In an earlier presentation, Eric Horvitz, the managing director at Microsoft Research, showcased how the company has applied artificial intelligence to vision and language to interpret live video content for the blind. The app, titled SeeingAI, can translate live video footage, captured from an iPhone or a pair of smart glasses, into instant audio messages for the visually impaired. Twitter’s live-streaming app Periscope has employed similar technology to guide users to the right content….(More)”
Digital Keywords: A Vocabulary of Information Society and Culture
Book edited by Benjamin Peters: “In the age of search, keywords increasingly organize research, teaching, and even thought itself. Inspired by Raymond Williams’s 1976 classic Keywords, the timely collection Digital Keywords gathers pointed, provocative short essays on more than two dozen keywords by leading and rising digital media scholars from the areas of anthropology, digital humanities, history, political science, philosophy, religious studies, rhetoric, science and technology studies, and sociology. Digital Keywords examines and critiques the rich lexicon animating the emerging field of digital studies.
This collection broadens our understanding of how we talk about the modern world, particularly of the vocabulary at work in information technologies. Contributors scrutinize each keyword independently: for example, the recent pairing of digital and analog is separated, while classic terms such as community, culture, event, memory, and democracy are treated in light of their historical and intellectual importance. Metaphors of the cloud in cloud computing and the mirror in data mirroring combine with recent and radical uses of terms such as information, sharing, gaming, algorithm, and internet to reveal previously hidden insights into contemporary life. Bookended by a critical introduction and a list of over two hundred other digital keywords, these essays provide concise, compelling arguments about our current mediated condition.
Digital Keywords delves into what language does in today’s information revolution and why it matters…(More)”.