Paper by Florian Schaub, Travis D. Breaux, and Norman Sadeh: “Privacy policies are supposed to provide transparency about a service’s data practices and help consumers make informed choices about which services to entrust with their personal information. In practice, those privacy policies are typically long and complex documents that are largely ignored by consumers. Even for regulators and data protection authorities privacy policies are difficult to assess at scale. Crowdsourcing offers the potential to scale the analysis of privacy policies with microtasks, for instance by assessing how specific data practices are addressed in privacy policies or extracting information about data practices of interest, which can then facilitate further analysis or be provided to users in more effective notice formats. Crowdsourcing the analysis of complex privacy policy documents to non-expert crowdworkers poses particular challenges. We discuss best practices, lessons learned and research challenges for crowdsourcing privacy policy analysis….(More)”
Directory of crowdsourcing websites
Directory by Donelle McKinley: “…Here is just a selection of websites for crowdsourcing cultural heritage. Websites are actively crowdsourcing unless indicated with an asterisk…The directory is organized by the type of crowdsourcing process involved, using the typology for crowdsourcing in the humanities developed by Dunn & Hedges (2012). In their study they explain that, “a process is a sequence of tasks, through which an output is produced by operating on an asset”. For example, the Your Paintings Tagger website is for the process of tagging, which is an editorial task. The assets being tagged are images, and the output of the project is metadata, which makes the images easier to discover, retrieve and curate.
Transcription
Alexander Research Library, Wanganui Library * (NZ) Transcription of index cards from 1840 to 2002.
Ancient Lives*, University of Oxford (UK) Transcription of papyri from Greco-Roman Egypt.
AnnoTate, Tate Britain (UK) Transcription of artists’ diaries, letters and sketchbooks.
Decoding the Civil War, The Huntington Library, Abraham Lincoln Presidential Library and Museum & North Carolina State University (USA). Transcription and decoding of Civil War telegrams from the Thomas T. Eckert Papers.
DIY History, University of Iowa Libraries (USA) Transcription of historical documents.
Emigrant City, New York Public Library (USA) Transcription of handwritten mortgage and bond ledgers from the Emigrant Savings Bank records.
Field Notes of Laurence M. Klauber, San Diego Natural History Museum (USA) Transcription of field notes by the celebrated herpetologist.
Notes from Nature Transcription of natural history museum records.
Measuring the ANZACs, Archives New Zealand and Auckland War Memorial Museum (NZ). Transcription of first-hand accounts of NZ soldiers in WW1.
Old Weather (UK) Transcription of Royal Navy ships logs from the early twentieth century.
Scattered Seeds, Heritage Collections, Dunedin Public Libraries (NZ) Transcription of index cards for Dunedin newspapers 1851-1993.
Shakespeare’s World, Folger Shakespeare Library (USA) & Oxford University Press (UK). Transcription of handwritten documents by Shakespeare’s contemporaries. Identification of words that have yet to be recorded in the authoritative Oxford English Dictionary.
Smithsonian Digital Volunteers Transcription Center (USA) Transcription of multiple collections.
Transcribe Bentham, University College London (UK) Transcription of historical manuscripts by philosopher and reformer Jeremy Bentham.
What’s on the menu? New York Public Library (USA) Transcription of historical restaurant menus. …
In Your Neighborhood, Who Draws the Map?
Lizzie MacWillie at NextCity: “…By crowdsourcing neighborhood boundaries, residents can put themselves on the map in critical ways.
Why does this matter? Neighborhoods are the smallest organizing element in any city. A strong city is made up of strong neighborhoods, where the residents can effectively advocate for their needs. A neighborhood boundary marks off a particular geography and calls out important elements within that geography: architecture, street fabric, public spaces and natural resources, to name a few. Putting that line on a page lets residents begin to identify needs and set priorities. Without boundaries, there’s no way to know where to start.
Knowing a neighborhood’s boundaries and unique features allows a group to list its assets. What buildings have historic significance? What shops and restaurants exist? It also helps highlight gaps: What’s missing? What does the neighborhood need more of? What is there already too much of? Armed with this detailed inventory, residents can approach a developer, city council member or advocacy group with hard numbers on what they know their neighborhood needs.
With a precisely defined geography, residents living in a food desert can point to developable vacant land that’s ideal for a grocery store. They can also cite how many potential grocery shoppers live within the neighborhood.
In addition to being able to organize within the neighborhood, staking a claim to a neighborhood, putting it on a map and naming it, can help a neighborhood control its own narrative and tell its story — so someone else doesn’t.
Our neighborhood map project was started in part as a response to consistent misidentification of Dallas neighborhoods by local media, which appears to be particularly common in stories about majority-minority neighborhoods. This kind of oversight can contribute to a false narrative about a place, especially when the news is about crime or violence, and takes away from residents’ ability to tell their story and shape their neighborhood’s future. Even worse is when neighborhoods are completely left off of the map, as if they have no story at all to tell.
Cities across the country — including Dallas, Boston, New York, Chicago, Portland and Seattle — have crowdsourced mapping projects people can contribute to. For cities lacking such an effort, tools like Google Map Maker have been effective….(More)”.
Selected Readings on Data Collaboratives
By Neil Britto, David Sangokoya, Iryna Susha, Stefaan Verhulst and Andrew Young
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data collaboratives was originally published in 2017.
The term data collaborative refers to a new form of collaboration, beyond the public-private partnership model, in which participants from different sectors (including private companies, research institutions, and government agencies) can exchange data to help solve public problems. Several of society’s greatest challenges — from addressing climate change to public health to job creation to improving the lives of children — require greater access to data, more collaboration between public- and private-sector entities, and an increased ability to analyze datasets. In the coming months and years, data collaboratives will be essential vehicles for harnessing the vast stores of privately held data toward the public good.
Selected Reading List (in alphabetical order)
- G. Agaba, et al – Big data and Positive Social Change in the Developing World: A White Paper for Practitioners and Researchers – a white paper describing the potential of big data, and corporate data in particular, to positively benefit development efforts.
- C. Ansell and A. Gash – Collaborative Governance in Theory and Practice – a journal article describing the emerging practice of public-private partnerships, particularly those built around data sharing.
- Amparo Ballivian and Bill Hoffman – Public-Private Partnerships for Data: Issues Paper for Data Revolution Consultation – an issues paper prepared by the World Bank on financing and sustaining the post-2015 “data revolution” movement through public-private partnerships for data.
- Matthew Brack and Tito Castillo – Data Sharing for Public Health: Key Lessons from Other Sectors – a Chatham House report describing the need for data sharing and collaboration for global public health emergencies and potential lessons learned from the commercial sector.
- Yves-Alexandre de Montjoye, Jake Kendall, and Cameron F. Kerry – Enabling Humanitarian Use of Mobile Phone Data – an issues paper from the Brookings Institution on leveraging the benefits of mobile phone data for humanitarian use while minimizing risks to privacy.
- Silja M. Eckartz, Wout J. Hofman, Anne Fleur Van Veenstra – A Decision Model for Data Sharing – a paper proposing a decision model for data sharing arrangements aimed at addressing identified risks and challenges.
- Harlan M. Krumholz et al. – Sea Change in Open Science and Data Sharing Leadership by Industry – a review of industry-led efforts and cross-sector collaborations to share data from clinical trials to inform clinical practice.
- Institute of Medicine (IOM) – Sharing Clinical Trial Data: Maximizing Benefits, Minimizing Risk – a consensus, peer-reviewed IOM report recommending how to promote responsible clinical trial data sharing and minimize the risks and challenges of sharing.
- Gideon Mann – Private Data and the Public Good – the transcript of a keynote talk on the potential of leveraging corporate data to help solve public problems.
- D. Pastor Escuredo, A. Morales-Guzmán, et al. – Flooding through the Lens of Mobile Phone Activity – an analysis of aggregated and anonymized call detail records (CDRs), conducted in collaboration with the UN, the Government of Mexico, academia and Telefonica, suggesting high potential in using shared telecom data to improve early warning and emergency management mechanisms.
- M. Perkmann and H. Schildt – Open Data Partnerships Between Firms and Universities: The Role of Boundary Organizations – a paper highlighting the advantages of third-party organizations enabling data sharing between industry and academia to uncover new insights to benefit the public good.
- Matt Stempeck – Sharing Data Is a Form of Corporate Philanthropy – a Harvard Business Review article on data philanthropy, the practice of companies donating data for public good, and its benefits and challenges.
- N. Robin, T. Klein, J. Jütting – Public-Private Partnerships for Statistics: Lessons Learned, Future Steps – a working paper describing how privately held data sources could fill current gaps in the efforts of National Statistics Offices.
- Elizabeth Stuart, Emma Samman, William Avis, and Tom Berliner – The data revolution: finding the missing millions – the Overseas Development Institute’s annual report focused on solutions toward a sustainable data revolution.
- L. Taylor and R. Schroeder – Is Bigger Better? The Emergence of Big Data as a Tool for International Development Policy – a paper describing how data – such as privately held mobile phone data – could improve development policy.
- Willem G. van Panhuis, Proma Paul, Claudia Emerson, John Grefenstette, Richard Wilder, Abraham J. Herbst, David Heymann, and Donald S. Burke – A systematic review of barriers to data sharing in public health – a literature review of potential barriers to public health data sharing.
- Stefaan Verhulst and David Sangokoya – Mapping the Next Frontier of Open Data: Corporate Data Sharing – an essay describing an emerging taxonomy of activities in corporate data sharing for public good, a growing trend in which companies share anonymized and aggregated data with third-party users towards data-driven policymaking and greater public good.
- Stefaan Verhulst and David Sangokoya – Data Collaboratives: Exchanging Data to Improve People’s Lives – an essay on leveraging the potential of data to solve complex public problems through data collaboratives and four critical accelerators towards responsible data sharing and collaboration.
- Stefaan Verhulst, Iryna Susha, Alexander Kostura – Data Collaboratives: matching Supply of (Corporate) Data to Solve Public Problems – a report describing emerging practice, opportunities and challenges in data collaboratives as identified at the International Data Responsibility Conference.
- F. Welle Donker, B. van Loenen, A. K. Bregt – Open Data and Beyond – a case study examining the opening of private data by Dutch energy network administrator Liander.
- World Economic Forum – Data-driven development: pathways for progress – an overview report from the World Economic Forum on the existing data deficit and the value and impact of big data for sustainable development.
Annotated Selected Readings List (in alphabetical order)
Agaba, G., Akindès, F., Bengtsson, L., Cowls, J., Ganesh, M., Hoffman, N., . . . Meissner, F. “Big Data and Positive Social Change in the Developing World: A White Paper for Practitioners and Researchers.” 2014. http://bit.ly/25RRC6N.
- This white paper, produced by “a group of activists, researchers and data experts” explores the potential of big data to improve development outcomes and spur positive social change in low- and middle-income countries. Using examples, the authors discuss four areas in which the use of big data can impact development efforts:
- Advocating and facilitating by “open[ing] up new public spaces for discussion and awareness building;”
- Describing and predicting through the detection of “new correlations and the surfac[ing] of new questions;”
- Facilitating information exchange through “multiple feedback loops which feed into both research and action,” and
- Promoting accountability and transparency, especially as a byproduct of crowdsourcing efforts aimed at “aggregat[ing] and analyz[ing] information in real time.”
- The authors argue that in order to maximize the potential of big data’s use in development, “there is a case to be made for building a data commons for private/public data, and for setting up new and more appropriate ethical guidelines.”
- They also identify a number of challenges, especially when leveraging data made accessible from a number of sources, including private sector entities, such as:
- Lack of general data literacy;
- Lack of open learning environments and repositories;
- Lack of resources, capacity and access;
- Challenges of sensitivity and risk perception with regard to using data;
- Storage and computing capacity; and
- Externally validating data sources for comparison and verification.
Ansell, C. and Gash, A. “Collaborative Governance in Theory and Practice.” Journal of Public Administration Research and Theory 18 (4), 2008. http://bit.ly/1RZgsI5.
- This article describes collaborative arrangements that include public and private organizations working together and proposes a model for understanding an emergent form of public-private interaction informed by 137 diverse cases of collaborative governance.
- The article suggests factors significant to successful partnering processes and outcomes include:
- Shared understanding of challenges,
- Trust building processes,
- The importance of recognizing seemingly modest progress, and
- Strong indicators of commitment to the partnership’s aspirations and process.
- The authors provide a “contingency theory model” that specifies relationships between different variables that influence outcomes of collaborative governance initiatives. Three “core contingencies” for successful collaborative governance initiatives identified by the authors are:
- Time (e.g., decision making time afforded to the collaboration);
- Interdependence (e.g., a high degree of interdependence can mitigate negative effects of low trust); and
- Trust (e.g., a higher level of trust indicates a higher probability of success).
Ballivian A, Hoffman W. “Public-Private Partnerships for Data: Issues Paper for Data Revolution Consultation.” World Bank, 2015. Available from: http://bit.ly/1ENvmRJ
- This World Bank report provides a background document on forming public-private partnerships for data in order to inform the UN’s Independent Expert Advisory Group (IEAG) on sustaining a “data revolution” in sustainable development.
- The report highlights the critical position of private companies within the data value chain and reflects on key elements of a sustainable data PPP: “common objectives across all impacted stakeholders, alignment of incentives, and sharing of risks.” In addition, the report describes the risks and incentives of public and private actors, and the principles needed for “build[ing] the legal, cultural, technological and economic infrastructures to enable the balancing of competing interests.” These principles include understanding; experimentation; adaptability; balance; persuasion and compulsion; risk management; and governance.
- Examples of data collaboratives cited in the report include HP Earth Insights, Orange Data for Development Challenges, Amazon Web Services, IBM Smart Cities Initiative, and the Governance Lab’s Open Data 500.
Brack, Matthew, and Tito Castillo. “Data Sharing for Public Health: Key Lessons from Other Sectors.” Chatham House, Centre on Global Health Security. April 2015. Available from: http://bit.ly/1DHFGVl
- The Chatham House report provides an overview on public health surveillance data sharing, highlighting the benefits and challenges of shared health data and the complexity in adapting technical solutions from other sectors for public health.
- The report describes data sharing processes from several perspectives, including in-depth case studies of actual data sharing in practice at the individual, organizational and sector levels. Among the key lessons for public health data sharing, the report strongly highlights the need to harness momentum for action and maintain collaborative engagement: “Successful data sharing communities are highly collaborative. Collaboration holds the key to producing and abiding by community standards, and building and maintaining productive networks, and is by definition the essence of data sharing itself. Time should be invested in establishing and sustaining collaboration with all stakeholders concerned with public health surveillance data sharing.”
- Examples of data collaboratives include H3Africa (a collaboration between NIH and Wellcome Trust) and NHS England’s care.data programme.
de Montjoye, Yves-Alexandre, Jake Kendall, and Cameron F. Kerry. “Enabling Humanitarian Use of Mobile Phone Data.” The Brookings Institution, Issues in Technology Innovation. November 2014. Available from: http://brook.gs/1JxVpxp
- Using Ebola as a case study, the authors describe the value of using private telecom data for uncovering “valuable insights into understanding the spread of infectious diseases as well as strategies to micro-target outreach and driving uptake of health-seeking behavior.”
- The authors highlight the absence of a common legal and standards framework for “sharing mobile phone data in privacy-conscientious ways” and recommend “engaging companies, NGOs, researchers, privacy experts, and governments to agree on a set of best practices for new privacy-conscientious metadata sharing models.”
Eckartz, Silja M., Hofman, Wout J., Van Veenstra, Anne Fleur. “A decision model for data sharing.” Vol. 8653 LNCS. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2014. http://bit.ly/21cGWfw.
- This paper proposes a decision model for data sharing of public and private data based on literature review and three case studies in the logistics sector.
- The authors identify five categories of barriers to data sharing and offer a decision model for identifying potential interventions to overcome each barrier:
- Ownership. Possible interventions likely require improving trust among those who own the data through, for example, involvement and support from higher management.
- Privacy. Interventions include “anonymization by filtering of sensitive information and aggregation of data,” and access control mechanisms built around identity management and regulated access (a minimal code sketch of such aggregation follows this list).
- Economic. Interventions include a model where data is shared only with a few trusted organizations, and yield management mechanisms to ensure negative financial consequences are avoided.
- Data quality. Interventions include identifying additional data sources that could improve the completeness of datasets, and efforts to improve metadata.
- Technical. Interventions include making data available in structured formats and publishing data according to widely agreed upon data standards.
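As a rough illustration of the aggregation-style privacy intervention mentioned above, the sketch below rolls sensitive per-record data up to coarser groups and suppresses any group too small to release. It is a minimal, hypothetical example (Python with pandas; the column names and the threshold K are assumptions, not taken from the paper):

```python
# A minimal, illustrative sketch of aggregation-based disclosure control.
# Column names and the threshold K are hypothetical, not from the paper.
import pandas as pd

K = 2  # smallest group size that may be released

records = pd.DataFrame({
    "region":    ["NL-North", "NL-North", "NL-South", "NL-South", "NL-South"],
    "week":      [1, 1, 1, 2, 2],
    "shipments": [12, 7, 3, 9, 11],
})

# Aggregate sensitive rows to (region, week) groups.
aggregated = (records
              .groupby(["region", "week"], as_index=False)
              .agg(n_records=("shipments", "size"),
                   total_shipments=("shipments", "sum")))

# Suppress cells built from fewer than K underlying records before sharing.
shareable = aggregated[aggregated["n_records"] >= K]
print(shareable)
```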
Hoffman, Sharona and Podgurski, Andy. “The Use and Misuse of Biomedical Data: Is Bigger Really Better?” American Journal of Law & Medicine 497, 2013. http://bit.ly/1syMS7J.
- This journal article explores the benefits and, in particular, the risks related to large-scale biomedical databases bringing together health information from a diversity of sources across sectors. Some data collaboratives examined in the piece include:
- MedMining – a company that extracts EHR data, de-identifies it, and offers it to researchers. The data sets that MedMining delivers to its customers include ‘lab results, vital signs, medications, procedures, diagnoses, lifestyle data, and detailed costs’ from inpatient and outpatient facilities.
- Explorys has formed a large healthcare database derived from financial, administrative, and medical records. It has partnered with major healthcare organizations such as the Cleveland Clinic Foundation and Summa Health System to aggregate and standardize health information from ten million patients and over thirty billion clinical events.
- Hoffman and Podgurski note that biomedical databases have many potential uses, with those likely to benefit including: “researchers, regulators, public health officials, commercial entities, lawyers,” as well as “healthcare providers who conduct quality assessment and improvement activities,” regulatory monitoring entities like the FDA, and “litigants in tort cases to develop evidence concerning causation and harm.”
- They argue, however, that significant risks arise because:
- The data contained in biomedical databases is surprisingly likely to be incorrect or incomplete;
- Systemic biases, arising from both the nature of the data and the preconceptions of investigators, are serious threats to the validity of research results, especially in answering causal questions;
- Data mining of biomedical databases makes it easier for individuals with political, social, or economic agendas to generate ostensibly scientific but misleading research findings for the purpose of manipulating public opinion and swaying policymakers.
Krumholz, Harlan M., et al. “Sea Change in Open Science and Data Sharing Leadership by Industry.” Circulation: Cardiovascular Quality and Outcomes 7.4. 2014. 499-504. http://1.usa.gov/1J6q7KJ
- This article provides a comprehensive overview of industry-led efforts and cross-sector collaborations in data sharing by pharmaceutical companies to inform clinical practice.
- The article details the types of data being shared and the early activities of GlaxoSmithKline (“in coordination with other companies such as Roche and ViiV”); Medtronic and the Yale University Open Data Access Project; and Janssen Pharmaceuticals (Johnson & Johnson). The article also describes the range of involvement in data sharing among pharmaceutical companies including Pfizer, Novartis, Bayer, AbbVie, Eli Lilly, AstraZeneca, and Bristol-Myers Squibb.
Mann, Gideon. “Private Data and the Public Good.” Medium. May 17, 2016. http://bit.ly/1OgOY68.
- This Medium post from Gideon Mann, the Head of Data Science at Bloomberg, shares his prepared remarks given at a lecture at the City College of New York. Mann argues for the potential benefits of increasing access to private sector data, both to improve research and academic inquiry and also to help solve practical, real-world problems. He also describes a number of initiatives underway at Bloomberg along these lines.
- Mann argues that data generated at private companies “could enable amazing discoveries and research,” but is often inaccessible to those who could put it to those uses. Beyond research, he notes that corporate data could, for instance, benefit:
- Public health – including suicide prevention, addiction counseling and mental health monitoring.
- Legal and ethical questions – especially as they relate to “the role algorithms have in decisions about our lives,” such as credit checks and resume screening.
- Mann recognizes the privacy challenges inherent in private sector data sharing, but argues that it is a common misconception that the only two choices are “complete privacy or complete disclosure.” He believes that flexible frameworks for differential privacy could open up new opportunities for responsibly leveraging data collaboratives.
Pastor Escuredo, D., Morales-Guzmán, A. et al, “Flooding through the Lens of Mobile Phone Activity.” IEEE Global Humanitarian Technology Conference, GHTC 2014. Available from: http://bit.ly/1OzK2bK
- This report describes the use of mobile data to understand the impact of disasters and improve disaster management. The underlying study was conducted in the Mexican state of Tabasco in 2009 by a multidisciplinary, multi-stakeholder consortium involving the UN World Food Programme (WFP), Telefonica Research, the Technical University of Madrid (UPM), the Digital Strategy Coordination Office of the President of Mexico, and UN Global Pulse.
- Telefonica Research, a division of the major Latin American telecommunications company, provided call detail records covering flood-affected areas for nine months. This data was combined with “remote sensing data (satellite images), rainfall data, census and civil protection data.” The results of the data demonstrated that “analysing mobile activity during floods could be used to potentially locate damaged areas, efficiently assess needs and allocate resources (for example, sending supplies to affected areas).”
- In addition to the results, the study highlighted “the value of a public-private partnership on using mobile data to accurately indicate flooding impacts in Tabasco, thus improving early warning and crisis management.”
Perkmann, M. and Schildt, H. “Open data partnerships between firms and universities: The role of boundary organizations.” Research Policy, 44(5), 2015. http://bit.ly/25RRJ2c.
- This paper discusses the concept of a “boundary organization” in relation to industry-academic partnerships driven by data. Boundary organizations perform mediated revealing, allowing firms to disclose their research problems to a broad audience of innovators and simultaneously minimize the risk that this information would be adversely used by competitors.
- The authors identify two especially important challenges for private firms to enter open data or participate in data collaboratives with the academic research community that could be addressed through more involvement from boundary organizations:
- First is a challenge of maintaining competitive advantage. The authors note that, “the more a firm attempts to align the efforts in an open data research programme with its R&D priorities, the more it will have to reveal about the problems it is addressing within its proprietary R&D.”
- Second is the misalignment of incentives between the private and academic fields. Perkmann and Schildt argue that a firm seeking to build collaborations around its opened data “will have to provide suitable incentives that are aligned with academic scientists’ desire to be rewarded for their work within their respective communities.”
Robin, N., Klein, T., & Jütting, J. “Public-Private Partnerships for Statistics: Lessons Learned, Future Steps.” OECD. 2016. http://bit.ly/24FLYlD.
- This working paper acknowledges the growing body of work on how different types of data (e.g., telecom data, social media, sensors and geospatial data, etc.) can address data gaps relevant to National Statistical Offices (NSOs).
- Four models of public-private interaction for statistics are described: in-house production of statistics by a data provider for a national statistics office (NSO), transfer of data sets from private entities to NSOs, transfer of data to a third-party provider to manage the NSO and private-entity data, and the outsourcing of NSO functions.
- The paper highlights challenges to public-private partnerships involving data (e.g., technical challenges, data confidentiality, risks, limited incentives for participation), suggests that deliberate and highly structured approaches to such partnerships require enforceable contracts, and emphasizes the trade-off between data specificity and accessibility as well as the importance of pricing mechanisms that reflect the capacity and capability of national statistical offices.
- Case studies referenced in the paper include:
- In-house analysis of call detail records by a mobile network operator (MNO Telefonica);
- A third-party data provider and steward of travel statistics (Positium);
- The Data for Development (D4D) challenge organized by MNO Orange; and
- Statistics Netherlands’ use of social media to predict consumer confidence.
Stuart, Elizabeth, Samman, Emma, Avis, William, Berliner, Tom. “The data revolution: finding the missing millions.” Overseas Development Institute, 2015. Available from: http://bit.ly/1bPKOjw
- The authors of this report highlight the need for good quality, relevant, accessible and timely data for governments to extend services into underrepresented communities and implement policies towards a sustainable “data revolution.”
- The solutions proposed in this recent report from the Overseas Development Institute focus on capacity-building activities of national statistical offices (NSOs), alternative sources of data (including shared corporate data) to address gaps, and building strong data management systems.
Taylor, L., & Schroeder, R. “Is bigger better? The emergence of big data as a tool for international development policy.” GeoJournal, 80(4). 2015. 503-518. http://bit.ly/1RZgSy4.
- This journal article describes how privately held data – namely “digital traces” of consumer activity – “are becoming seen by policymakers and researchers as a potential solution to the lack of reliable statistical data on lower-income countries.”
- They focus especially on three categories of data collaborative use cases:
- Mobile data as a predictive tool for issues such as human mobility and economic activity;
- Use of mobile data to inform humanitarian response to crises; and
- Use of born-digital web data as a tool for predicting economic trends, and the implications these have for LMICs.
- They note, however, that a number of challenges and drawbacks exist for these types of use cases, including:
- Access to private data sources often must be negotiated or bought, “which potentially means substituting negotiations with corporations for those with national statistical offices;”
- The meaning of such data is not always simple or stable, and local knowledge is needed to understand how people are using the technologies in question;
- Bias in proprietary data can be hard to understand and quantify;
- Lack of privacy frameworks; and
- Power asymmetries, wherein “LMIC citizens are unwittingly placed in a panopticon staffed by international researchers, with no way out and no legal recourse.”
van Panhuis, Willem G., Proma Paul, Claudia Emerson, John Grefenstette, Richard Wilder, Abraham J. Herbst, David Heymann, and Donald S. Burke. “A systematic review of barriers to data sharing in public health.” BMC public health 14, no. 1 (2014): 1144. Available from: http://bit.ly/1JOBruO
- The authors of this report provide a “systematic literature review of potential barriers to public health data sharing.” These twenty potential barriers are classified in six categories: “technical, motivational, economic, political, legal and ethical.” In this taxonomy, “the first three categories are deeply rooted in well-known challenges of health information systems for which structural solutions have yet to be found; the last three have solutions that lie in an international dialogue aimed at generating consensus on policies and instruments for data sharing.”
- The authors suggest the need for a “systematic framework of barriers to data sharing in public health” in order to accelerate access and use of data for public good.
Verhulst, Stefaan and Sangokoya, David. “Mapping the Next Frontier of Open Data: Corporate Data Sharing.” In: Gasser, Urs and Zittrain, Jonathan and Faris, Robert and Heacock Jones, Rebekah, “Internet Monitor 2014: Reflections on the Digital World: Platforms, Policy, Privacy, and Public Discourse (December 15, 2014).” Berkman Center Research Publication No. 2014-17. http://bit.ly/1GC12a2
- This essay describes a taxonomy of current corporate data sharing practices for public good: research partnerships; prizes and challenges; trusted intermediaries; application programming interfaces (APIs); intelligence products; and corporate data cooperatives or pooling.
- Examples of data collaboratives include: Yelp Dataset Challenge, the Digital Ecologies Research Partnership, BBVA Innova Challenge, Telecom Italia’s Big Data Challenge, NIH’s Accelerating Medicines Partnership and the White House’s Climate Data Partnerships.
- The authors highlight important questions to consider towards a more comprehensive mapping of these activities.
Verhulst, Stefaan and Sangokoya, David, 2015. “Data Collaboratives: Exchanging Data to Improve People’s Lives.” Medium. Available from: http://bit.ly/1JOBDdy
- The essay refers to data collaboratives as a new form of collaboration involving participants from different sectors exchanging data to help solve public problems. These forms of collaborations can improve people’s lives through data-driven decision-making; information exchange and coordination; and shared standards and frameworks for multi-actor, multi-sector participation.
- The essay cites four activities that are critical to accelerating data collaboratives: documenting value and measuring impact; matching public demand and corporate supply of data in a trusted way; training and convening data providers and users; experimenting and scaling existing initiatives.
- Examples of data collaboratives include NIH’s Precision Medicine Initiative; the Mobile Data, Environmental Extremes and Population (MDEEP) Project; and Twitter-MIT’s Laboratory for Social Machines.
Verhulst, Stefaan, Susha, Iryna, Kostura, Alexander. “Data Collaboratives: matching Supply of (Corporate) Data to Solve Public Problems.” Medium. February 24, 2016. http://bit.ly/1ZEp2Sr.
- This piece articulates a set of key lessons learned during a session at the International Data Responsibility Conference focused on identifying emerging practices, opportunities and challenges confronting data collaboratives.
- The authors list a number of privately held data sources that could create positive public impacts if made more accessible in a collaborative manner, including:
- Data for early warning systems to help mitigate the effects of natural disasters;
- Data to help understand human behavior as it relates to nutrition and livelihoods in developing countries;
- Data to monitor compliance with weapons treaties;
- Data to more accurately measure progress related to the UN Sustainable Development Goals.
- To the end of identifying and expanding on emerging practice in the space, the authors describe a number of current data collaborative experiments, including:
- Trusted Intermediaries: Statistics Netherlands partnered with Vodafone to analyze mobile call data records in order to better understand mobility patterns and inform urban planning.
- Prizes and Challenges: Orange Telecom, which has been a leader in this type of Data Collaboration, provided several examples of the company’s initiatives, such as the use of call data records to track the spread of malaria as well as their experience with Challenge 4 Development.
- Research partnerships: The Data for Climate Action project is an ongoing large-scale initiative incentivizing companies to share their data to help researchers answer particular scientific questions related to climate change and adaptation.
- Sharing intelligence products: JPMorgan Chase shares macroeconomic insights gained by leveraging its data through the newly established JPMorgan Chase Institute.
- In order to capitalize on the opportunities provided by data collaboratives, a number of needs were identified:
- A responsible data framework;
- Increased insight into different business models that may facilitate the sharing of data;
- Capacity to tap into the potential value of data;
- Transparent stock of available data supply; and
- Mapping emerging practices and models of sharing.
Vogel, N., Theisen, C., Leidig, J. P., Scripps, J., Graham, D. H., & Wolffe, G. “Mining mobile datasets to enable the fine-grained stochastic simulation of Ebola diffusion.” Paper presented at the Procedia Computer Science. 2015. http://bit.ly/1TZDroF.
- The paper presents a research study conducted on the basis of the mobile calls records shared with researchers in the framework of the Data for Development Challenge by the mobile operator Orange.
- The study discusses the data analysis approach in relation to developing a simulation of Ebola diffusion built around “the interactions of multi-scale models, including viral loads (at the cellular level), disease progression (at the individual person level), disease propagation (at the workplace and family level), societal changes in migration and travel movements (at the population level), and mitigating interventions (at the abstract government policy level).”
- The authors argue that the use of their population, mobility, and simulation models provide more accurate simulation details in comparison to high-level analytical predictions and that the D4D mobile datasets provide high-resolution information useful for modeling developing regions and hard to reach locations.
Welle Donker, F., van Loenen, B., & Bregt, A. K. “Open Data and Beyond.” ISPRS International Journal of Geo-Information, 5(4). 2016. http://bit.ly/22YtugY.
- This research developed a monitoring framework to assess the effects of open (private) data, using a case study of the Dutch energy network administrator Liander.
- Focusing on the potential impacts of open private energy data – beyond ‘smart disclosure’ where citizens are given information only about their own energy usage – the authors identify three attainable strategic goals:
- Continuously optimize performance on services, security of supply, and costs;
- Improve management of energy flows and insight into energy consumption;
- Help customers save energy and switch over to renewable energy sources.
- The authors propose a seven-step framework for assessing the impacts of Liander data, in particular, and open private data more generally:
- Develop a performance framework to describe what the program is about, including a description of the organization’s mission and strategic goals;
- Identify the most important elements, or key performance areas which are most critical to understanding and assessing your program’s success;
- Select the most appropriate performance measures;
- Determine the gaps between what information you need and what is available;
- Develop and implement a measurement strategy to address the gaps;
- Develop a performance report which highlights what you have accomplished and what you have learned;
- Learn from your experiences and refine your approach as required.
- While the authors note that the true impacts of this open private data will likely not come into view in the short term, they argue that, “Liander has successfully demonstrated that private energy companies can release open data, and has successfully championed the other Dutch network administrators to follow suit.”
World Economic Forum, 2015. “Data-driven development: pathways for progress.” Geneva: World Economic Forum. http://bit.ly/1JOBS8u
- This report captures an overview of the existing data deficit and the value and impact of big data for sustainable development.
- The authors of the report focus on four main priorities towards a sustainable data revolution: commercial incentives and trusted agreements with public- and private-sector actors; the development of shared policy frameworks, legal protections and impact assessments; capacity building activities at the institutional, community, local and individual level; and lastly, recognizing individuals as both producers and consumers of data.
Searching for Someone: From the “Small World Experiment” to the “Red Balloon Challenge,” and beyond
Essay by Manuel Cebrian, Iyad Rahwan, Victoriano Izquierdo, Alex Rutherford, Esteban Moro and Alex (Sandy) Pentland: “Our ability to search social networks for people and information is fundamental to our success. We use our personal connections to look for new job opportunities, to seek advice about what products to buy, to match with romantic partners, to find a good physician, to identify business partners, and so on.
Despite living in a world populated by seven billion people, we are able to navigate our contacts efficiently, only needing a handful of personal introductions before finding the answer to our question, or the person we are seeking. How does this come to be? In folk culture, the answer to this question is that we live in a “small world.” The catch-phrase was coined in 1929 by the visionary author Frigyes Karinthy in his Chain-Links essay, where these ideas are put forward for the first time.
Let me put it this way: Planet Earth has never been as tiny as it is now. It shrunk — relatively speaking of course — due to the quickening pulse of both physical and verbal communication. We never talked about the fact that anyone on Earth, at my or anyone’s will, can now learn in just a few minutes what I think or do, and what I want or what I would like to do. Now we live in fairyland. The only slightly disappointing thing about this land is that it is smaller than the real world has ever been. — Frigyes Karinthy, Chain-Links, 1929
Then, it was just a dystopian idea reflecting the anxiety of living in an increasingly more connected world. But there was no empirical evidence that this was actually the case, and it took almost 40 years to find any.
Six Degrees of Separation
In 1967, legendary psychologist Stanley Milgram conducted a ground-breaking experiment to test this “small world” hypothesis. He started with random individuals in the U.S. Midwest and asked them to send packages to people in Boston, Massachusetts, whose address was not given. They could contribute to this “search” only by sending the package to individuals they knew on a first-name basis. Milgram expected that successful searches (if any!) would require hundreds of individuals along the chain from the initial sender to the final recipient.
Surprisingly, however, Milgram found that the average path length was somewhere between five point five and six individuals, which made social search look astonishingly efficient. Although the experiment raised some methodological criticisms, its findings were profound. However, what it did not answer is why social networks have such short paths in the first place. The answer was not obvious. In fact, there were reasons to suspect that short paths were just a myth: social networks are very cliquish. Your friends’ friends are likely to also be your friends, and thus most social paths are short and circular. This “cliquishness” suggests that our search through the social network can easily get “trapped” within our close social community, making social search highly inefficient.
Architectures for Social Search
Again, it took a long time — more than 30 years — before this riddle was solved. In a seminal 1998 paper in Nature, Duncan Watts and Steven Strogatz came up with an elegant mathematical model to explain the existence of these short paths. They started from a social network that is very cliquish, i.e., most of your friends are also friends of one another. In this model, the world is “large” since the social distance between individuals is very long. However, if we take only a tiny fraction of these connections (say one out of every hundred links) and rewire them to random individuals in the network, that same world suddenly becomes “small.” These random connections allow individuals to jump to faraway communities very quickly — using them as social network highways — thus reducing average path length in a dramatic fashion.
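The effect is easy to reproduce computationally. The sketch below is a minimal illustration using Python’s networkx library (not code from the Nature paper): it builds a cliquish ring lattice and shows how rewiring even a small fraction of the links sharply reduces the average path length while leaving clustering largely intact.

```python
# Minimal sketch of the Watts-Strogatz "small world" effect (illustrative only):
# rewiring a tiny fraction of a cliquish lattice's edges collapses path lengths.
import networkx as nx

n, k = 1000, 10  # 1,000 people, each initially tied to their 10 nearest neighbours

for p in (0.0, 0.01, 0.1):  # fraction of edges rewired to random nodes
    g = nx.watts_strogatz_graph(n, k, p, seed=42)
    print(f"p={p:<4}  avg path length={nx.average_shortest_path_length(g):6.2f}  "
          f"clustering={nx.average_clustering(g):.2f}")
```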
While this theoretical insight suggests that social networks are searchable due to the existence of short paths, it does not yet say much about the “procedure” that people use to find these paths. There is no reason, a priori, that we should know how to find these short chains, especially since there are many chains, and no individuals have knowledge of the network structure beyond their immediate communities. People do not know how the friends of their friends are connected among themselves, and therefore it is not obvious that they would have a good way of navigating their social network while searching.
Soon after Watts and Strogatz came up with this model at Cornell University, a computer scientist across campus, Jon Kleinberg, set out to investigate whether such “small world” networks are searchable. In a landmark Nature article, “Navigation in a Small World,” published in 2000, he showed that social search is easy without global knowledge of the network, but only for a very specific value of the probability of long-range connectivity (i.e., the probability that we know somebody far removed from us, socially, in the social network). With the advent of publicly available social media datasets such as LiveJournal, David Liben-Nowell and colleagues showed that real-world social networks do indeed have these particular long-range ties. It appears the social architecture of the world we inhabit is remarkably fine-tuned for searchability….
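Kleinberg’s argument concerns a search procedure, not just network structure: a node that knows only its own contacts forwards the message to whichever contact sits closest to the target. The sketch below is a rough, illustrative implementation of that greedy routing on networkx’s version of Kleinberg’s grid model (not the original study’s code); the exponent r controls how long-range links are distributed, and Kleinberg’s result is that greedy search scales well only when r matches the lattice dimension.

```python
# Illustrative sketch of greedy routing on Kleinberg's navigable small-world model.
# Each node forwards to the out-neighbour closest to the target in lattice distance.
import random
import networkx as nx

def lattice_dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance on the grid

def greedy_hops(g, source, target):
    current, hops = source, 0
    while current != target:
        # Purely local decision: pick the known contact nearest to the target.
        current = min(g.successors(current), key=lambda nbr: lattice_dist(nbr, target))
        hops += 1
    return hops

random.seed(0)
n = 30  # a 30 x 30 grid of nodes
for r in (0, 1, 2, 4):  # exponent of the long-range link distribution
    g = nx.navigable_small_world_graph(n, p=1, q=1, r=r, dim=2, seed=1)
    nodes = list(g)
    pairs = [(random.choice(nodes), random.choice(nodes)) for _ in range(200)]
    pairs = [(s, t) for s, t in pairs if s != t]
    avg = sum(greedy_hops(g, s, t) for s, t in pairs) / len(pairs)
    print(f"r={r}: average greedy hops ≈ {avg:.1f}")
```

On a small grid the differences between exponents are modest; Kleinberg’s theorem is about how the hop count grows with network size, which this toy run only hints at.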
The Tragedy of the Crowdsourcers
Some recent efforts have been made to try and disincentivize sabotage. If verification is also rewarded along the recruitment tree, then the individuals who recruited the saboteurs would have a clear incentive to verify, halt, and punish the saboteurs. This theoretical solution is yet to be tested in practice, and it is conjectured that a coalition of saboteurs, where saboteurs recruit other saboteurs pretending to “vet” them, would make recursive verification futile.
If theory is to be believed, it does not shed a promising light on reducing sabotage in social search. We recently proposed the “Crowdsourcing Dilemma.” In it, we perform a game-theoretic analysis of the fundamental tradeoff between the potential for increased productivity of social search and the possibility of being set back by malicious behavior, including misinformation. Our results show that, in competitive scenarios, such as those with multiple social searches competing for the same information, malicious behavior is the norm, not an anomaly — a result contrary to conventional wisdom. Even worse: counterintuitively, making sabotage more costly does not deter saboteurs, but leads all the competing teams to a less desirable outcome, with more aggression and less efficient collective search for talent.
These empirical and theoretical findings have cautionary implications for the future of social search, and crowdsourcing in general. Social search is surprisingly efficient, cheap, easy to implement, and functional across multiple applications. But there are also surprises in the amount of evildoing that the social searchers will stumble upon while recruiting. As we get deeper and deeper into the recruitment tree, we stumble upon that evil force lurking in the dark side of the network.
Evil mutates and regenerates in the crowd in new forms impossible to anticipate by the designers or participants themselves. Crowdsourcing and its enemies will always be engaged in a co-evolutionary arms race.
Talent is there to be searched for and recruited. But so are evil and malice. Ultimately, crowdsourcing experts need to figure out how to recruit more of the former, while deterring more of the latter. We might be living in a small world, but the cost and fragility of navigating it could harm any potential strategy to leverage the power of social networks….
Being searchable is a way of being closely connected to everyone else, which is conducive to contagion, group-think, and, most crucially, makes it hard for individuals to differentiate from each other. Evolutionarily, for better or worse, our brain makes us mimic others, and whether this copying of others ends up being part of the Wisdom of the Crowds, or the “stupidity of many,” it is highly sensitive to the scenario at hand.
Katabasis, or the myth of the hero who descends to the underworld and comes back stronger, is as old as time and pervasive across ancient cultures. Creative people seem to need to “get lost.” Grigori Perelman, Shinichi Mochizuki, and Bob Dylan all disappeared for a few years to reemerge later as more creative versions of themselves. Others like J. D. Salinger and Bobby Fischer also vanished, and never came back to the public sphere. If others cannot search for and find us, we gain some slack, some room to escape from what we are known for by others. Searching for our true creative selves may rest on the difficulty of others finding us….(More)”
Do Open Comment Processes Increase Regulatory Compliance? Evidence from a Public Goods Experiment
Stephen N. Morgan, Nicole M. Mason and Robert S. Shupp at EconPapers: “Agri-environmental programs often incorporate stakeholder participation elements in an effort to increase community ownership of policies designed to protect environmental resources (Hajer 1995; Fischer 2000). Participation – acting through increased levels of ownership – is then expected to increase individual rates of compliance with regulatory policies. Utilizing a novel lab experiment, this research leverages a public goods contribution game to test the effects of a specific type of stakeholder participation scheme on individual compliance outcomes. We find significant evidence that the implemented type of non-voting participation mechanism reduces the probability that an individual will engage in noncompliant behavior and reduces the level of noncompliance. At the same time, exposure to the open comment treatment also increases individual contributions to a public good. Additionally, we find evidence that exposure to participation schemes results in a faster decay in individual compliance over time suggesting that the impacts of this type of participation mechanism may be transitory….(More)”
La Primaire Wants To Help French Voters Bypass Traditional Parties
Federico Guerrini in Forbes: “French people, like the citizens of many other countries, have little confidence in their government or in their members of parliament.
A recent study by the Center for Political Research at Sciences Po (CEVIPOF) in Paris shows that while residents still trust, in part, their local officials, only 37% of them on average feel the same for those belonging to the National Assembly, the Senate or the executive.
Three years earlier, when asked in another poll what sprang to mind first when thinking of politics, their first answer was “disgust”.
With this sort of background, it is perhaps unsurprising that a number of activists have decided to try and find new ways to boost political participation, using crowdsourcing, smartphone applications and online platforms to look for candidates outside of the usual circles.
There are several civic tech initiatives in place in France right now. One of the most fascinating is called LaPrimaire.org.
It’s an online platform whose main aim is to organize an open primary election, select a suitable candidate, and allow him to run for President in the 2017 elections.
Launched in April by Thibauld Favre and David Guez, an engineer and a lawyer by trade, both with no connection to the political establishment, it has so far attracted 164 self-proposed candidates and some 26,000 voters. Anyone can be elected, as long as they live in France, do not belong to any political party and have a clean criminal record.
A different class of possible candidates, also present on the website, is composed of the so-called “citoyens plébiscités”: VIPs, politicians or celebrities that backers of LaPrimaire.org think should run for president. In both cases, in order to qualify for the next phase of the selection, these people have to secure the vote of at least 500 supporters by July 14….(More)”.
WeatherUSI: User-Based Weather Crowdsourcing on Public Displays
Evangelos Niforatos, Ivan Elhart and Marc Langheinrich in Web Engineering: “Contemporary public display systems hold a significant potential to contribute to in situ crowdsourcing. Recently, public display systems have surpassed their traditional role as static content projection hotspots by supporting interactivity and hosting applications that increase overall perceived user utility. As such, we developed WeatherUSI, a web-based interactive public display application that enables passers-by to input subjective information about current and future weather conditions. In this demo paper, we present the functionality of the app, describe the underlying system infrastructure and present how we combine input streams originating from the WeatherUSI app on a public display together with its mobile app counterparts for facilitating user-based weather crowdsourcing….(more)”
Teenage scientists enlisted to fight Zika
ShareAmerica: “A mosquito’s a mosquito, right? Not when it comes to Zika and other mosquito-borne diseases.
Only two of the estimated 3,000 species of mosquitoes are capable of carrying the Zika virus in the United States, but estimates of their precise range remain hazy, according to the U.S. Centers for Disease Control and Prevention.
Scientists could start getting better information about these pesky, but important, insects with the help of plastic cups, brown paper towels and teenage biology students.
As part of the Invasive Mosquito Project from the U.S. Department of Agriculture, secondary-school students nationwide are learning about mosquito populations and helping fill the knowledge gaps.
Simple experiment, complex problem
The experiment works like this: First, students line the cups with paper, then fill two-thirds of the cups with water. Students place the plastic cups outside, and after a week, the paper is dotted with what looks like specks of dirt. These dirt particles are actually mosquito eggs, which the students can identify and classify.
Students then upload their findings to a national crowdsourced database. Crowdsourcing uses the collective intelligence of online communities to “distribute” problem solving across a massive network.
Entomologist Lee Cohnstaedt of the U.S. Department of Agriculture coordinates the program, and he’s already thinking about expansion. He said he hopes to have one-fifth of U.S. schools participate in the mosquito species census. He also plans to adapt lesson plans for middle schools, Scouting troops and gardening clubs.
Already, crowdsourcing has “collected better data than we could have working alone,” he told the Associated Press….
In addition to mosquito tracking, crowdsourcing has been used to develop innovative responses to a number of complex challenges, from climate change to archaeology to protein modeling….(More)”
The Small World Initiative: An Innovative Crowdsourcing Platform for Antibiotics
Ana Maria Barral et al in FASEB Journal: “The Small World Initiative™ (SWI) is an innovative program that encourages students to pursue careers in science and sets forth a unique platform to crowdsource new antibiotics. It centers around an introductory biology course through which students perform original hands-on field and laboratory research in the hunt for new antibiotics. Through a series of student-driven experiments, students collect soil samples, isolate diverse bacteria, test their bacteria against clinically-relevant microorganisms, and characterize those showing inhibitory activity. This is particularly relevant since over two thirds of antibiotics originate from soil bacteria or fungi. SWI’s approach also provides a platform to crowdsource antibiotic discovery by tapping into the intellectual power of many people concurrently addressing a global challenge and advances promising candidates into the drug development pipeline. This unique class approach harnesses the power of active learning to achieve both educational and scientific goals…..We will discuss our preliminary student evaluation results, which show the compelling impact of the program in comparison to traditional introductory courses. Ultimately, the mission of the program is to provide an evidence-based approach to teaching introductory biology concepts in the context of a real-world problem. This approach has been shown to be particularly impactful on underrepresented STEM talent pools, including women and minorities….(More)”