Selected Readings on Data Collaboratives


By Neil Britto, David Sangokoya, Iryna Susha, Stefaan Verhulst and Andrew Young

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data collaboratives was originally published in 2017.

The term data collaborative refers to a new form of collaboration, beyond the public-private partnership model, in which participants from different sectors (including private companies, research institutions, and government agencies) can exchange data to help solve public problems. Several of society’s greatest challenges — from addressing climate change to public health to job creation to improving the lives of children — require greater access to data, more collaboration between public- and private-sector entities, and an increased ability to analyze datasets. In the coming months and years, data collaboratives will be essential vehicles for harnessing the vast stores of privately held data toward the public good.

Annotated Selected Readings List (in alphabetical order)

Agaba, G., Akindès, F., Bengtsson, L., Cowls, J., Ganesh, M., Hoffman, N., . . . Meissner, F. “Big Data and Positive Social Change in the Developing World: A White Paper for Practitioners and Researchers.” 2014. http://bit.ly/25RRC6N.

  • This white paper, produced by “a group of activists, researchers and data experts,” explores the potential of big data to improve development outcomes and spur positive social change in low- and middle-income countries. Using examples, the authors discuss four areas in which the use of big data can impact development efforts:
    • Advocating and facilitating by “open[ing] up new public spaces for discussion and awareness building;”
    • Describing and predicting through the detection of “new correlations and the surfac[ing] of new questions;”
    • Facilitating information exchange through “multiple feedback loops which feed into both research and action,” and
    • Promoting accountability and transparency, especially as a byproduct of crowdsourcing efforts aimed at “aggregat[ing] and analyz[ing] information in real time.”
  • The authors argue that in order to maximize the potential of big data’s use in development, “there is a case to be made for building a data commons for private/public data, and for setting up new and more appropriate ethical guidelines.”
  • They also identify a number of challenges, especially when leveraging data made accessible from a number of sources, including private sector entities, such as:
    • Lack of general data literacy;
    • Lack of open learning environments and repositories;
    • Lack of resources, capacity and access;
    • Challenges of sensitivity and risk perception with regard to using data;
    • Storage and computing capacity; and
    • Externally validating data sources for comparison and verification.

Ansell, C. and Gash, A. “Collaborative Governance in Theory and Practice.” Journal of Public Administration Research and Theory 18 (4), 2008. http://bit.ly/1RZgsI5.

  • This article describes collaborative arrangements that include public and private organizations working together and proposes a model for understanding an emergent form of public-private interaction informed by 137 diverse cases of collaborative governance.
  • The article suggests factors significant to successful partnering processes and outcomes include:
    • Shared understanding of challenges,
    • Trust building processes,
    • The importance of recognizing seemingly modest progress, and
    • Strong indicators of commitment to the partnership’s aspirations and process.
  • The authors provide a “contingency theory model” that specifies relationships between different variables that influence outcomes of collaborative governance initiatives. Three “core contingencies” for successful collaborative governance initiatives identified by the authors are:
    • Time (e.g., decision making time afforded to the collaboration);
    • Interdependence (e.g., a high degree of interdependence can mitigate negative effects of low trust); and
    • Trust (e.g., a higher level of trust indicates a higher probability of success).

Ballivian A, Hoffman W. “Public-Private Partnerships for Data: Issues Paper for Data Revolution Consultation.” World Bank, 2015. Available from: http://bit.ly/1ENvmRJ

  • This World Bank report provides a background document on forming public-private partnerships for data in order to inform the UN’s Independent Expert Advisory Group (IEAG) on sustaining a “data revolution” in sustainable development.
  • The report highlights the critical position of private companies within the data value chain and reflects on key elements of a sustainable data PPP: “common objectives across all impacted stakeholders, alignment of incentives, and sharing of risks.” In addition, the report describes the risks and incentives of public and private actors, and the principles needed for “build[ing] the legal, cultural, technological and economic infrastructures to enable the balancing of competing interests.” These principles include understanding; experimentation; adaptability; balance; persuasion and compulsion; risk management; and governance.
  • Examples of data collaboratives cited in the report include HP Earth Insights, Orange Data for Development Challenges, Amazon Web Services, IBM Smart Cities Initiative, and the Governance Lab’s Open Data 500.

Brack, Matthew, and Tito Castillo. “Data Sharing for Public Health: Key Lessons from Other Sectors.” Chatham House, Centre on Global Health Security. April 2015. Available from: http://bit.ly/1DHFGVl

  • The Chatham House report provides an overview on public health surveillance data sharing, highlighting the benefits and challenges of shared health data and the complexity in adapting technical solutions from other sectors for public health.
  • The report describes data sharing processes from several perspectives, including in-depth case studies of actual data sharing in practice at the individual, organizational and sector levels. Among the key lessons for public health data sharing, the report strongly highlights the need to harness momentum for action and maintain collaborative engagement: “Successful data sharing communities are highly collaborative. Collaboration holds the key to producing and abiding by community standards, and building and maintaining productive networks, and is by definition the essence of data sharing itself. Time should be invested in establishing and sustaining collaboration with all stakeholders concerned with public health surveillance data sharing.”
  • Examples of data collaboratives include H3Africa (a collaboration between NIH and Wellcome Trust) and NHS England’s care.data programme.

de Montjoye, Yves-Alexandre, Jake Kendall, and Cameron F. Kerry. “Enabling Humanitarian Use of Mobile Phone Data.” The Brookings Institution, Issues in Technology Innovation. November 2014. Available from: http://brook.gs/1JxVpxp

  • Using Ebola as a case study, the authors describe the value of using private telecom data for uncovering “valuable insights into understanding the spread of infectious diseases as well as strategies into micro-target outreach and driving uptake of health-seeking behavior.”
  • The authors highlight the absence of a common legal and standards framework for “sharing mobile phone data in privacy-conscientious ways” and recommend “engaging companies, NGOs, researchers, privacy experts, and governments to agree on a set of best practices for new privacy-conscientious metadata sharing models.”

Eckartz, Silja M., Hofman, Wout J., Van Veenstra, Anne Fleur. “A decision model for data sharing.” Vol. 8653 LNCS. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2014. http://bit.ly/21cGWfw.

  • This paper proposes a decision model for data sharing of public and private data based on literature review and three case studies in the logistics sector.
  • The authors identify five categories of the barriers to data sharing and offer a decision model for identifying potential interventions to overcome each barrier:
    • Ownership. Possible interventions likely require improving trust among those who own the data through, for example, involvement and support from higher management.
    • Privacy. Interventions include “anonymization by filtering of sensitive information and aggregation of data,” and access control mechanisms built around identity management and regulated access.  
    • Economic. Interventions include a model where data is shared only with a few trusted organizations, and yield management mechanisms to ensure negative financial consequences are avoided.
    • Data quality. Interventions include identifying additional data sources that could improve the completeness of datasets, and efforts to improve metadata.
    • Technical. Interventions include making data available in structured formats and publishing data according to widely agreed upon data standards.
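The privacy interventions Eckartz et al. describe — filtering out sensitive fields and aggregating data before sharing — can be sketched in a few lines. The record layout, field names, and minimum group size below are hypothetical, chosen only to illustrate the idea:

```python
from collections import defaultdict

def anonymize_shipments(records, k=5):
    """Share only region-level totals, suppressing groups smaller than k.

    Filtering drops the direct identifier ('customer'); aggregation plus a
    minimum group size reduces the risk of re-identifying any single party.
    Record format (hypothetical): dicts with 'region', 'customer', 'volume'.
    """
    groups = defaultdict(list)
    for rec in records:
        groups[rec["region"]].append(rec["volume"])
    return {
        region: {"shipments": len(vols), "total_volume": sum(vols)}
        for region, vols in groups.items()
        if len(vols) >= k  # suppress small groups that could single someone out
    }
```

The suppression threshold is the same intuition behind k-anonymity: a statistic is only released when enough records contribute to it.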

Hoffman, Sharona and Podgurski, Andy. “The Use and Misuse of Biomedical Data: Is Bigger Really Better?” American Journal of Law & Medicine 497, 2013. http://bit.ly/1syMS7J.

  • This journal article explores the benefits and, in particular, the risks related to large-scale biomedical databases bringing together health information from a diversity of sources across sectors. Some data collaboratives examined in the piece include:
    • MedMining – a company that extracts EHR data, de-identifies it, and offers it to researchers. The data sets that MedMining delivers to its customers include ‘lab results, vital signs, medications, procedures, diagnoses, lifestyle data, and detailed costs’ from inpatient and outpatient facilities.
    • Explorys has formed a large healthcare database derived from financial, administrative, and medical records. It has partnered with major healthcare organizations such as the Cleveland Clinic Foundation and Summa Health System to aggregate and standardize health information from ten million patients and over thirty billion clinical events.
  • Hoffman and Podgurski note that such biomedical databases have many potential uses, with those likely to benefit including: “researchers, regulators, public health officials, commercial entities, lawyers,” as well as “healthcare providers who conduct quality assessment and improvement activities,” regulatory monitoring entities like the FDA, and “litigants in tort cases to develop evidence concerning causation and harm.”
  • They argue, however, that such databases also carry risks:
    • The data contained in biomedical databases is surprisingly likely to be incorrect or incomplete;
    • Systemic biases, arising from both the nature of the data and the preconceptions of investigators, are serious threats to the validity of research results, especially in answering causal questions; and
    • Data mining of biomedical databases makes it easier for individuals with political, social, or economic agendas to generate ostensibly scientific but misleading research findings for the purpose of manipulating public opinion and swaying policymakers.

Krumholz, Harlan M., et al. “Sea Change in Open Science and Data Sharing Leadership by Industry.” Circulation: Cardiovascular Quality and Outcomes 7.4. 2014. 499-504. http://1.usa.gov/1J6q7KJ

  • This article provides a comprehensive overview of industry-led efforts and cross-sector collaborations in data sharing by pharmaceutical companies to inform clinical practice.
  • The article details the types of data being shared and the early activities of GlaxoSmithKline (“in coordination with other companies such as Roche and ViiV”); Medtronic and the Yale University Open Data Access Project; and Janssen Pharmaceuticals (Johnson & Johnson). The article also describes the range of involvement in data sharing among pharmaceutical companies including Pfizer, Novartis, Bayer, AbbVie, Eli Lilly, AstraZeneca, and Bristol-Myers Squibb.

Mann, Gideon. “Private Data and the Public Good.” Medium. May 17, 2016. http://bit.ly/1OgOY68.

  • This Medium post from Gideon Mann, the Head of Data Science at Bloomberg, shares his prepared remarks given at a lecture at the City College of New York. Mann argues for the potential benefits of increasing access to private sector data, both to improve research and academic inquiry and also to help solve practical, real-world problems. He also describes a number of initiatives underway at Bloomberg along these lines.
  • Mann argues that data generated at private companies “could enable amazing discoveries and research,” but is often inaccessible to those who could put it to those uses. Beyond research, he notes that corporate data could, for instance, benefit:
    • Public health – including suicide prevention, addiction counseling and mental health monitoring.
    • Legal and ethical questions – especially as they relate to “the role algorithms have in decisions about our lives,” such as credit checks and resume screening.
  • Mann recognizes the privacy challenges inherent in private sector data sharing, but argues that it is a common misconception that the only two choices are “complete privacy or complete disclosure.” He believes that flexible frameworks for differential privacy could open up new opportunities for responsibly leveraging data collaboratives.
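The differential privacy Mann refers to is a formal guarantee that released statistics reveal little about any one individual in the underlying data. A minimal sketch of the standard Laplace mechanism for a counting query follows; the dataset and threshold are invented purely for illustration:

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) noise via inverse-CDF transform.
    u = random.uniform(-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Example: a firm shares how many users exceed a usage threshold,
# without revealing any individual's exact value.
usage = [3, 18, 25, 7, 41, 12]
noisy = dp_count(usage, lambda x: x > 10, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; this trade-off is what makes such frameworks “flexible” in the sense Mann describes.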

Pastor Escuredo, D., Morales-Guzmán, A., et al. “Flooding through the Lens of Mobile Phone Activity.” IEEE Global Humanitarian Technology Conference, GHTC 2014. Available from: http://bit.ly/1OzK2bK

  • This report describes the use of mobile data to understand the impact of disasters and improve disaster management. The study was conducted in the Mexican state of Tabasco in 2009 by a multidisciplinary, multi-stakeholder consortium involving the UN World Food Programme (WFP), Telefonica Research, Technical University of Madrid (UPM), Digital Strategy Coordination Office of the President of Mexico, and UN Global Pulse.
  • Telefonica Research, a division of the major Latin American telecommunications company, provided call detail records covering flood-affected areas for nine months. This data was combined with “remote sensing data (satellite images), rainfall data, census and civil protection data.” The analysis demonstrated that “analysing mobile activity during floods could be used to potentially locate damaged areas, efficiently assess needs and allocate resources (for example, sending supplies to affected areas).”
  • In addition to the results, the study highlighted “the value of a public-private partnership on using mobile data to accurately indicate flooding impacts in Tabasco, thus improving early warning and crisis management.”

Perkmann, M. and Schildt, H. “Open data partnerships between firms and universities: The role of boundary organizations.” Research Policy, 44(5), 2015. http://bit.ly/25RRJ2c

  • This paper discusses the concept of a “boundary organization” in relation to industry-academic partnerships driven by data. Boundary organizations perform mediated revealing, allowing firms to disclose their research problems to a broad audience of innovators and simultaneously minimize the risk that this information would be adversely used by competitors.
  • The authors identify two especially important challenges for private firms to enter open data or participate in data collaboratives with the academic research community that could be addressed through more involvement from boundary organizations:
    • First is a challenge of maintaining competitive advantage. The authors note that, “the more a firm attempts to align the efforts in an open data research programme with its R&D priorities, the more it will have to reveal about the problems it is addressing within its proprietary R&D.”
    • Second is the misalignment of incentives between the private and academic fields. Perkmann and Schildt argue that a firm seeking to build collaborations around its opened data “will have to provide suitable incentives that are aligned with academic scientists’ desire to be rewarded for their work within their respective communities.”

Robin, N., Klein, T., & Jütting, J. “Public-Private Partnerships for Statistics: Lessons Learned, Future Steps.” OECD. 2016. http://bit.ly/24FLYlD.

  • This working paper acknowledges the growing body of work on how different types of data (e.g., telecom data, social media, sensors and geospatial data, etc.) can address data gaps relevant to National Statistical Offices (NSOs).
  • Four models of public-private interaction for statistics are described: in-house production of statistics by a data provider for a national statistics office (NSO); transfer of datasets from private entities to NSOs; transfer of data to a third-party provider to manage the NSO and private-entity data; and the outsourcing of NSO functions.
  • The paper highlights challenges to public-private partnerships involving data (e.g., technical challenges, data confidentiality, risks, limited incentives for participation); suggests that deliberate, highly structured approaches to such partnerships require enforceable contracts; and emphasizes the trade-off between data specificity and accessibility, as well as the importance of pricing mechanisms that reflect the capacity and capability of national statistical offices.
  • Case studies referenced in the paper include:
    • In-house analysis of call detail records by a mobile network operator (Telefonica);
    • A third-party data provider and steward of travel statistics (Positium);
    • The Data for Development (D4D) challenge organized by MNO Orange; and
    • Statistics Netherlands’ use of social media to predict consumer confidence.

Stuart, Elizabeth, Samman, Emma, Avis, William, Berliner, Tom. “The data revolution: finding the missing millions.” Overseas Development Institute, 2015. Available from: http://bit.ly/1bPKOjw

  • The authors of this report highlight the need for good quality, relevant, accessible and timely data for governments to extend services into underrepresented communities and implement policies towards a sustainable “data revolution.”
  • The solutions offered in this report focus on capacity-building activities of national statistical offices (NSOs), alternative sources of data (including shared corporate data) to address gaps, and building strong data management systems.

Taylor, L., & Schroeder, R. “Is bigger better? The emergence of big data as a tool for international development policy.” GeoJournal, 80(4). 2015. 503-518. http://bit.ly/1RZgSy4.

  • This journal article describes how privately held data – namely “digital traces” of consumer activity – “are becoming seen by policymakers and researchers as a potential solution to the lack of reliable statistical data on lower-income countries.”
  • They focus especially on three categories of data collaborative use cases:
    • Mobile data as a predictive tool for issues such as human mobility and economic activity;
    • Use of mobile data to inform humanitarian response to crises; and
    • Use of born-digital web data as a tool for predicting economic trends, and the implications these have for LMICs.
  • They note, however, that a number of challenges and drawbacks exist for these types of use cases, including:
    • Access to private data sources often must be negotiated or bought, “which potentially means substituting negotiations with corporations for those with national statistical offices;”
    • The meaning of such data is not always simple or stable, and local knowledge is needed to understand how people are using the technologies in question;
    • Bias in proprietary data can be hard to understand and quantify;
    • Lack of privacy frameworks; and
    • Power asymmetries, wherein “LMIC citizens are unwittingly placed in a panopticon staffed by international researchers, with no way out and no legal recourse.”

van Panhuis, Willem G., Proma Paul, Claudia Emerson, John Grefenstette, Richard Wilder, Abraham J. Herbst, David Heymann, and Donald S. Burke. “A systematic review of barriers to data sharing in public health.” BMC public health 14, no. 1 (2014): 1144. Available from: http://bit.ly/1JOBruO

  • The authors of this report provide a “systematic literature review of potential barriers to public health data sharing.” These twenty potential barriers are classified in six categories: “technical, motivational, economic, political, legal and ethical.” In this taxonomy, “the first three categories are deeply rooted in well-known challenges of health information systems for which structural solutions have yet to be found; the last three have solutions that lie in an international dialogue aimed at generating consensus on policies and instruments for data sharing.”
  • The authors suggest the need for a “systematic framework of barriers to data sharing in public health” in order to accelerate access and use of data for public good.

Verhulst, Stefaan and Sangokoya, David. “Mapping the Next Frontier of Open Data: Corporate Data Sharing.” In: Gasser, Urs and Zittrain, Jonathan and Faris, Robert and Heacock Jones, Rebekah, “Internet Monitor 2014: Reflections on the Digital World: Platforms, Policy, Privacy, and Public Discourse (December 15, 2014).” Berkman Center Research Publication No. 2014-17. http://bit.ly/1GC12a2

  • This essay describes a taxonomy of current corporate data sharing practices for public good: research partnerships; prizes and challenges; trusted intermediaries; application programming interfaces (APIs); intelligence products; and corporate data cooperatives or pooling.
  • Examples of data collaboratives include: Yelp Dataset Challenge, the Digital Ecologies Research Partnership, BBVA Innova Challenge, Telecom Italia’s Big Data Challenge, NIH’s Accelerating Medicines Partnership and the White House’s Climate Data Partnerships.
  • The authors highlight important questions to consider towards a more comprehensive mapping of these activities.

Verhulst, Stefaan and Sangokoya, David, 2015. “Data Collaboratives: Exchanging Data to Improve People’s Lives.” Medium. Available from: http://bit.ly/1JOBDdy

  • The essay refers to data collaboratives as a new form of collaboration involving participants from different sectors exchanging data to help solve public problems. These forms of collaborations can improve people’s lives through data-driven decision-making; information exchange and coordination; and shared standards and frameworks for multi-actor, multi-sector participation.
  • The essay cites four activities that are critical to accelerating data collaboratives: documenting value and measuring impact; matching public demand and corporate supply of data in a trusted way; training and convening data providers and users; experimenting and scaling existing initiatives.
  • Examples of data collaboratives include NIH’s Precision Medicine Initiative; the Mobile Data, Environmental Extremes and Population (MDEEP) Project; and Twitter-MIT’s Laboratory for Social Machines.

Verhulst, Stefaan, Susha, Iryna, Kostura, Alexander. “Data Collaboratives: Matching Supply of (Corporate) Data to Solve Public Problems.” Medium. February 24, 2016. http://bit.ly/1ZEp2Sr.

  • This piece articulates a set of key lessons learned during a session at the International Data Responsibility Conference focused on identifying emerging practices, opportunities and challenges confronting data collaboratives.
  • The authors list a number of privately held data sources that could create positive public impacts if made more accessible in a collaborative manner, including:
    • Data for early warning systems to help mitigate the effects of natural disasters;
    • Data to help understand human behavior as it relates to nutrition and livelihoods in developing countries;
    • Data to monitor compliance with weapons treaties;
    • Data to more accurately measure progress related to the UN Sustainable Development Goals.
  • To the end of identifying and expanding on emerging practice in the space, the authors describe a number of current data collaborative experiments, including:
    • Trusted Intermediaries: Statistics Netherlands partnered with Vodafone to analyze mobile call data records in order to better understand mobility patterns and inform urban planning.
    • Prizes and Challenges: Orange Telecom, which has been a leader in this type of Data Collaboration, provided several examples of the company’s initiatives, such as the use of call data records to track the spread of malaria as well as their experience with Challenge 4 Development.
    • Research partnerships: The Data for Climate Action project is an ongoing large-scale initiative incentivizing companies to share their data to help researchers answer particular scientific questions related to climate change and adaptation.
    • Sharing intelligence products: JPMorgan Chase shares macroeconomic insights it gained leveraging its data through the newly established JPMorgan Chase Institute.
  • In order to capitalize on the opportunities provided by data collaboratives, a number of needs were identified:
    • A responsible data framework;
    • Increased insight into different business models that may facilitate the sharing of data;
    • Capacity to tap into the potential value of data;
    • Transparent stock of available data supply; and
    • Mapping emerging practices and models of sharing.

Vogel, N., Theisen, C., Leidig, J. P., Scripps, J., Graham, D. H., & Wolffe, G. “Mining mobile datasets to enable the fine-grained stochastic simulation of Ebola diffusion.” Paper presented at the Procedia Computer Science. 2015. http://bit.ly/1TZDroF.

  • The paper presents a research study conducted on the basis of the mobile call records shared with researchers in the framework of the Data for Development Challenge by the mobile operator Orange.
  • The study discusses the data analysis approach in relation to developing a simulation of Ebola diffusion built around “the interactions of multi-scale models, including viral loads (at the cellular level), disease progression (at the individual person level), disease propagation (at the workplace and family level), societal changes in migration and travel movements (at the population level), and mitigating interventions (at the abstract government policy level).”
  • The authors argue that the use of their population, mobility, and simulation models provide more accurate simulation details in comparison to high-level analytical predictions and that the D4D mobile datasets provide high-resolution information useful for modeling developing regions and hard to reach locations.
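As an illustration of the stochastic, compartment-based approach such simulations build on — not the paper’s multi-scale, D4D-calibrated system — a toy single-population SIR model might look like the following; all parameter values are invented:

```python
import random

def simulate_sir(population, beta, gamma, initial_infected, days, seed=0):
    """Minimal stochastic SIR sketch with daily binomial-style draws.

    beta is the per-day transmission parameter, gamma the per-day recovery
    probability. This toy model has none of the mobility or policy layers
    described in the paper; it only shows the stochastic compartmental core.
    """
    rng = random.Random(seed)
    s, i, r = population - initial_infected, initial_infected, 0
    history = [(s, i, r)]
    for _ in range(days):
        # Each susceptible is infected with prob. 1 - (1 - beta/N)^I.
        p_inf = 1.0 - (1.0 - beta / population) ** i
        new_inf = sum(1 for _ in range(s) if rng.random() < p_inf)
        new_rec = sum(1 for _ in range(i) if rng.random() < gamma)
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = simulate_sir(population=1000, beta=0.3, gamma=0.1,
                    initial_infected=5, days=60)
```

Mobility data such as the D4D call records enters models like this by splitting the population into regions and letting the inferred travel flows couple the regional epidemics.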

Welle Donker, F., van Loenen, B., & Bregt, A. K. “Open Data and Beyond.” ISPRS International Journal of Geo-Information, 5(4). 2016. http://bit.ly/22YtugY.

  • This research develops a monitoring framework to assess the effects of open (private) data, using a case study of the Dutch energy network administrator Liander.
  • Focusing on the potential impacts of open private energy data – beyond ‘smart disclosure’ where citizens are given information only about their own energy usage – the authors identify three attainable strategic goals:
    • Continuously optimize performance on services, security of supply, and costs;
    • Improve management of energy flows and insight into energy consumption;
    • Help customers save energy and switch over to renewable energy sources.
  • The authors propose a seven-step framework for assessing the impacts of Liander data, in particular, and open private data more generally:
    • Develop a performance framework to describe what the program is about, including a description of the organization’s mission and strategic goals;
    • Identify the most important elements, or key performance areas which are most critical to understanding and assessing your program’s success;
    • Select the most appropriate performance measures;
    • Determine the gaps between what information you need and what is available;
    • Develop and implement a measurement strategy to address the gaps;
    • Develop a performance report which highlights what you have accomplished and what you have learned;
    • Learn from your experiences and refine your approach as required.
  • While the authors note that the true impacts of this open private data will likely not come into view in the short term, they argue that, “Liander has successfully demonstrated that private energy companies can release open data, and has successfully championed the other Dutch network administrators to follow suit.”

World Economic Forum, 2015. “Data-driven development: pathways for progress.” Geneva: World Economic Forum. http://bit.ly/1JOBS8u

  • This report captures an overview of the existing data deficit and the value and impact of big data for sustainable development.
  • The authors of the report focus on four main priorities towards a sustainable data revolution: commercial incentives and trusted agreements with public- and private-sector actors; the development of shared policy frameworks, legal protections and impact assessments; capacity building activities at the institutional, community, local and individual level; and lastly, recognizing individuals as both producers and consumers of data.

Value and Vulnerability: The Internet of Things in a Connected State Government


Press release: “The National Association of State Chief Information Officers (NASCIO) today released a policy brief on the Internet of Things (IoT) in state government. The paper focuses on the different ways state governments are using IoT now and in the future and the policy considerations involved.

“In NASCIO’s 2015 State CIO Survey, we asked state CIOs to what extent IoT was on their agenda. Just over half said they were in informal discussions, however only one in five had moved to the formal discussion phase. We believe IoT needs to be a formal part of each state’s policy considerations,” explained NASCIO Executive Director Doug Robinson.

The paper encourages state CIOs to make IoT part of the enterprise architecture discussions on asset management and risk assessment and to develop an IoT roadmap.

“Cities and municipalities have been working toward the designation of ‘smart city’ for a while now,” said Darryl Ackley, cabinet secretary for the New Mexico Department of Information Technology and NASCIO president. “While states provide different services than cities, we are seeing a lot of activity around IoT to improve citizen services and we see great potential for growth. The more organized and methodical states can be about implementing IoT, the more successful and useful the outcomes.”

Read the policy brief at www.NASCIO.org/ValueAndVulnerability.”

Twiplomacy Study 2016


Executive Summary: “Social media has become diplomacy’s significant other. It has gone from being an afterthought to being the very first thought of world leaders and governments across the globe, as audiences flock to their newsfeeds for the latest news. This recent worldwide embrace of online channels has brought with it a wave of openness and transparency that has never been experienced before. Social media provides a platform for unconditional communication, and has become a communicator’s most powerful tool. Twitter, in particular, has even become a diplomatic ‘barometer’, a tool used to analyze and forecast international relations.

There is a vast array of social networks for government communicators to choose from. While some governments and foreign ministries still ponder the pros and cons of any social media engagement, others have gone beyond Twitter, Facebook and Instagram to reach their target audiences, even embracing emerging platforms such as Snapchat, WhatsApp and Telegram where communications are under the radar and almost impossible to track.

Burson-Marsteller’s 2016 Twiplomacy study has been expanded to include other social media platforms such as Facebook, Instagram and YouTube, as well as more niche digital diplomacy platforms such as Snapchat, LinkedIn, Google+, Periscope and Vine.

There is a growing digital divide between governments that are active on social media with dedicated teams and those that see digital engagement as an afterthought and so devote few resources to it. There is still a small number of government leaders who refuse to embrace the new digital world and, for these few, their community managers struggle to bring their organizations into the digital century.

Over the past year, the most popular world leaders on social media have continued to increase their audiences, while new leaders have emerged in the Twittersphere. Argentina’s Mauricio Macri, Canada’s Justin Trudeau and U.S. President Barack Obama have all made a significant impact on Twitter and Facebook over the past year.

Obama’s social media communication has become even more personal through his @POTUS Twitter account and Facebook page, and the first “president of the social media age” will leave the White House in January 2017 with an incredible 137 million fans, followers and subscribers. Beyond merely Twitter and Facebook, world leaders such as the Argentinian President have also become active on new channels like Snapchat to reach a younger audience and potential future voters. Similarly, a number of governments, mainly in Latin America, have started to use Periscope, a cost-effective medium to live-stream their press conferences.

We have witnessed occasional public interactions between leaders, namely the friendly fighting talk between the Obamas, the Queen of England and Canada’s Justin Trudeau. Foreign ministries continue to expand their diplomatic and digital networks by following each other and creating coalitions on specific topics, in particular the fight against ISIS….

A number of world leaders, including the President of Colombia and Australia’s Julie Bishop, also use emojis to brighten up their tweets, creating what can be described as a new diplomatic sign language. The Foreign Ministry in Finland has even produced its own set of 49 emoticons depicting summer and winter in the Nordic country.

We asked a number of digital leaders of some of the best connected foreign ministries and governments to share their thoughts on their preferred social media channel and examples of their best campaigns on our blog. You will learn:

Here is our list of the #Twiplomacy Top Twenty Twitterati in 2016….(More)”

Improving patient care by bridging the divide between doctors and data scientists


At The Conversation: “While wonderful new medical discoveries and innovations are in the news every day, doctors struggle daily with using information and techniques available right now while carefully adopting new concepts and treatments. As a practicing doctor, I deal with uncertainties and unanswered clinical questions all the time…. At the moment, a report from the National Academy of Medicine tells us, most doctors base most of their everyday decisions on guidelines from (sometimes biased) expert opinions or small clinical trials. It would be better if they were from multicenter, large, randomized controlled studies, with tightly controlled conditions ensuring the results are as reliable as possible. However, those are expensive and difficult to perform, and even then often exclude a number of important patient groups on the basis of age, disease and sociological factors.

Part of the problem is that health records are traditionally kept on paper, making them hard to analyze en masse. As a result, most of what medical professionals might have learned from experiences was lost – or at least was inaccessible to another doctor meeting with a similar patient.

A digital system would collect and store as much clinical data as possible from as many patients as possible. It could then use information from the past – such as blood pressure, blood sugar levels, heart rate and other measurements of patients’ body functions – to guide future doctors to the best diagnosis and treatment of similar patients.
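The approach described above — using the recorded vitals of past patients to inform the care of a similar new patient — can be illustrated with a minimal sketch. The records, values, and function names below are hypothetical, not drawn from any real clinical database; the sketch simply retrieves the most similar past patients by Euclidean distance over a few vital signs.

```python
import math

# Hypothetical patient records: (id, blood_pressure, blood_sugar, heart_rate, treatment)
# All values are illustrative, not real clinical data.
records = [
    ("p1", 120, 90, 72, "treatment_a"),
    ("p2", 150, 180, 95, "treatment_b"),
    ("p3", 125, 95, 70, "treatment_a"),
    ("p4", 155, 170, 100, "treatment_b"),
]

def similar_patients(query, k=2):
    """Return the k past records closest to `query` by Euclidean distance over vitals."""
    def dist(rec):
        return math.dist(query, rec[1:4])  # compare only the three vital-sign fields
    return sorted(records, key=dist)[:k]

# Vitals of a new patient: blood pressure, blood sugar, heart rate
nearest = similar_patients((122, 92, 71))
print([r[0] for r in nearest])  # → ['p1', 'p3']
```

A real system would of course use far richer features and a more principled similarity model, but the retrieval step is the core idea: let past patients who resemble the current one guide diagnosis and treatment.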

Industrial giants such as Google, IBM, SAP and Hewlett-Packard have also recognized the potential for this kind of approach, and are now working on how to leverage population data for the precise medical care of individuals.

Collaborating on data and medicine

At the Laboratory of Computational Physiology at the Massachusetts Institute of Technology, we have begun to collect large amounts of detailed patient data in the Medical Information Mart in Intensive Care (MIMIC). It is a database containing information from 60,000 patient admissions to the intensive care units of the Beth Israel Deaconess Medical Center, a Boston teaching hospital affiliated with Harvard Medical School. The data in MIMIC has been meticulously scoured so individual patients cannot be recognized, and is freely shared online with the research community.

But the database itself is not enough. We bring together front-line clinicians (such as nurses, pharmacists and doctors) to identify questions they want to investigate, and data scientists to conduct the appropriate analyses of the MIMIC records. This gives caregivers and patients the best individualized treatment options in the absence of a randomized controlled trial.

Bringing data analysis to the world

At the same time, we are working to bring these data-enabled medical decision-support systems to countries with limited health care resources, where research is considered an expensive luxury. Often these countries have few or no medical records – even on paper – to analyze. We can help them collect health data digitally, creating the potential to significantly improve medical care for their populations.

This task is the focus of Sana, a collection of technical, medical and community experts from across the globe that is also based in our group at MIT. Sana has designed a digital health information system specifically for use by health providers and patients in rural and underserved areas.

At its core is an open-source system that uses cellphones – common even in poor and rural nations – to collect, transmit and store all sorts of medical data. It can handle not only basic patient data such as height and weight, but also photos and X-rays, ultrasound videos, and electrical signals from a patient’s brain (EEG) and heart (ECG).

Partnering with universities and health organizations, Sana organizes training sessions (which we call “bootcamps”) and collaborative workshops (called “hackathons”) to connect nurses, doctors and community health workers at the front lines of care with technology experts in or near their communities. In 2015, we held bootcamps and hackathons in Colombia, Uganda, Greece and Mexico. The bootcamps teach students in technical fields like computer science and engineering how to design and develop health apps that can run on cellphones. Immediately following the bootcamp, the medical providers join the group and the hackathon begins…At the end of the day, though, the purpose is not the apps….(More)

How to implement “open innovation” in city government


Victor Mulas at the Worldbank: “City officials are facing increasingly complex challenges. As urbanization rates grow, cities face higher demand for services from a larger and more densely distributed population. On the other hand, rapid changes in the global economy are affecting cities that struggle to adapt to these changes, often resulting in economic depression and population drain.

“Open innovation” is the latest buzz word circulating in forums on how to address the increased volume and complexity of challenges for cities and governments in general.

But, what is open innovation?

Traditionally, public services were designed and implemented by a group of public officials. Open innovation allows us to design these services with multiple actors, including those who stand to benefit from them, resulting in more targeted and better-tailored services, often implemented in partnership with these stakeholders. Open innovation allows cities to be more productive in providing services while addressing increased demand and the higher complexity of the services to be delivered.

New York, Barcelona, Amsterdam and many other cities have been experimenting with this concept, introducing challenges for entrepreneurs to address common problems or inviting stakeholders to co-create new services.   Open innovation has gone from being a “buzzword” to another tool in the city officials’ toolbox.

However, even cities that embrace open innovation are still struggling to implement it beyond a few specific areas.  This is understandable, as introducing open innovation practically requires a new way of doing things for city governments, which tend to be complex and bureaucratic organizations.

Having an engaged mayor is not enough to bring about this kind of transformation. Changing the behavior of city officials requires their buy-in; it can’t be done top-down.

We have been introducing open innovation to cities and governments for the last three years in Chile, Colombia, Egypt and Mozambique. We have addressed specific challenges and iteratively designed and tested a systematic methodology to introduce open innovation in government through both top-down and bottom-up approaches. We have tested this methodology in Colombia (Cali, Barranquilla and Manizales) and Chile (metropolitan area of Gran Concepción). We have identified “internal champions” (i.e., government officials who advocate the new methodology) and external stakeholders organized in an “innovation hub” that provides long-term sustainability and scalability of interventions. We believe that this methodology is easily applicable beyond cities to other government entities at the regional and national levels.

…To understand how the methodology works in practice, we describe in this report the process and its results as applied in the city area of Gran Concepción, in Chile. For this activity, the urban transport sector was selected, and the targets of intervention were the regional and municipal government departments in charge of urban transport in the area of Gran Concepción. The activity in Chile resulted in a threefold impact:

  1. It catalyzed the adoption of the bottom-up smart city model following this new methodology throughout Chile; and
  2. It expanded the implementation and mainstreaming of the methodologies developed and tested through this activity in other World Bank projects.

More information about this activity in Chile can be found in the Smart City Gran Concepcion webpage…(More)”

Open data + increased disclosure = better public-private partnerships


David Bloomgarden and Georg Neumann at Fomin Blog: “The benefits of open and participatory public procurement are increasingly being recognized by international bodies such as the Group of 20 major economies, the Organisation for Economic Co-operation and Development, and multilateral development banks. Value for money, more competition, and better goods and services for citizens all result from increased disclosure of contract data. Greater openness is also an effective tool to fight fraud and corruption.

However, because public-private partnerships (PPPs) are planned over a long timeframe and involve a large number of groups, implementing greater levels of openness in disclosure is complicated. This complexity can be a challenge to good design. Finding a structured and transparent approach to managing PPP contract data is fundamental for a project to be accepted and used by its local community….

In open contracting, all data is disclosed during the public procurement process—from the planning stage, to the bidding and awarding of the contract, to the monitoring of the implementation. A global open source data standard is used to publish that data, which is already being implemented in countries as diverse as Canada, Paraguay, and Ukraine. Using open data throughout the contracting process provides opportunities to innovate in managing bids, fixing problems, and integrating feedback as needed. Open contracting contributes to the overall social and environmental sustainability of infrastructure investments.

In the case of Mexico’s airport, the project publishes details of awarded contracts, including visualizing the flow of funds and detailing the full amounts of awarded contracts and renewable agreements. Standardized, timely, and open data that follow global standards such as the Open Contracting Data Standard will make this information useful for analysis of value for money, cost-benefit, sustainability, and monitoring performance. Crucially, open contracting will shift the focus from the inputs into a PPP, to the outputs: the goods and services being delivered.
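The Open Contracting Data Standard mentioned above publishes contract data as structured JSON releases, which makes analyses like totalling awarded amounts straightforward. Below is a minimal sketch using a hypothetical OCDS-style release; the `ocid` and award values are invented for illustration, and only a small subset of the standard’s fields is shown.

```python
import json

# A minimal OCDS-style release (illustrative values; the real standard
# defines many more fields, such as parties, tender, and contracts).
release = json.loads("""
{
  "ocid": "ocds-example-0001",
  "tag": ["award"],
  "awards": [
    {"id": "a1", "value": {"amount": 1500000, "currency": "MXN"}},
    {"id": "a2", "value": {"amount": 750000, "currency": "MXN"}}
  ]
}
""")

# Total value of awarded contracts in this release
total = sum(a["value"]["amount"] for a in release["awards"])
print(total)  # 2250000
```

Because every publisher uses the same field names, the same few lines of analysis work across releases from Canada, Paraguay, Ukraine, or Mexico’s airport project alike — which is precisely what makes standardized disclosure useful for monitoring value for money.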

Benefits of open data for PPPs

We think that better and open data will lead to better PPPs. Here’s how:

1. Using user feedback to fix problems

The Brazilian state of Minas Gerais has been a leader in transparent PPP contracts with full proactive disclosure of the contract terms, as well as of other relevant project information—a practice that puts a government under more scrutiny but makes for better projects in the long run.

According to Marcos Siqueira, former head of the PPP Unit in Minas Gerais, “An adequate transparency policy can provide enough information to users so they can become contract watchdogs themselves.”

For example, a public-private contract was signed in 2014 to build a $300 million waste treatment plant for 2.5 million people in the metropolitan area of Belo Horizonte, the capital of Minas Gerais. As the team members conducted appraisals, they disclosed them on the Internet. In addition, the team held around 20 public meetings and identified all the stakeholders in the project. One notable result of the sharing and discussion of this information was the relocation of the facility to a less-populated area. When the project went to the bidding phase, it was much closer to the expectations of its various stakeholders.

2. Making better decisions on contracts and performance

Chile has been a leader in developing PPPs (which it refers to as concessions) for several decades, in a range of sectors: urban and inter-urban roads, seaports, airports, hospitals, and prisons. The country tops the list for the best enabling environment for PPPs in Latin America and the Caribbean, as measured by Infrascope, an index produced by the Economist Intelligence Unit and the Multilateral Investment Fund of the IDB Group.

Chile’s distinction is that it discloses information on performance of PPPs that are underway. The government’s Concessions Unit regularly publishes summaries of the projects during their different phases, including construction and operation. The reports are non-technical, yet include all the necessary information to understand the scope of the project…(More)”

Case Studies of Government Use of Big Data in Latin America: Brazil and Mexico


Chapter by Roberto da Mota Ueti, Daniela Fernandez Espinosa, Laura Rafferty, Patrick C. K. Hung in Big Data Applications and Use Cases: “Big Data is changing our world with masses of information stored in huge servers spread across the planet. This new technology is changing not only companies but governments as well. Mexico and Brazil, two of the most influential countries in Latin America, are entering a new era and, as a result, facing challenges in all aspects of public policy. Using Big Data, the Brazilian Government is trying to decrease spending and use public money better by grouping public information with stored information on citizens in public services. With new reforms in education, finances and telecommunications, the Mexican Government is taking on a bigger role in efforts to channel the country’s economic policy into an improvement of the quality of life of its inhabitants. Technology is an important tool for developing countries, which are trying to make a difference in certain contexts, such as reducing inequality or regulating the good usage of economic resources. The good use of Big Data, a new technology for managing large quantities of information, can be crucial for the Mexican Government to reach the goals that have been set under Peña Nieto’s administration. This article focuses on how the Brazilian and Mexican Governments are managing the emerging technologies of Big Data and how they include them in social and industrial projects to enhance the growth of their economies. The article also discusses the benefits of these uses of Big Data and the possible problems related to security and privacy of information….(More)”

Regulatory Transformations: An Introduction


Chapter by Bettina Lange and Fiona Haines in the book Regulatory Transformations: “Regulation is no longer the prerogative of either states or markets. Increasingly citizens in association with businesses catalyse regulation which marks the rise of a social sphere in regulation. Around the world, in San Francisco, Melbourne, Munich and Mexico City, citizens have sought to transform how and to what end economic transactions are conducted. For instance, ‘carrot mob’ initiatives use positive economic incentives, not provided by a state legal system, but by a collective of civil society actors in order to change business behaviour. In contrast to ‘negative’ consumer boycotts, ‘carrot mob’ events use ‘buycotts’. They harness competition between businesses as the lever for changing how and for what purpose business transactions are conducted. Through new social media, ‘carrot mobs’ mobilize groups of citizens to purchase goods at a particular time in a specific shop. The business that promises to spend the greatest percentage of its takings on, for instance, environmental improvements, such as switching to a supplier of renewable energy, will be selected for an organized shopping spree and financially benefit from the extra income it receives from the ‘carrot mob’ event. ‘Carrot mob’ campaigns chime with other fundamental challenges to conventional economic activity, such as the shared use of consumer goods through citizens’ collective consumption, which questions traditional conceptions of private property….(More; Other Chapters)”

 

Offshore Leaks Database


“This ICIJ database contains information on almost 320,000 offshore entities that are part of the Panama Papers and the Offshore Leaks investigations. The data covers nearly 40 years up to the end of 2015 and links to people and companies in more than 200 countries and territories.

DISCLAIMER

There are legitimate uses for offshore companies and trusts. We do not intend to suggest or imply that any persons, companies or other entities included in the ICIJ Offshore Leaks Database have broken the law or otherwise acted improperly. Many people and entities have the same or similar names. We suggest you confirm the identities of any individuals or entities located in the database based on addresses or other identifiable information. If you find an error in the database please get in touch with us….(More)”

Citizen scientists aid Ecuador earthquake relief


Mark Zastrow at Nature: “After a magnitude-7.8 earthquake struck Ecuador’s Pacific coast on 16 April, a new ally joined the international relief effort: a citizen-science network called Zooniverse.

On 25 April, Zooniverse launched a website that asks volunteers to analyse rapidly snapped satellite imagery of the disaster, which led to more than 650 reported deaths and 16,000 injuries. The aim is to help relief workers on the ground to find the most heavily damaged regions and identify which roads are passable.

Several crisis-mapping programmes with thousands of volunteers already exist — but it can take days to train satellites on the damaged region and to transmit data to humanitarian organizations, and results have not always proven useful. The Ecuador quake marked the first live public test for an effort dubbed the Planetary Response Network (PRN), which promises to be both more nimble than previous efforts, and to use more rigorous machine-learning algorithms to evaluate the quality of crowd-sourced analyses.

The network relies on imagery from the satellite company Planet Labs in San Francisco, California, which uses an array of shoebox-sized satellites to map the planet. In order to speed up the crowd-sourced process, it uses the Zooniverse platform to distribute the tasks of spotting features in satellite images. Machine-learning algorithms employed by a team at the University of Oxford, UK, then classify the reliability of each volunteer’s analysis and weight their contributions accordingly.
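The idea of weighting each volunteer’s contribution by their estimated reliability can be sketched very simply. The weights and labels below are invented for illustration — this is a generic weighted majority vote, not the Oxford team’s actual algorithm, whose reliability estimates come from machine learning over volunteers’ past classifications.

```python
# Hypothetical per-volunteer reliability weights (e.g., learned from how often
# each volunteer previously agreed with the consensus).
weights = {"vol_a": 0.9, "vol_b": 0.6, "vol_c": 0.3}

# Each volunteer's classification of one satellite-image tile.
labels = {"vol_a": "damaged", "vol_b": "damaged", "vol_c": "intact"}

def weighted_vote(labels, weights):
    """Combine volunteer labels, giving more say to more reliable volunteers."""
    scores = {}
    for vol, label in labels.items():
        scores[label] = scores.get(label, 0.0) + weights[vol]
    return max(scores, key=scores.get)

print(weighted_vote(labels, weights))  # → damaged
```

Aggregating many such tiles yields the kind of damage “heat map” described below, with unreliable classifications contributing little to the final picture.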

Rapid-fire data

Within 2 hours of the Ecuador test project going live with a first set of 1,300 images, each photo had been checked at least 20 times. “It was one of the fastest responses I’ve seen,” says Brooke Simmons, an astronomer at the University of California, San Diego, who leads the image processing. Steven Reece, who heads the Oxford team’s machine-learning effort, says that results — a “heat map” of damage with possible road blockages — were ready in another two hours.

In all, more than 2,800 Zooniverse users contributed to analysing roughly 25,000 square kilometres of imagery centred around the coastal cities of Pedernales and Bahia de Caraquez. That is where the London-based relief organization Rescue Global — which requested the analysis the day after the earthquake — currently has relief teams on the ground, including search dogs and medical units….(More)”