A New Model for Industry-Academic Partnerships


Working Paper by Gary King and Nathaniel Persily: “The mission of the academic social sciences is to understand and ameliorate society’s greatest challenges. The data held by private companies hold vast potential to further this mission. Yet, because of its interaction with highly politicized issues, customer privacy, proprietary content, and differing goals of firms and academics, these data are often inaccessible to university researchers.

We propose here a new model for industry-academic partnerships that addresses these problems via a novel organizational structure: Respected scholars form a commission which, as a trusted third party, receives access to all relevant firm information and systems, and then recruits independent academics to do research in specific areas following standard peer review protocols organized and funded by nonprofit foundations.

We also report on a partnership we helped forge under this model to make data available about the extremely visible and highly politicized issues surrounding the impact of social media on elections and democracy. In our partnership, Facebook will provide privacy-preserving data and access; seven major politically and substantively diverse nonprofit foundations will fund the research; and the Social Science Research Council will oversee the peer review process for funding and data access….(More)”.

Prediction, Judgment and Complexity


NBER Working Paper by Agrawal, Ajay and Gans, Joshua S. and Goldfarb, Avi: “We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction in decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs. We label this process of determining the payoffs ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depend a great deal on whether the improvements in prediction enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries….(More)”.
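The abstract's framing of prediction and judgment as complements can be illustrated with a loose toy rendering (not the paper's formal model; all states, payoffs, and probabilities here are hypothetical). Judgment is modeled as a per-state cost of working out the risky payoff, and prediction as a belief distribution that concentrates on fewer states:

```python
# Toy illustration of the prediction/judgment framing (all numbers hypothetical).
# A decision-maker chooses a safe action (known payoff) or a risky action whose
# payoff depends on an unknown state. "Judgment" means paying a cost to work out
# the risky payoff in a state; "prediction" narrows which states need judging.

SAFE_PAYOFF = 1.0
JUDGMENT_COST = 0.2

# Risky payoff in each state (unknown until judgment is applied to that state).
risky_payoff = {"boom": 3.0, "normal": 1.5, "bust": -2.0}

def expected_value(beliefs, judged_states):
    """Expected payoff of the best action, net of judgment costs, when only
    the states in `judged_states` have had their payoffs worked out."""
    cost = JUDGMENT_COST * len(judged_states)
    # For unjudged states the decision-maker falls back on the safe action.
    ev_risky = sum(
        p * (risky_payoff[s] if s in judged_states else SAFE_PAYOFF)
        for s, p in beliefs.items()
    )
    return max(ev_risky, SAFE_PAYOFF) - cost

# Without prediction: all three states are plausible, so judging every one
# of them is costly -- in complex environments with many states, even more so.
prior = {"boom": 1/3, "normal": 1/3, "bust": 1/3}
print(expected_value(prior, judged_states=set(risky_payoff)))

# With a good prediction: probability mass concentrates on one state, so
# judgment is only worth applying there -- prediction and judgment complement.
posterior = {"boom": 0.9, "normal": 0.05, "bust": 0.05}
print(expected_value(posterior, judged_states={"boom"}))
```

The second expected value exceeds the first: better prediction raises the return to a small amount of well-targeted judgment, echoing the paper's point that the two are complements when judgment is not too difficult.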

Blockchain: Unpacking the disruptive potential of blockchain technology for human development.


IDRC white paper: “In the scramble to harness new technologies to propel innovation around the world, artificial intelligence, robotics, machine learning, and blockchain technologies are being explored and deployed in a wide variety of contexts globally.

Although blockchain is one of the most hyped of these new technologies, it is also perhaps the least understood. Blockchain is the distributed ledger — a database that is shared across multiple sites or institutions to furnish a secure and transparent record of events occurring during the provision of a service or contract — that supports cryptocurrencies (digital assets designed to work as mediums of exchange).

Blockchain is now underpinning applications such as land registries and identity services, but as its popularity grows, its relevance in addressing socio-economic gaps and supporting development targets like the globally-recognized UN Sustainable Development Goals is critical to unpack. Moreover, for countries in the global South that want to be more than just end users or consumers, the complex infrastructure requirements and operating costs of blockchain could prove challenging. For the purposes of real development, we need to not only understand how blockchain is workable, but also who is able to harness it to foster social inclusion and promote democratic governance.

This white paper explores the potential of blockchain technology to support human development. It provides a non-technical overview, illustrates a range of applications, and offers a series of conclusions and recommendations for additional research and potential development programming….(More)”.

Selected Readings on Blockchain and Identity


By Hannah Pierce and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of blockchain and identity was originally published in 2017.

The potential of blockchain and other distributed ledger technologies to create positive social change has inspired enthusiasm, broad experimentation, and some skepticism. In this edition of the Selected Readings series, we explore and curate the literature on blockchain and how it impacts identity as a means to access services and rights. (In a previous edition we considered the Potential of Blockchain for Transforming Governance).

Introduction

In 2008, an unknown author writing under the name Satoshi Nakamoto released a paper, “Bitcoin: A Peer-to-Peer Electronic Cash System,” which introduced blockchain: a novel technology that uses a distributed ledger to record transactions and ensure their integrity. Blockchain and other distributed ledger technologies (DLTs) derive their value from their ability to act as a vast, transparent, and secure public database.

DLTs have disruptive potential beyond innovation in products, services, revenue streams, and operating systems within industry. By providing transparency and accountability in new and distributed ways, they can empower underserved populations in myriad ways, including by providing a means of establishing a trusted digital identity.
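The tamper-evidence property that underpins this trust can be sketched minimally. The following is an illustration of hash-chaining only, not a working DLT; real systems add peer-to-peer replication, consensus, and cryptographic signatures on top:

```python
# Minimal sketch of the tamper-evident ledger idea behind blockchain.
# Each block commits to the hash of its predecessor, so editing any
# earlier record invalidates every hash that follows it.
import hashlib
import json

def make_block(record, prev_hash):
    """Bundle a record with the hash of the previous block."""
    block = {"record": record, "prev_hash": prev_hash}
    block_hash = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block, block_hash

def verify(chain):
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    prev_hash = "0" * 64  # genesis
    for block, stored_hash in chain:
        if block["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != stored_hash:
            return False
        prev_hash = stored_hash
    return True

# Build a tiny ledger of (hypothetical) identity attestations.
chain = []
prev = "0" * 64
for record in ["alice: passport verified", "bob: birth certificate verified"]:
    block, prev = make_block(record, prev)
    chain.append((block, prev))

print(verify(chain))                           # True: ledger is consistent
chain[0][0]["record"] = "alice: NOT verified"  # tamper with history
print(verify(chain))                           # False: tampering is detectable
```

Distributing copies of such a chain across many parties is what turns this local tamper-evidence into the shared, auditable record that identity applications rely on.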

Consider the potential of DLTs for the 2.4 billion people worldwide (about 1.5 billion of whom are over the age of 14) who cannot prove their identity to the satisfaction of authorities and other organizations, and who are often excluded from property ownership, free movement, and social protection as a result. At the same time, the transition to a DLT-led system of identity management involves various risks that, if not understood and mitigated properly, could harm potential beneficiaries.

Annotated Selected Reading List

Governance

Cuomo, Jerry, Richard Nash, Veena Pureswaran, Alan Thurlow, Dave Zaharchuk. “Building trust in government: Exploring the potential of blockchains.” IBM Institute for Business Value. January 2017.

This paper from the IBM Institute for Business Value culls findings from surveys conducted with over 200 government leaders in 16 countries regarding their experiences and expectations for blockchain technology. The report also identifies “Trailblazers”, or governments that expect to have blockchain technology in place by the end of the year, and details the views and approaches that these early adopters are taking to ensure the success of blockchain in governance. These Trailblazers also believe that there will be high yields from utilizing blockchain in identity management and that citizen services, such as voting, tax collection and land registration, will become increasingly dependent upon decentralized and secure identity management systems. Additionally, some of the Trailblazers are exploring blockchain application in borderless services, like cross-province or state tax collection, because the technology removes the need for intermediaries like notaries or lawyers to verify identities and the authenticity of transactions.

Mattila, Juri. “The Blockchain Phenomenon: The Disruptive Potential of Distributed Consensus Architectures.” Berkeley Roundtable on the International Economy. May 2016.

This working paper gives a clear introduction to blockchain terminology, architecture, challenges, applications (including use cases), and implications for digital trust, disintermediation, democratizing the supply chain, an automated economy, and the reconfiguration of regulatory capacity. As far as identification management is concerned, Mattila argues that blockchain can remove the need to go through a trusted third party (such as a bank) to verify identity online. This could strengthen the security of personal data, as the move from a centralized intermediary to a decentralized network lowers the risk of a mass data security breach. In addition, using blockchain technology for identity verification allows for a more standardized documentation of identity which can be used across platforms and services. In light of these potential capabilities, Mattila addresses the disruptive power of blockchain technology on intermediary businesses and regulating bodies.

Identity Management Applications

Allen, Christopher.  “The Path to Self-Sovereign Identity.” Coindesk. April 27, 2016.

In this Coindesk article, author Christopher Allen lays out the history of digital identities, then explains a concept of a “self-sovereign” identity, where trust is enabled without compromising individual privacy. His ten principles for self-sovereign identity (Existence, Control, Access, Transparency, Persistence, Portability, Interoperability, Consent, Minimization, and Protection) lend themselves to blockchain technology for administration. Although there are actors making moves toward the establishment of self-sovereign identity, there are a few challenges that face the widespread implementation of these tenets, including legal risks, confidentiality issues, immature technology, and a reluctance to change established processes.

Jacobovitz, Ori. “Blockchain for Identity Management.” Department of Computer Science, Ben-Gurion University. December 11, 2016.

This technical report discusses advantages of blockchain technology in managing and authenticating identities online, such as the ability for individuals to create and manage their own online identities, which offers greater control over access to personal data. Using blockchain for identity verification can also afford the potential of “digital watermarks” that could be assigned to each of an individual’s transactions, as well as eliminating the need to create unique usernames and passwords online. After arguing that this decentralized model will allow individuals to manage data on their own terms, Jacobovitz provides a list of companies, projects, and movements that are using blockchain for identity management.

Mainelli, Michael. “Blockchain Will Help Us Prove Our Identities in a Digital World.” Harvard Business Review. March 16, 2017.

In this Harvard Business Review article, author Michael Mainelli highlights a solution to identity problems for rich and poor alike: mutual distributed ledgers (MDLs), or blockchain technology. These multi-organizational databases with unalterable ledgers and a “super audit trail” involve three parties in digital document exchanges: subjects are individuals or assets; certifiers are organizations that verify identity; and inquisitors are entities that conduct know-your-customer (KYC) checks on the subject. This system would allow for a low-cost, secure, and global method of proving identity. After outlining some of the other benefits this technology may have in creating secure and easily auditable digital documents, such as the greater tolerance that comes from viewing widely public ledgers, Mainelli asks whether these capabilities will turn out to be a boon or a burden to bureaucracy and societal behavior.

Personal Data Security Applications

Banafa, Ahmed. “How to Secure the Internet of Things (IoT) with Blockchain.” Datafloq. August 15, 2016.

This article details the data security risks emerging as the Internet of Things continues to expand, and how blockchain technology can protect the personal data and identity information exchanged between devices. Banafa argues that, because the creation and collection of data is central to the functioning of Internet of Things devices, there is an increasing need to better secure data that is largely confidential and often personally identifiable. Decentralizing IoT networks and then securing their communications with blockchain can allow them to remain scalable, private, and reliable. Enabling blockchain’s peer-to-peer, trustless communication may also enable smart devices to initiate personal data exchanges like financial transactions, as centralized authorities or intermediaries will not be necessary.

Shrier, David, Weige Wu and Alex Pentland. “Blockchain & Infrastructure (Identity, Data Security).” Massachusetts Institute of Technology. May 17, 2016.

This paper, the third in a four-part series on potential blockchain applications, covers the potential of blockchains to change the status quo in identity authentication systems, privacy protection, transaction monitoring, ownership rights, and data security. The paper also posits that, as personal data becomes more and more valuable, we should move towards a “New Deal on Data” that provides individuals data protection, through blockchain technology, and the option to contribute their data to aggregates that work towards the common good. To achieve this New Deal on Data, robust regulatory standards and financial incentives must be provided to entice individuals to share their data to benefit society.

Introducing the Digital Policy Model Canvas


Blog by Stefaan Verhulst: “…Yesterday, the National Digital Policy Network of the World Economic Forum, of which I am a member, released a White Paper aimed at facilitating this process. The paper, entitled “Digital Policy Playbook 2017: Approaches to National Digital Governance,” examines a number of case studies from around the world to develop a “playbook” that can help leaders design digital policies that maximize the forthcoming opportunities and effectively meet the challenges. It is the result of a series of extensive discussions and consultations held around the world and attended by leading experts from various sectors and geographies…..

How can such insights be translated into a practical and pragmatic approach to policymaking? In order to find implementable solutions, we sought to develop a “Digital Policy Model Canvas” that would guide policy makers to derive specific policies and regulatory mechanisms in an agile and iterative manner, integrating both design thinking and evidence-based policymaking. The notion of a canvas is borrowed from the business world. For example, in Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers, Alexander Osterwalder and Yves Pigneur introduce the idea of a “Business Model Canvas” to generate new, innovative business models that can help companies, and others, go beyond legacy systems and approaches.

Applying this approach to the world of digital policymaking and innovation, we arrive at the “Digital Policy Model Canvas” represented in the accompanying figure.

[Figure: Digital Policy Model Canvas]

The design and implementation of such a canvas can be applied to a specific problem and/or geographic context, and would include the following steps…(More)”.

Who Serves the Poor? Surveying Civil Servants in the Developing World


World Bank working paper by Daniel Oliver Rogger: “Who are the civil servants that serve poor people in the developing world? This paper uses direct surveys of civil servants — the professional body of administrators who manage government policy — and their organizations from Ethiopia, Ghana, Indonesia, Nigeria, Pakistan and the Philippines, to highlight key aspects of their characteristics and experience of civil service life. Civil servants in the developing world face myriad challenges to serving the world’s poor, from limited facilities to significant political interference in their work. There are a number of commonalities across service environments, and the paper summarizes these in a series of ‘stylized facts’ of the civil service in the developing world. At the same time, the particular challenges faced by a public official vary substantially across and within countries and regions. For example, measured management practices differ widely across local governments of a single state in Nigeria. Surveys of civil servants allow us to document these differences, build better models of the public sector, and make more informed policy choices….(More)”.

Opportunities and risks in emerging technologies


White Paper Series at the WebFoundation: “To achieve our vision of digital equality, we need to understand how new technologies are shaping society: where they present opportunities to make people’s lives better, and where they threaten to create harm. To this end, we have commissioned a series of white papers examining three key digital trends: artificial intelligence, algorithms and control of personal data. The papers focus on low and middle-income countries, which are all too often overlooked in debates around the impacts of emerging technologies.

The series addresses each of these three digital issues, looking at how they are impacting people’s lives and identifying steps that governments, companies and civil society organisations can take to limit the harms, and maximise benefits, for citizens.

Download the white papers

We will use these white papers to refine our thinking and set our work agenda on digital equality in the years ahead. We are sharing them openly with the hope they benefit others working towards our goals and to amplify the limited research currently available on digital issues in low and middle-income countries. We intend the papers to foster discussion about the steps we can take together to ensure emerging digital technologies are used in ways that benefit people’s lives, whether they are in Los Angeles or Lagos….(More)”.

Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation


Report by Samantha Bradshaw and Philip N. Howard: “Cyber troops are government, military or political party teams committed to manipulating public opinion over social media. In this working paper, we report on specific organizations created, often with public money, to help define and manage what is in the best interest of the public. We compare such organizations across 28 countries, and inventory them according to the kinds of messages, valences and communication strategies used. We catalogue their organizational forms and evaluate their capacities in terms of budgets and staffing. This working paper summarizes the findings of the first comprehensive inventory of the major organizations behind social media manipulation. We find that cyber troops are a pervasive and global phenomenon. Many different countries employ significant numbers of people and resources to manage and manipulate public opinion online, sometimes targeting domestic audiences and sometimes targeting foreign publics.

  •  The earliest reports of organized social media manipulation emerged in 2010, and by 2017 there are details on such organizations in 28 countries.
  • Looking across the 28 countries, every authoritarian regime has social media campaigns targeting their own populations, while only a few of them target foreign publics. In contrast, almost every democracy in this sample has organized social media campaigns that target foreign publics, while political‐party‐supported campaigns target domestic voters. 
  • Authoritarian regimes are not the only or even the best at organized social media manipulation. The earliest reports of government involvement in nudging public opinion involve democracies, and new innovations in political communication technologies often come from political parties and arise during high‐profile elections.
  • Over time, the primary mode for organizing cyber troops has gone from involving military units that experiment with manipulating public opinion over social media networks to strategic communication firms that take contracts from governments for social media campaigns….(More)”

The Global Open Data Index 2016/2017 – Advancing the State of Open Data Through Dialogue


Open Knowledge International: “The Global Open Data Index (GODI) is the annual global benchmark for publication of open government data, run by the Open Knowledge Network. Our crowdsourced survey measures the openness of government data according to the Open Definition.

By having a tool that is run by civil society, GODI creates valuable insights for government data publishers to understand where they have data gaps. It also shows how to make data more usable and, eventually, more impactful. GODI therefore provides important feedback that governments usually lack.

For the last five years we have been revising the GODI methodology to fit the changing needs of the open data movement. This year, we changed our entire survey design by adding experimental questions to assess data findability and usability. We also improved our dataset definitions by looking at essential data points that can solve real-world problems. Using more precise data definitions also increased the reliability of our cross-country comparison. See all about the GODI methodology here.

In addition, this year GODI shall be more than a mere measurement tool. We see it as a tool for conversation. To spark debate, we are releasing GODI in two phases:

  1. The dialogue phase – We are releasing the data to the public after a rigorous review. Yet, like everyone’s, our assessment is not always perfect. We give all users a chance to contest the index results for 30 days, starting May 2nd. In this period, users of the index can comment on our assessments through our Global Open Data Index forum. On June 2nd, we will review those comments and will change some index submissions if needed.
  2. The final results – on June 15 we will present the final results of the index. For the first time ever, we will also publish the GODI white paper. This paper will include our main findings and recommendations to advance open data publication….

… findings from this year’s GODI

  • GODI highlights data gaps. Open data is the final stage of an information production chain, where governments measure and collect data, process and share data internally, and publish this data openly. While being designed to measure open data, the Index also highlights gaps in this production chain. Does a government collect data at all? Why is data not collected? Some governments lack the infrastructure and resources to modernise their information systems; other countries do not have information systems in place at all.
  • Data findability is a major challenge. We have data portals and registries, but government agencies under one national government still publish data in different ways and different locations. Moreover, they have different protocols for license and formats. This has a hazardous impact – we may not find open data, even if it is out there, and therefore can’t use it. Data findability is a prerequisite for open data to fulfill its potential and currently most data is very hard to find.
  • A lot of ‘data’ IS online, but the ways in which it is presented limit its openness. Governments publish data in many forms, not only as tabular datasets but also as visualisations, maps, graphs and texts. While this is a good effort to make data relatable, it sometimes makes the data very hard or even impossible to reuse. It is crucial for governments to revise how they produce and provide data so that it is of good quality for reuse in its raw form. For that, we need to be aware of what raw data is required, which varies from data category to category.
  • Open licensing is a problem, and we cannot assess public domain status. Each year we find ourselves more confused about open data licences. On the one hand, more governments implement their own versions of open data licences. Some of them are compliant with the Open Definition, but most are not officially acknowledged. On the other hand, some governments do not provide open licences but terms of use, which may leave users in the dark about the actual possibilities of reusing data. There is a need to draw more attention to data licences and to make sure data producers understand how to license data better….(More)”.

Human Decisions and Machine Predictions


NBER Working Paper by Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan: “We examine how machine learning can be used to improve and understand human decision-making. In particular, we focus on a decision that has important policy consequences. Millions of times each year, judges must decide where defendants will await trial—at home or in jail. By law, this decision hinges on the judge’s prediction of what the defendant would do if released. This is a promising machine learning application because it is a concrete prediction task for which there is a large volume of data available. Yet comparing the algorithm to the judge proves complicated. First, the data are themselves generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those the judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the single variable that the algorithm focuses on; for instance, judges may care about racial inequities or about specific crimes (such as violent crimes) rather than just overall crime risk. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: a policy simulation shows crime can be reduced by up to 24.8% with no change in jailing rates, or jail populations can be reduced by 42.0% with no increase in crime rates. Moreover, we see reductions in all categories of crime, including violent ones. Importantly, such gains can be had while also significantly reducing the percentage of African-Americans and Hispanics in jail. We find similar results in a national dataset as well. 
In addition, by focusing the algorithm on predicting judges’ decisions, rather than defendant behavior, we gain some insight into decision-making: a key problem appears to be that judges respond to ‘noise’ as if it were signal. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals….(More)”
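The first evaluation problem the abstract describes, that outcomes exist only for defendants judges chose to release, can be made concrete with a small simulation (entirely hypothetical numbers; the paper's actual econometric strategies, such as quasi-random assignment of cases to judges, are far more sophisticated):

```python
# Toy simulation of the "selective labels" problem described above: crime
# outcomes are observed only for defendants whom judges released, which
# biases naive comparisons of an algorithm against judges.
import random

random.seed(0)

def simulate(n=100_000):
    observed, hidden = [], []
    for _ in range(n):
        risk = random.random()                 # true re-offense probability
        released_by_judge = risk < 0.6         # judges detain the riskiest
        outcome = random.random() < risk       # would the defendant re-offend?
        if released_by_judge:
            observed.append((risk, outcome))   # outcome is recorded
        else:
            hidden.append((risk, outcome))     # counterfactual, never observed
    return observed, hidden

observed, hidden = simulate()

# Re-offense rate in the observed (released) population...
obs_rate = sum(o for _, o in observed) / len(observed)
# ...understates the rate in the full population, because the riskiest
# defendants were screened out by the judges' decisions.
full_rate = (sum(o for _, o in observed) + sum(o for _, o in hidden)) / (
    len(observed) + len(hidden)
)
print(f"observed rate: {obs_rate:.2f}, true population rate: {full_rate:.2f}")
```

An algorithm trained and evaluated only on the observed rows would look deceptively safe, which is why the paper's counterfactual evaluation strategies matter.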