Using digital technologies to improve the design and enforcement of public policies

OECD Digital Economy Paper: “Digitalisation is having a profound impact on social and economic activity. While often benefiting from a very long history of public investment in R&D, digitalisation has been largely driven by the private sector. However, the combined adoption of new digital technologies, increased reliance upon new data sources, and use of advanced analytic methods hold significant potential to: i) improve the effectiveness and enforcement of public policies; ii) enable innovative policy design and impact evaluation; and iii) expand citizen and stakeholder engagement in policy making and implementation. These benefits are likely to be greatest in policy domains where outcomes are only observable at significant cost and/or where there is significant heterogeneity in responses across different agents. In this paper we provide a review of initiatives across a number of fields including: competition, education, environment, innovation, and taxation….(More)”.

Claudette: an automated detector of potentially unfair clauses in online terms of service

Marco Lippi et al. in Artificial Intelligence and Law: “Terms of service of online platforms too often contain clauses that are potentially unfair to the consumer. We present an experimental study where machine learning is employed to automatically detect such potentially unfair clauses. Results show that the proposed system could provide a valuable tool for lawyers and consumers alike….(More)”.
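The abstract does not specify the authors' models, so the following is only a hedged, toy illustration of the general approach (a from-scratch Naive Bayes classifier over hand-labelled clause fragments), not Claudette itself; the training clauses and labels below are invented for the sketch:

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a multinomial Naive Bayes model on (text, label) pairs."""
    word_counts = {"fair": Counter(), "unfair": Counter()}
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set(word_counts["fair"]) | set(word_counts["unfair"])
    return word_counts, label_counts, vocab

def classify(model, text):
    """Return the label with the highest posterior log-probability."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + sum of log likelihoods with add-one smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Invented toy training set: clause fragments labelled by hand.
clauses = [
    ("we may terminate your account at any time without notice", "unfair"),
    ("we reserve the right to change these terms at our sole discretion", "unfair"),
    ("disputes are resolved by binding arbitration you waive jury trial", "unfair"),
    ("you retain ownership of the content you upload", "fair"),
    ("you may cancel your subscription at any time", "fair"),
    ("we will notify you of material changes to this policy", "fair"),
]
model = train_nb(clauses)
print(classify(model, "we may suspend your account at our sole discretion without notice"))
```

The real system works on full terms-of-service documents with far richer features; the sketch only conveys why supervised classification fits the task of flagging clause language.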

Achieving Digital Permanence

Raymond Blum with Betsy Beyer at ACM Queue: “Digital permanence has become a prevalent issue in society. This article focuses on the forces behind it and some of the techniques to achieve a desired state in which “what you read is what was written.” While techniques that can be imposed as layers above basic data stores—blockchains, for example—are valid approaches to achieving a system’s information assurance guarantees, this article won’t discuss them.

First, let’s define digital permanence and the more basic concept of data integrity.

Data integrity is the maintenance of the accuracy and consistency of stored information. Accuracy means that the data is stored as the set of values that were intended. Consistency means that these stored values remain the same over time—they do not unintentionally waver or morph as time passes.
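The article defines integrity without prescribing a mechanism; as one illustrative sketch (my assumption, not the authors' recommendation), a common concrete check is to store a cryptographic digest alongside the data and recompute it on every read:

```python
import hashlib

def store(payload: bytes) -> tuple[bytes, str]:
    """Persist data together with a SHA-256 digest of its intended contents."""
    return payload, hashlib.sha256(payload).hexdigest()

def verify(payload: bytes, digest: str) -> bool:
    """Recompute the digest on read; a mismatch signals silent corruption."""
    return hashlib.sha256(payload).hexdigest() == digest

data, digest = store(b"ledger entry: 2019-01-31, $120.00")
assert verify(data, digest)             # consistent: what is read matches what was written
assert not verify(data + b"x", digest)  # a single altered byte is detected
```

The digest captures "the set of values that were intended" at write time, so any later unintended wavering of the stored bytes becomes detectable.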

Digital permanence refers to the techniques used to anticipate and then meet the expected lifetime of data stored in digital media. Digital permanence not only considers data integrity, but also targets guarantees of relevance and accessibility: the ability to recall stored data and to recall it with predicted latency and at a rate acceptable to the applications that require that information.

To illustrate the aspects of relevance and accessibility, consider two counterexamples: journals that were safely stored redundantly on Zip drives or punch cards may as well not exist if the hardware required to read the media into a current computing system isn’t available. Nor is it very useful to have receipts and ledgers stored on a tape medium that will take eight days to read in when you need the information for an audit on Thursday.

The Multiple Facets of Digital Permanence

Human memory is the most subjective record imaginable. Common adages and clichés such as “He said, she said,” “IIRC (If I remember correctly),” and “You might recall” recognize the truth of memories—that they are based only on fragments of the one-time subjective perception of any objective state of affairs. What’s more, research indicates that people alter their memories over time. Over the years, as the need to provide a common ground for actions based on past transactions arises, so does the need for an objective record of fact—an independent “true” past. These records must be both immutable to a reasonable degree and durable. Media such as clay tablets, parchment, photographic prints, and microfiche became popular because they satisfied the “write once, read many” requirement of society’s record keepers.

Information storage in the digital age has evolved to fit the scale of access (frequent) and volume (high) by moving to storage media that record and deliver information in an almost intangible state. Such media have distinct advantages: electrical impulses and the polarity of magnetized ferric compounds can be moved around at great speed and density. These media, unfortunately, also score higher in another measure: fragility. Paper and clay can survive large amounts of neglect and punishment, but a stray electromagnetic discharge or microscopic rupture can render a digital library inaccessible or unrecognizable.

It stands to reason that storing permanent records in some immutable and indestructible medium would be ideal—something that, once altered to encode information, could never be altered again, either by an overwrite or destruction. Experience shows that such ideals are rarely realized; with enough force and will, the hardest stone can be broken and the most permanent markings defaced.

In considering and ensuring digital permanence, you want to guard against two different failures: the destruction of the storage medium, and a loss of the integrity or “truthfulness” of the records….(More)”.

Leveraging and Sharing Data for Urban Flourishing

Testimony by Stefaan Verhulst before New York City Council Committee on Technology and the Commission on Public Information and Communication (COPIC): “We live in challenging times. From climate change to economic inequality, the difficulties confronting New York City, its citizens, and decision-makers are unprecedented in their variety, and also in their complexity and urgency. Our standard policy toolkit increasingly seems stale and ineffective. Existing governance institutions and mechanisms seem outdated and distrusted by large sections of the population.

To tackle today’s problems we need not only new solutions but also new methods for arriving at solutions. Data can play a central role in this task. Access to and the use of data in a trusted and responsible manner is central to meeting the challenges we face and enabling public innovation.

This hearing, called by the Technology Committee and the Commission on Public Information and Communication, is therefore timely and very important. It is my firm belief that rapid progress on developing an effective data sharing framework is among the most important steps our New York City leaders can take to tackle the myriad of 21st-century challenges....

I am joined today by some of my distinguished NYU colleagues, Prof. Julia Lane and Prof. Julia Stoyanovich, who have worked extensively on the technical and privacy challenges associated with data sharing. I will, therefore, avoid duplicating our testimonies and won’t focus on issues of privacy, trust, and how to establish a responsible data sharing infrastructure, even though these are central considerations for the type of data-driven approaches I will discuss. I am, of course, happy to elaborate on these topics during the question and answer session.

Instead, I want to focus on four core issues associated with data collaboration. I phrase these issues as answers to four questions. For each of these questions, I also provide a set of recommended actions that this Committee could consider undertaking or studying.

The four core questions are:

  • First, why should NYC care about data and data sharing?
  • Second, if you build a data-sharing framework, will they come?
  • Third, how can we best engage the private sector when it comes to sharing and using their data?
  • And fourth, is technology the main (or best) answer?…(More)”.

2018 Global Go To Think Tank Index Report

Report by James G. McGann: “The Think Tanks and Civil Societies Program (TTCSP) of the Lauder Institute at the University of Pennsylvania conducts research on the role policy institutes play in governments and civil societies around the world. Often referred to as the “think tanks’ think tank,” TTCSP examines the evolving role and character of public policy research organizations. Over the last 27 years, the TTCSP has developed and led a series of global initiatives that have helped bridge the gap between knowledge and policy in critical policy areas such as international peace and security, globalization and governance, international economics, environmental issues, information and society, poverty alleviation, and healthcare and global health. These international collaborative efforts are designed to establish regional and international networks of policy institutes and communities that improve policy making while strengthening democratic institutions and civil societies around the world.

The TTCSP works with leading scholars and practitioners from think tanks and universities in a variety of collaborative efforts and programs, and produces the annual Global Go To Think Tank Index that ranks the world’s leading think tanks in a variety of categories. This is achieved with the help of a panel of over 1,796 peer institutions and experts from the print and electronic media, academia, public and private donor institutions, and governments around the world. We have strong relationships with leading think tanks around the world, and our annual Think Tank Index is used by academics, journalists, donors and the public to locate and connect with the leading centers of public policy research around the world. Our goal is to increase the profile and performance of think tanks and raise the public awareness of the important role think tanks play in governments and civil societies around the globe.”…(More)”.

Fact-Based Policy: How Do State and Local Governments Accomplish It?

Report and Proposal by Justine Hastings: “Fact-based policy is essential to making government more effective and more efficient, and many states could benefit from more extensive use of data and evidence when making policy. Private companies have taken advantage of declining computing costs and vast data resources to solve problems in a fact-based way, but state and local governments have not made as much progress….

Drawing on her experience in Rhode Island, Hastings proposes that states build secure, comprehensive, integrated databases, and that they transform those databases into data lakes that are optimized for developing insights. Policymakers can then use the insights from this work to sharpen policy goals, create policy solutions, and measure progress against those goals. Policymakers, computer scientists, engineers, and economists will work together to build the data lake and analyze the data to generate policy insights….(More)”.

Artificial Intelligence and National Security

Report by Congressional Research Service: “Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semi-autonomous and autonomous vehicles.

Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI.

In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes.

Potential international rivals in the AI market are creating pressure for the United States to compete for innovative military AI applications. China is a leading competitor in this regard, releasing a plan in 2017 to capture the global lead in AI development by 2030. Currently, China is primarily focused on using AI to make faster and more well-informed decisions, as well as on developing a variety of autonomous military vehicles. Russia is also active in military AI development, with a primary focus on robotics. Although AI has the potential to impart a number of advantages in the military context, it may also introduce distinct challenges.

AI technology could, for example, facilitate autonomous operations, lead to more informed military decisionmaking, and increase the speed and scale of military action. However, it may also be unpredictable or vulnerable to unique forms of manipulation. As a result of these factors, analysts hold a broad range of opinions on how influential AI will be in future combat operations.

While a small number of analysts believe that the technology will have minimal impact, most believe that AI will have at least an evolutionary—if not revolutionary—effect….(More)”.

On the ethical and political agency of online reputation systems

Paper by Anna Wilson and Stefano De Paoli at First Monday: “Social and socioeconomic interactions and transactions often require trust. In digital spaces, the main approach to facilitating trust has effectively been to try to reduce or even remove the need for it through the implementation of reputation systems. These generate metrics based on digital data such as ratings and reviews submitted by users, interaction histories, and so on, that are intended to label individuals as more or less reliable or trustworthy in a particular interaction context. We undertake a disclosive archaeology (Introna, 2014) of typical reputation systems, identifying relevant figuration agencies including affordances and prohibitions, (cyborg) identities, (cyborg) practices and discourses, in order to examine their ethico-political agency.
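The paper critiques such systems rather than specifying one, so the following is only a hedged illustration of the kind of metric they generate: a damped ("Bayesian") average, a common design that pulls users with few ratings toward a platform-wide prior. The constants `global_mean` and `prior_weight` are assumed, platform-chosen values, not anything from the paper:

```python
def reputation_score(user_ratings, global_mean=3.5, prior_weight=10):
    """Damped mean: behaves like the global mean until enough ratings accumulate.

    user_ratings: list of numeric ratings (e.g. 1-5 stars) for one user.
    """
    n = len(user_ratings)
    return (prior_weight * global_mean + sum(user_ratings)) / (prior_weight + n)

# One glowing review barely moves a newcomer's score toward 5.0...
print(round(reputation_score([5.0]), 2))
# ...while a long history of identical reviews dominates the prior.
print(round(reputation_score([5.0] * 200), 2))
```

Even this tiny formula embodies design choices with ethico-political weight, in the paper's terms: it decides how quickly an individual's history outweighs the collective prior, and it reduces a person's trustworthiness to a single competitive number.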

We suggest that conventional approaches to the design of such systems are rooted in a capitalist, competitive paradigm, relying on methodological individualism, and that the reputation technologies themselves thus embody and enact this paradigm within whatever space they operate. We question whether the politics, ethics and philosophy that contribute to this paradigm align with those of some of the contexts in which reputation systems are now being used, and suggest that alternative approaches to the establishment of trust and reputation in digital spaces need to be considered for alternative contexts….(More)”.

Thinking about GovTech: A brief guide for policymakers

Report by Tanya Filer: “If developed with care, the emergent GovTech ecosystem, in which start-ups and innovative small and medium enterprises (SMEs) provide innovative technology products and services to public sector clients, could contribute to achieving these objectives. Thinking about GovTech introduces the concept of GovTech and identifies eight activities that policymakers can undertake to foster national GovTech innovation ecosystems and help to steer them towards positive outcomes for citizens and public administrators. It suggests that policymakers:

1. Build the social and technical foundations for GovTech
2. Embed expectations of accountability at an ecosystem-wide level
3. Address GovTech procurement barriers
4. Ensure the provision of appropriate, and often patient, capital
5. Engage academia at each stage of the GovTech innovation lifecycle
6. Develop pipelines of technological talent, emphasising public sector problems and
7. Build translator capacity within the public sector
8. Develop and utilise regional and international networks

Thinking about GovTech is the first GovTech guide written for a fully international audience of policymakers. It offers examples of emerging international policy and programme design and urges policymakers to think carefully about local context and capacity for implementation….(More)”.

Toward an Open Data Demand Assessment and Segmentation Methodology

Stefaan Verhulst and Andrew Young at IADB: “Across the world, significant time and resources are being invested in making government data accessible to all with the broad goal of improving people’s lives. Evidence of open data’s impact – on improving governance, empowering citizens, creating economic opportunity, and solving public problems – is emerging and is largely encouraging. Yet much of the potential value of open data remains untapped, in part because we often do not understand who is using open data or, more importantly, who is not using open data but could benefit from the insights it may generate. By identifying, prioritizing, segmenting, and engaging with the actual and future demand for open data in a systemic and systematic way, practitioners can ensure that open data is more targeted. Understanding and meeting the demand for open data can increase overall impact and return on investment of public funds.

The GovLab, in partnership with the Inter-American Development Bank, and with the support of the French Development Agency developed the Open Data Demand Assessment and Segmentation Methodology to provide open data policymakers and practitioners with an approach for identifying, segmenting, and engaging with demand. This process specifically seeks to empower data champions within public agencies who want to improve their data’s ability to improve people’s lives….(More)”.