Imagination unleashed: Democratising the knowledge economy


Report by Roberto Mangabeira Unger, Isaac Stanley, Madeleine Gabriel, and Geoff Mulgan: “If economic eras are defined by their most advanced form of production, then we live in a knowledge economy – one where knowledge plays a decisive role in the organisation of production, distribution and consumption.

The era of Fordist mass production that preceded it transformed almost every part of the economy. But the knowledge economy hasn’t spread in the same way. Only some people and places are reaping the benefits.

This is a big problem: it contributes to inequality, stagnation and political alienation. And traditional policy solutions are not sufficient to tackle it. We can’t expect benefits simply to trickle down to the rest of the population, and redistribution alone will not solve the inequalities we are facing.

What’s the alternative? Nesta has been working with Roberto Mangabeira Unger to convene discussions with politicians, researchers, and activists from member countries of the Organisation for Economic Co-operation and Development, to explore policy options for an inclusive knowledge economy. This report presents the results of that collaboration.

We argue that an inclusive knowledge economy requires action to democratise the economy – widening access to capital and productive opportunity, transforming models of ownership, addressing new concentrations of power, and democratising the direction of innovation.

It demands that we establish a social inheritance by reforming education and social security.

And it requires us to create a high-energy democracy, promoting experimental government, and independent and empowered civil society.

Recommendations

This is a broad-ranging agenda. In practice, it focuses on:

  • SMEs and their capacity and skills – greatly accelerating the adoption of new methods and technologies at every level of the economy, including new clean technologies that reduce carbon emissions
  • Transforming industrial policy to cope with the new concentrations of power and to prevent monopoly and predatory behaviours
  • Transforming and disaggregating property rights so that more people can have a stake in productive resources
  • Reforming education to prepare the next generation for the labour market of the future not the past – cultivating the mindsets, skills and cultures relevant to future jobs
  • Reforming social policy to respond to new patterns of work and need – creating more flexible systems that can cope with rapid change in jobs and skills, with a greater emphasis on reskilling
  • Reforming government and democracy to achieve new levels of participation, agility, experimentation and effectiveness…(More)”

A Skeptical View of Information Fiduciaries


Paper by Lina Khan and David Pozen: “The concept of “information fiduciaries” has surged to the forefront of debates on online platform regulation. Developed by Professor Jack Balkin, the concept is meant to rebalance the relationship between ordinary individuals and the digital companies that accumulate, analyze, and sell their personal data for profit. Just as the law imposes special duties of care, confidentiality, and loyalty on doctors, lawyers, and accountants vis-à-vis their patients and clients, Balkin argues, so too should it impose special duties on corporations such as Facebook, Google, and Twitter vis-à-vis their end users. Over the past several years, this argument has garnered remarkably broad support and essentially zero critical pushback.

This Essay seeks to disrupt the emerging consensus by identifying a number of lurking tensions and ambiguities in the theory of information fiduciaries, as well as a number of reasons to doubt the theory’s capacity to resolve them satisfactorily. Although we agree with Balkin that the harms stemming from dominant online platforms call for legal intervention, we question whether the concept of information fiduciaries is an adequate or apt response to the problems of information insecurity that he stresses, much less to more fundamental problems associated with outsized market share and business models built on pervasive surveillance. We also call attention to the potential costs of adopting an information-fiduciary framework—a framework that, we fear, invites an enervating complacency toward online platforms’ structural power and a premature abandonment of more robust visions of public regulation….(More)”.

Data Trusts as an AI Governance Mechanism


Paper by Chris Reed and Irene YH Ng: “This paper is a response to the Singapore Personal Data Protection Commission consultation on a draft AI Governance Framework. It analyses the five data trust models proposed by the UK Open Data Institute and identifies that only the contractual and corporate models are likely to be legally suitable for achieving the aims of a data trust.

The paper further explains how data trusts might be used in the governance of AI, and investigates the barriers that Singapore’s data protection law presents to the use of data trusts and how those barriers might be overcome. Its conclusion is that a mixed contractual/corporate model, with an element of regulatory oversight and audit to ensure consumer confidence that data is being used appropriately, could produce a useful AI governance tool…(More)”.

Some notes on smart cities and the corporatization of urban governance


Presentation by Constance Carr and Markus Hesse: “We want to address a discrepancy; that is, the discrepancy between the processes and practices of technological development on the one hand and the production processes of urban change and urban problems on the other. There’s a gap here that we can illustrate with the case of the so-called “Google City”.

The scholarly literature on digital cities is quite clear that there are externalities, uncertainties and risks associated with the hype around, and the rash introduction of, ‘smartness’. To us, an old saying comes to mind: Don’t put the cart before the horse.

Obviously, digitization and technology have revolutionized geography in many ways. And this is nothing new. Roughly twenty years ago, with the rise of the Internet, some, such as MIT’s Bill Mitchell (1995), speculated that the Internet and other information technologies would dissolve space into a ‘City of Bits’. Even back then, however, statements like these didn’t go uncriticised by those who pointed to their inherent technological determinism and showed that the relationship between urban development, urban planning, and technological innovation is complex; that it is neither new nor trivial, and that tech, by itself, would not automatically and necessarily be productive, beneficial, and central to cities.

What has changed is the proliferation of digital technologies and their applications. We agree with Ash et al. (2016) that geography has experienced a ‘digital turn’, with urban geography now produced by, through and of digitization. And while the digitalization of urbanity has provided benefits, it has also come alongside a number of unsolved problems.

First, behind the production of big data, algorithms, and digital design, there are certain epistemologies – ways of knowing. Data is not value-free. Rather, data is the end product of political choices and the associated methods of framing that structure its production. So, now that we “live in a present characterized by a […] diverse array of spatially-enabled digital devices, platforms, applications and services,” (Ash et al. 2016: 28), we can interrogate how these processes and algorithms are informed by socio-economic inequalities, because the risk is that new technologies will simply reproduce them.

Second, the circulation of data around the globe invokes questions about who owns and regulates them when stored and processed in remote geographic locations….(More)”.

Regulating disinformation with artificial intelligence


Paper for the European Parliamentary Research Service: “This study examines the consequences of the increasingly prevalent use of artificial intelligence (AI) in disinformation initiatives upon freedom of expression, pluralism and the functioning of a democratic polity. The study examines the trade-offs in using automated technology to limit the spread of disinformation online. It presents options (from self-regulatory to legislative) to regulate automated content recognition (ACR) technologies in this context. Special attention is paid to the opportunities for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech. The present project reviews some of the key academic and policy ideas on technology and disinformation and highlights their relevance to European policy.

Chapter 1 introduces the background to the study and presents the definitions used. Chapter 2 scopes the policy boundaries of disinformation from economic, societal and technological perspectives, focusing on the media context, behavioural economics and technological regulation. Chapter 3 maps and evaluates existing regulatory and technological responses to disinformation. In Chapter 4, policy options are presented, paying particular attention to interactions between technological solutions, freedom of expression and media pluralism….(More)”.

Toward an Open Data Bias Assessment Tool Measuring Bias in Open Spatial Data


Working Paper by Ajjit Narayanan and Graham MacDonald: “Data is a critical resource for government decisionmaking, and in recent years, local governments, in a bid for transparency, community engagement, and innovation, have released many municipal datasets on publicly accessible open data portals. More recently, advocates, reporters, and others have voiced concerns about the bias of algorithms used to guide public decisions and the data that power them.

Although significant progress is being made in developing tools for assessing algorithmic bias and transparency, we could not find any standardized tools available for assessing bias in open data itself. In other words, how can policymakers, analysts, and advocates systematically measure the level of bias in the data that power city decisionmaking, whether an algorithm is used or not?

To fill this gap, we present a prototype of an automated bias assessment tool for geographic data. This new tool will allow city officials, concerned residents, and other stakeholders to quickly assess the bias and representativeness of their data. The tool allows users to upload a file with latitude and longitude coordinates and receive simple metrics of spatial and demographic bias across their city.

The tool is built on geographic and demographic data from the Census and assumes that the population distribution in a city represents the “ground truth” of the underlying distribution in the data uploaded. To provide an illustrative example of the tool’s use and output, we test our bias assessment on three datasets—bikeshare station locations, 311 service request locations, and Low Income Housing Tax Credit (LIHTC) building locations—across a few hand-selected example cities….(More)”
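The working paper does not publish the tool’s internals, but the comparison it describes, point shares per geography versus Census population shares, can be sketched in a few lines. The tract names, populations and point assignments below are invented for illustration; the real tool geocodes uploaded latitude/longitude coordinates against Census geography, and the dissimilarity-index summary is one plausible metric, not necessarily the authors’ exact choice.

```python
# Sketch of a demographic bias check in the spirit of the prototype above:
# compare where a dataset's points fall against where people actually live.
from collections import Counter

def demographic_bias(points_by_tract, population_by_tract):
    """Index of dissimilarity between a dataset's spatial distribution and
    the population distribution (0 = identical shares, 1 = maximal bias)."""
    total_points = sum(points_by_tract.values())
    total_pop = sum(population_by_tract.values())
    index = 0.0
    for tract, pop in population_by_tract.items():
        point_share = points_by_tract.get(tract, 0) / total_points
        pop_share = pop / total_pop
        index += abs(point_share - pop_share)
    return index / 2  # standard dissimilarity-index normalisation

# Hypothetical city: three tracts with populations from a (fictional) Census pull.
population = {"tract_A": 5000, "tract_B": 3000, "tract_C": 2000}

# Each uploaded record has already been assigned to the tract containing its
# coordinates (the geocoding step is elided here). Stations cluster in tract A.
points = Counter(["tract_A"] * 8 + ["tract_B"] * 2)

print(round(demographic_bias(points, population), 3))  # → 0.3
```

A score of 0.3 here says that 30 percent of the dataset’s points would have to move to match the population distribution; tract C, with a fifth of the residents, has no coverage at all, which is exactly the kind of representativeness gap the tool is meant to surface.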

Africa Data Revolution Report 2018


Report by Jean-Paul Van Belle et al: “The Africa Data Revolution Report 2018 delves into the recent evolution and current state of open data – with an emphasis on Open Government Data – in the African data communities. It explores key countries across the continent, researches a wide range of open data initiatives, and benefits from global thematic expertise. This second edition improves on process, methodology and collaborative partnerships from the first edition.

It draws from country reports, existing global and continental initiatives, and key experts’ input, in order to provide a deep analysis of the actual impact of open data in the African context. In particular, this report features a dedicated Open Data Barometer survey as well as a special 2018 Africa Open Data Index regional edition surveying the status and impact of open data and dataset availability in 30 African countries. The research is complemented with six in-depth qualitative case studies featuring the impact of open data in Kenya, South Africa (Cape Town), Ghana, Rwanda, Burkina Faso and Morocco. The report was critically reviewed by an eminent panel of experts.

Findings: In some governments, there is a slow iterative cycle of innovation, adoption, resistance and re-alignment before Open Government Data (OGD) finally becomes institutionalized and reaches maturity. There is huge diversity between African governments in embracing open data, and each country presents a complex and unique picture. In several African countries, there appears to be genuine political will to open up government datasets, not only for increased transparency but also to achieve economic impacts and social equity and to stimulate innovation.

The role of open data intermediaries is crucial and has been insufficiently recognized in the African context. Open data in Africa needs a vibrant, dynamic, open and multi-tier data ecosystem if the datasets are to make a real impact. Citizens are rarely likely to access open data themselves. But the democratization of information and communication platforms has opened up opportunities among a large and diverse set of intermediaries to explore and combine relevant data sources, sometimes with private or leaked data. The news media, NGOs and advocacy groups, and to a much lesser extent academics and social or profit-driven entrepreneurs have shown that OGD can create real impact on the achievement of the SDGs…

The report encourages national policy makers and international funding or development agencies to consider the status, impact and future of open data in Africa on the basis of this research. Other stakeholders working with or for open data can hopefully also learn from what is happening on the continent. It is hoped that the findings and recommendations contained in the report will form the basis of a robust, informed and dynamic debate around open government data in Africa….(More)”.

EU Data Protection Rules and U.S. Implications


In Focus by the Congressional Research Service: “U.S. and European citizens are increasingly concerned about ensuring the protection of personal data, especially online. A string of high-profile data breaches at companies such as Facebook and Google have contributed to heightened public awareness. The European Union’s (EU) new General Data Protection Regulation (GDPR)—which took effect on May 25, 2018—has drawn the attention of U.S. businesses and other stakeholders, prompting debate on U.S. data privacy and protection policies.

Both the United States and the 28-member EU assert that they are committed to upholding individual privacy rights and ensuring the protection of personal data, including electronic data. However, data privacy and protection issues have long been sticking points in U.S.-EU economic and security relations, in part because of differences in U.S. and EU legal regimes and approaches to data privacy.

The GDPR highlights some of those differences and poses challenges for U.S. companies doing business in the EU. The United States does not broadly restrict cross-border data flows and has traditionally regulated privacy at a sectoral level to cover certain types of data. The EU considers the privacy of communications and the protection of personal data to be fundamental rights, which are codified in EU law. Europe’s history with fascist and totalitarian regimes informs the EU’s views on data protection and contributes to the demand for strict data privacy controls. The EU regards current U.S. data protection safeguards as inadequate; this has complicated the conclusion of U.S.-EU information-sharing agreements and raised concerns about U.S.-EU data flows….(More)”.

Data Trusts: Ethics, Architecture and Governance for Trustworthy Data Stewardship


Web Science Institute Paper by Kieron O’Hara: “In their report on the development of the UK AI industry, Wendy Hall and Jérôme Pesenti recommend the establishment of data trusts, “proven and trusted frameworks and agreements” that will “ensure exchanges [of data] are secure and mutually beneficial” by promoting trust in the use of data for AI. Hall and Pesenti leave the structure of data trusts open, and the purpose of this paper is to explore two questions: (a) what existing structures can data trusts exploit, and (b) what relationship do data trusts have to trusts as they are understood in law?

The paper defends the following thesis: a data trust works within the law to provide ethical, architectural and governance support for trustworthy data processing.

Data trusts are therefore both constraining and liberating. They constrain: they respect current law, so they cannot render currently illegal actions legal. They are intended to increase trust, and so they will typically act as further constraints on data processors, adding the constraints of trustworthiness to those of law. Yet they also liberate: if data processors are perceived as trustworthy, they will get improved access to data.

Most work on data trusts has up to now focused on gaining and supporting the trust of data subjects in data processing. However, all actors involved in AI – data consumers, data providers and data subjects – have trust issues which data trusts need to address.

Furthermore, it is not only personal data that creates trust issues; the same may be true of any dataset whose release might involve an organisation risking competitive advantage. The paper addresses four areas….(More)”.

Harnessing the Power of Open Data for Children and Families


Article by Kathryn L.S. Pettit and Rob Pitingolo: “Child advocacy organizations, such as members of the KIDS COUNT network, have proven the value of using data to advocate for policies and programs to improve the lives of children and families. These organizations use data to educate policymakers and the public about how children are faring in their communities. They understand the importance of high-quality information for policy and decisionmaking. And in the past decade, many state governments have embraced the open data movement. Their data portals promote government transparency and increase data access for a wide range of users inside and outside government.

At the request of the Annie E. Casey Foundation, which funds the KIDS COUNT network, the authors conducted research to explore how these state data efforts could bring greater benefits to local communities. Interviews with child advocates and open data providers confirmed the opportunity for child advocacy organizations and state governments to leverage open data to improve the lives of children and families. But accomplishing this goal will require new practices on both sides.

This brief first describes the current state of practice for child advocates using data and for state governments publishing open data. It then provides suggestions for what it would take from both sides to increase the use of open data to improve the lives of children and families. Child and family advocates will find five action steps in section 2. These steps encourage them to assess their data needs, build relationships with state data managers, and advocate for new data and preservation of existing data.
State agency staff will find five action steps in section 3. These steps describe how staff can engage diverse stakeholders, including agency staff beyond typical “data people” and data users outside government. Although this brief focuses on state-level institutions, local advocates and governments will find these lessons relevant. In fact, many of the lessons and best practices are based on pioneering efforts at the local level….(More)”.