Paper by Mara Maretti, Vanessa Russo & Emiliano del Gobbo: “The expression ‘open data’ relates to a system of informative and freely accessible databases that public administrations make generally available online in order to develop an informative network between institutions, enterprises and citizens. On this topic, using the semantic network analysis method, the research aims to investigate the communication structure and the governance of open data in the Twitter conversational environment. In particular, the research questions are: (1) Who are the main actors in the Italian open data infrastructure? (2) What are the main conversation topics online? (3) What are the pros and cons of the development and use (reuse) of open data in Italy? To answer these questions, we went through three research phases: (1) analysing the communication network, we identified the main influencers; (2) having identified the main actors, we analysed the online content in the Twittersphere to detect the semantic areas; (3) then, through an online focus group with the main open data influencers, we explored the characteristics of Italian open data governance. The research shows that: (1) there is an Italian open data governance strategy; (2) the Italian civic hacker community plays an important role as an influencer; but (3) there are weaknesses in governance and in practical reuse…(More)”.
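As a toy illustration of the first research phase (identifying influencers in the Twitter communication network), the sketch below ranks accounts by centrality in a mention/retweet graph. It is only a minimal approximation of that step, not the authors' actual pipeline: the use of networkx, the edge list and the account names are all assumptions made for the example.

```python
# Minimal sketch of phase (1): ranking accounts in a Twitter
# communication network by centrality. Illustrative only; the edge
# list and account names below are hypothetical.
import networkx as nx

# A directed edge "a -> b" means account "a" mentioned or retweeted
# account "b" in an #opendata tweet.
edges = [
    ("user_a", "user_b"),
    ("user_c", "user_b"),
    ("user_c", "user_d"),
    ("user_a", "user_d"),
    ("user_e", "user_b"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Two common proxies for influence: how often an account is mentioned
# or retweeted (in-degree) and its PageRank score.
in_degree = dict(G.in_degree())
pagerank = nx.pagerank(G)

for account in sorted(pagerank, key=pagerank.get, reverse=True)[:3]:
    print(account, in_degree[account], round(pagerank[account], 3))
```

In a graph like this, the accounts with the highest in-degree and PageRank are the ones other users amplify most, which is one common way of operationalising ‘influencer’.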
From (Horizontal and Sectoral) Data Access Solutions Towards Data Governance Systems
Paper by Wolfgang Kerber: “Starting with the assumption that, under certain conditions, mandatory solutions for access to privately held data can be necessary, this paper analyses the legal and regulatory instruments for the implementation of such data access solutions. After an analysis of the advantages and problems of horizontal versus sectoral access solutions, the main thesis of this paper is that focusing only on data access solutions is often not enough for achieving the desired positive effects on competition and innovation. An analysis of two examples, access to bank account data (PSD2: Second Payment Services Directive) and access to data of the connected car, shows that successful data access solutions might require an entire package of additional complementary regulatory solutions (e.g. regarding interoperability, standardisation, and safety and security), and therefore the analysis and regulatory design of entire data governance systems (based upon an economic market failure analysis). In the last part, important instruments that can be used within data governance systems are discussed, such as data trustee solutions…(More)”.
Demystifying the Role of Data Interoperability in the Access and Sharing Debate
Paper by Jörg Hoffmann and Begoña Gonzalez Otero: “In the current data access and sharing debate, data interoperability is widely proclaimed as being key for efficiently reaping the economic welfare-enhancing effects of further data re-use. Although we agree, we found that the current law and policy framework pertaining to data interoperability was missing a groundwork analysis. Without a clear understanding of the notions of interoperability, the role of data standards and application programming interfaces (APIs) in achieving this ambition, and the IP and trade secrets protection potentially hindering it, any regulatory analysis within the data access discussion will be incomplete. Any attempt at untangling the role of data interoperability in the access and sharing regimes requires a thorough understanding of the underlying technology and a common understanding of the different notions of data interoperability.
The paper first explains the technical complexity of interoperability and its enablers, namely data standards and application programming interfaces. It elaborates on why data interoperability comes in different degrees and emphasises that data interoperability is only indirectly tied to the data access right. Since data interoperability may be part of the legal obligations correlating to the access right, the scope of interoperability is and has already been subject to courts’ interpretation. While this may give some room for manoeuvre for balanced decision-making, it may not guarantee the ambition of efficient re-usability of data. This is why data governance market regulation under a public law approach is becoming more favourable. Yet, and this is elaborated in a second step, the paper builds on the assumption that interoperability should not become another policy goal on its own. This is followed by a competition economics assessment, taking into account that data interoperability is always a matter of degree and that a lack of data interoperability does not necessarily lead to market foreclosure of competitors or to harm to consumer welfare. Additionally, parts of application programming interfaces (APIs) may be protected under IP rights and trade secrets, which might conflict with data access rights. Instead of trying to resolve these conflicts within the respective exclusive-rights regimes, the paper concludes by suggesting that (sector-specific) data governance solutions should deal with this issue and align the different interests involved. This may provide for better, practical and well-balanced solutions instead of impractical and dysfunctional exceptions and limitations within the IP and trade secrets regimes…(More)”.
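To make the technical layer discussed in the abstract more concrete, here is a minimal sketch, under assumptions of our own, of how a shared data standard (expressed as a JSON Schema) supports syntactic interoperability when data are exchanged through an API. The schema, field names and values are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not from the paper): a shared data standard,
# expressed as a JSON Schema, lets two independent systems exchange
# data and still interpret its structure the same way.
import json
from jsonschema import validate, ValidationError

# A minimal "data standard" both parties agree on (hypothetical).
PAYMENT_ACCOUNT_SCHEMA = {
    "type": "object",
    "properties": {
        "iban": {"type": "string"},
        "balance": {"type": "number"},
        "currency": {"type": "string", "enum": ["EUR", "USD"]},
    },
    "required": ["iban", "balance", "currency"],
}

def export_account(record: dict) -> str:
    """Data holder: validate against the standard before exposing it via an API."""
    validate(instance=record, schema=PAYMENT_ACCOUNT_SCHEMA)
    return json.dumps(record)

def import_account(payload: str) -> dict:
    """Data recipient: validate again on ingestion, then re-use the data."""
    record = json.loads(payload)
    validate(instance=record, schema=PAYMENT_ACCOUNT_SCHEMA)
    return record

try:
    payload = export_account({"iban": "DE00 0000 0000", "balance": 120.5, "currency": "EUR"})
    print(import_account(payload))
except ValidationError as err:
    print("Payload does not conform to the shared standard:", err.message)
```

Agreement on such a schema only settles the syntactic level; as the paper stresses, interoperability is a matter of degree, and the parties may still attach different meanings to the same fields at the semantic level.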
The Expertise Curse: How Policy Expertise Can Hinder Responsiveness
Report by Miguel Pereira and Patrik Öhberg: “We argue that policy expertise may constrain the ability of politicians to be responsive. Legislators with more knowledge and experience in a given policy area have more confidence in their own issue-specific positions. Enhanced confidence, in turn, may lead legislators to discount opinions they disagree with. Two experiments with Swedish politicians support our argument. First, we find that officials with more expertise in a given domain are more likely to dismiss appeals from voters who hold contrasting opinions, regardless of their specific position on the policy, and less likely to accept that opposing views may represent the majority opinion. Consistent with the proposed mechanism, in a second experiment we show that inducing perceptions of expertise increases self-confidence. The results suggest that representatives with more expertise in a given area are paradoxically less capable of voicing public preferences in that domain. The study provides a novel explanation for distortions in policy responsiveness….(More)”
Evaluating the fake news problem at the scale of the information ecosystem
Paper by Jennifer Allen, Baird Howland, Markus Mobius, David Rothschild and Duncan J. Watts: “‘Fake news,’ broadly defined as false or misleading information masquerading as legitimate news, is frequently asserted to be pervasive online, with serious consequences for democracy. Using a unique multimode dataset that comprises a nationally representative sample of mobile, desktop, and television consumption, we refute this conventional wisdom on three levels. First, news consumption of any sort is heavily outweighed by other forms of media consumption, comprising at most 14.2% of Americans’ daily media diets. Second, to the extent that Americans do consume news, it is overwhelmingly from television, which accounts for roughly five times as much news consumption as online. Third, fake news comprises only 0.15% of Americans’ daily media diet. Our results suggest that the origins of public misinformedness and polarization are more likely to lie in the content of ordinary news or the avoidance of news altogether than in overt fakery…(More)”.
Behavioral nudges reduce failure to appear for court
Paper by Alissa Fishbane, Aurelie Ouss and Anuj K. Shah: “Each year, millions of Americans fail to appear in court for low-level offenses, and warrants are then issued for their arrest. In two field studies in New York City, we make critical information salient by redesigning the summons form and providing text message reminders. These interventions reduce failures to appear by 13-21% and lead to 30,000 fewer arrest warrants over a 3-year period. In lab experiments, we find that while criminal justice professionals see failures to appear as relatively unintentional, laypeople believe they are more intentional. These lay beliefs reduce support for policies that make court information salient and increase support for punishment. Our findings suggest that criminal justice policies can be made more effective and humane by anticipating human error in unintentional offenses….(More)”
Public value and platform governance
UCL Institute for Innovation and Public Purpose (IIPP) Working Paper: “The market size and strength of the major digital platform companies have invited international concern about how such firms should best be regulated to serve the interests of wider society, with a particular emphasis on the need for new anti-trust legislation. Using a normative innovation systems approach, this paper investigates how current anti-trust models may insufficiently address the value-extracting features of existing data-intensive and platform-oriented industry behaviour and business models. To do so, we employ the concept of economic rents to investigate how digital platforms create and extract value. Two forms of rent are elaborated: ‘network monopoly rents’ and ‘algorithmic rents’. By identifying such rents more precisely, policymakers and researchers can better direct regulatory investigations, as well as broader industrial and innovation policy approaches, to shape the features of platform-driven digital markets…(More)”.
Institutional Change and Institutional Persistence
Paper by Daron Acemoglu, Georgy Egorov, and Konstantin Sonin: “In this essay, we provide a simple conceptual framework to elucidate the forces that lead to institutional persistence and change. Our framework is based on a dynamic game between different groups, who care both about current policies and institutions and about future policies, which are themselves determined by current institutional choices; it clarifies the forces that lead to the most extreme form of institutional persistence (“institutional stasis”) and the potential drivers of institutional change. We further study the strategic stability of institutions, which arises when institutions persist because of fear of subsequent, less beneficial changes that would follow initial reforms. More importantly, we emphasize that, despite the popularity of ideas based on institutional stasis in the economics and political science literatures, most institutions are in a constant state of flux, but their trajectory may still be shaped by past institutional choices, thus exhibiting “path-dependent change”, so that initial conditions determine both the subsequent trajectories of institutions and how they respond to shocks. We conclude the essay by discussing how institutions can be designed to bolster stability, the relationship between social mobility and institutions, and the interplay between culture and institutions…(More)”.
Lessons learned from AI ethics principles for future actions
Paper by Merve Hickok: “As the use of artificial intelligence (AI) systems became significantly more prevalent in recent years, concerns about how these systems collect, use and process big data have also increased. To address these concerns and advocate for ethical and responsible development and implementation of AI, non-governmental organizations (NGOs), research centers, private companies, and governmental agencies published more than 100 AI ethics principles and guidelines. This first wave was followed by a series of suggested frameworks, tools, and checklists that attempt a technical fix to issues brought up in the high-level principles. Principles are important to create a common understanding of priorities and are the groundwork for future governance and opportunities for innovation. However, a review of these documents based on their country of origin and funding entities shows that private companies from the US-West axis dominate the conversation. In the meantime, several cases have surfaced that demonstrate biased algorithms and their impact on individuals and society. The field of AI ethics is urgently calling for tangible action to move from high-level abstractions and conceptual arguments towards applying ethics in practice and creating accountability mechanisms. However, lessons must be learned from the shortcomings of AI ethics principles to ensure that future investments, collaborations, standards, codes or legislation reflect the diversity of voices and incorporate the experiences of those who are already impacted by biased algorithms…(More)”.
Structuring Techlaw
Paper by Rebecca Crootof and BJ Ard: “Technological breakthroughs challenge core legal assumptions and generate regulatory debates. Practitioners and scholars usually tackle these questions by examining the impacts of a particular technology within conventional legal subjects — say, by considering how drones should be regulated under privacy law, property law, or the law of armed conflict. While individually useful, these siloed analyses mask the repetitive nature of the underlying questions and necessitate the regular reinvention of the regulatory wheel. An overarching framework — one which can be employed across technologies and across subjects — is needed.
The fundamental challenge of tech-law is not how to best regulate novel technologies, but rather how to best address familiar forms of uncertainty in new contexts. Accordingly, we construct a three-part framework, designed to encourage a more thoughtful resolution of tech-law questions. It:
(1) delineates the three types of tech-fostered legal uncertainty, which facilitates recognizing common issues;
(2) requires a considered selection between permissive and precautionary approaches to technological regulation, given their differing distributive consequences; and
(3) highlights tech-law-specific considerations when extending extant law, creating new law, or reassessing a legal regime.
This structure emphasizes the possibility of considered and purposeful intervention in the iterative and co-constructive relationship between law and technology. By making it easier to learn from the rich history of prior dilemmas and to anticipate future issues, this framework enables policymakers, judges, and other legal actors to make more just and effective regulatory decisions going forward…(More)”.