Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information


Paper by Guido Noto La Diega: “Nowadays algorithms can decide if one can get a loan, is allowed to cross a border, or must go to prison. Artificial intelligence techniques (natural language processing and machine learning in the first place) enable private and public decision-makers to analyse big data in order to build profiles, which are used to make decisions in an automated way.

This work presents ten arguments against algorithmic decision-making. These revolve around the concepts of ubiquitous discretionary interpretation, holistic intuition, algorithmic bias, the three black boxes, psychology of conformity, power of sanctions, civilising force of hypocrisy, pluralism, empathy, and technocracy.

The lack of transparency of the algorithmic decision-making process does not stem merely from the characteristics of the relevant techniques used, which can make it impossible to access the rationale of the decision. It also stems from the abuse of, and overlap between, intellectual property rights (the “legal black box”). In the US, nearly half a million patented inventions concern algorithms; more than 67% of the algorithm-related patents were issued over the last ten years and the trend is accelerating.

To counter the increased monopolisation of algorithms by means of intellectual property rights (with trade secrets leading the way), this paper presents three legal routes that enable citizens to ‘open’ the algorithms.

First, copyright and patent exceptions, as well as trade secrets, are discussed.

Second, the GDPR is critically assessed. In principle, data controllers are not allowed to use algorithms to take decisions that have legal effects on the data subject’s life or similarly significantly affect them. However, when they are allowed to do so, the data subject still has the right to obtain human intervention, to express their point of view, as well as to contest the decision. Additionally, the data controller shall provide meaningful information about the logic involved in the algorithmic decision.

Third, this paper critically analyses the first known case of a court using the access right under the freedom of information regime to grant an injunction to release the source code of the computer program that implements an algorithm.

Only an integrated approach – which takes into account intellectual property, data protection, and freedom of information – may provide the citizen affected by an algorithmic decision with an effective remedy, as required by the Charter of Fundamental Rights of the EU and the European Convention on Human Rights….(More)”.

Who Wants to Know? The Political Economy of Statistical Capacity in Latin America


IADB paper by Dargent, Eduardo; Lotta, Gabriela; Mejía-Guerra, José Antonio; Moncada, Gilberto: “Why is there such heterogeneity in the level of technical and institutional capacity in national statistical offices (NSOs)? Although there is broad consensus about the importance of statistical information as an essential input for decision making in the public and private sectors, this does not generally translate into a recognition of the importance of the institutions responsible for the production of data. In the context of the role of NSOs in government and society, this study seeks to explain the variation in regional statistical capacity by comparing historical processes and political economy factors in 10 Latin American countries. To do so, it proposes a new theoretical and methodological framework and offers recommendations to strengthen the institutional standing of NSOs….(More)”.

Preprints: The What, The Why, The How.


Center for Open Science: “The use of preprint servers by scholarly communities is definitely on the rise. Many developments in the past year indicate that preprints will be a huge part of the research landscape. Developments with DOIs, changes in funder expectations, and the launch of many new services suggest that preprints will become much more pervasive and reach beyond the communities where they started.

From funding agencies that want to realize impact from their efforts sooner to researchers who want to disseminate their work more quickly, the growth of these servers, and of the number of works being shared, has been substantial. At COS, we already host twenty different organizations’ services via the OSF Preprints platform.

So what’s a preprint and what is it good for? A preprint is a manuscript submitted to a dedicated repository (like OSF Preprints, PeerJ, bioRxiv, or arXiv) prior to peer review and formal publication. Some of those repositories may also accept other types of research outputs, like working papers and posters or conference proceedings. Getting a preprint out there has a variety of benefits for authors and other stakeholders in the research:

  • They increase the visibility of research, and sooner. While traditional papers can languish in the peer review process for months, even years, a preprint is live the minute it is submitted and moderated (if the service moderates). This means your work gets indexed by Google Scholar and Altmetric, and discovered by more relevant readers than ever before.
  • You can get feedback on your work and make improvements prior to journal submission. Many authors have publicly commented on the recommendations for improvements they’ve received on their preprint that strengthened their work and even led to finding new collaborators.
  • Papers with an accompanying preprint get cited 30% more often than papers without. This research from PeerJ sums it up, and that’s a big benefit for scholars looking to get more visibility and impact from their efforts.
  • Preprints get a permanent DOI, which makes them part of the freely accessible scientific record forever. This means others can rely on that permanence when citing your work in their research. It also means that your idea, developed by you, has a “stake in the ground” where potential scooping and intellectual theft are concerned.

So, preprints can really help lubricate scientific progress. But there are some things to keep in mind before you post. Usually, you can’t post a preprint of an article that’s already been submitted to a journal for peer review. Policies among journals vary widely, so it’s important to check with the journal you’re interested in sending your paper to BEFORE you submit a preprint that might later be published. A good resource for doing this is JISC’s SHERPA/RoMEO database. It’s also a good idea to understand the licensing choices available. At OSF Preprints, we recommend the CC-BY license suite, but you can check choosealicense.com or https://osf.io/6uupa/ for good overviews on how best to license your submissions….(More)”.

On Preferring A to B, While Also Preferring B to A


Paper by Cass R. Sunstein: “In important contexts, people prefer option A to option B when they evaluate the two separately, but prefer option B to option A when they evaluate the two jointly. In consumer behavior, politics, and law, such preference reversals present serious puzzles about rationality and behavioral biases.

They are often a product of the pervasive problem of “evaluability.” Some important characteristics of options are difficult or impossible to assess in separate evaluation, and hence choosers disregard or downplay them; those characteristics are much easier to assess in joint evaluation, where they might be decisive. But in joint evaluation, certain characteristics of options may receive excessive weight, because they do not much affect people’s actual experience or because the particular contrast between joint options distorts people’s judgments. In joint as well as separate evaluation, people are subject to manipulation, though for different reasons.

It follows that neither mode of evaluation is reliable. The appropriate approach will vary depending on the goal of the task – increasing consumer welfare, preventing discrimination, achieving optimal deterrence, or something else. Under appropriate circumstances, global evaluation would be much better, but it is often not feasible. These conclusions bear on preference reversals in law and policy, where joint evaluation is often better, but where separate evaluation might ensure that certain characteristics or features of situations do not receive excessive weight…(More)”.

Using Satellite Imagery to Revolutionize Creation of Tax Maps and Local Revenue Collection


World Bank Policy Research Paper by Daniel Ayalew Ali, Klaus Deininger and Michael Wild: “The technical complexity of ensuring that tax rolls are complete and valuations current is often perceived as a major barrier to bringing in more property tax revenues in developing countries.

This paper shows how high-resolution satellite imagery makes it possible to assess the completeness of existing tax maps by estimating built-up areas based on building heights and footprints. Together with information on sales prices from the land registry, targeted surveys, and routine statistical data, this makes it possible to use mass valuation procedures to generate tax maps. The example of Kigali illustrates the reliability of the method and the potentially far-reaching revenue impacts. Estimates show that heightened compliance and a move to a 1 percent ad valorem tax would yield a tenfold increase in revenue from public land….(More)”.
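To make the mass-valuation step concrete, here is a minimal sketch in Python of the general technique (a hedonic regression plus an ad valorem rate). Everything in it is hypothetical – the parcel attributes, coefficients, and sample sizes – and it is not the authors’ actual model; it only illustrates how imagery-derived footprints and heights, combined with registry sales prices, can be used to value an entire tax roll.

```python
# Minimal mass-valuation sketch (illustrative only; not the paper's model).
# Fits a hedonic price model on parcels with known sales prices, then
# predicts values for the rest of the tax roll and applies a 1% ad valorem rate.
import numpy as np

rng = np.random.default_rng(0)
n_parcels = 1000

# Hypothetical parcel attributes derived from satellite imagery:
# building footprint (m^2) and estimated height (m).
footprint = rng.uniform(50, 400, size=n_parcels)
height = rng.uniform(3, 15, size=n_parcels)

# Parcels with registry sales prices (training sample); prices are simulated.
sold = rng.random(n_parcels) < 0.2
price = 200 * footprint[sold] + 500 * height[sold] + rng.normal(0, 5000, sold.sum())

# Hedonic regression: price ~ footprint + height (ordinary least squares).
X_sold = np.column_stack([np.ones(sold.sum()), footprint[sold], height[sold]])
beta, *_ = np.linalg.lstsq(X_sold, price, rcond=None)

# Mass valuation: predict a value for every parcel on the tax map.
X_all = np.column_stack([np.ones(n_parcels), footprint, height])
values = X_all @ beta

# A 1% ad valorem tax across the (now complete) roll.
revenue = 0.01 * values.sum()
print(f"Estimated annual revenue: {revenue:,.0f}")
```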

User Perceptions of Privacy in Smart Homes


Paper by Serena Zheng, Marshini Chetty, and Nick Feamster: “Despite the increasing presence of Internet of Things (IoT) devices inside the home, we know little about how users feel about their privacy when living with Internet-connected devices that continuously monitor and collect data in their homes. To gain insight into this state of affairs, we conducted eleven semi-structured interviews with owners of smart homes, investigating privacy values and expectations.

In this paper, we present the findings that emerged from our study: First, users prioritize the convenience and connectedness of their smart homes, and these values dictate their privacy opinions and behaviors. Second, user opinions about who should have access to their smart home data depend on the perceived benefit. Third, users assume their privacy is protected because they trust the manufacturers of their IoT devices. Our findings raise several implications for IoT privacy, including the need to design for privacy and to develop evaluation standards….(More)”.

Can Fact-checking Prevent Politicians from Lying?


Paper by Chloe Lim: “Journalists now regularly trumpet fact-checking as an important tool to hold politicians accountable for their public statements, but fact-checking’s effect has only been assessed anecdotally and in experiments on politicians holding lower-level offices.

Using a rigorous research design to estimate the effects of fact-checking on presidential candidates, this paper shows that a fact-checker deeming a statement false causes a 9.5 percentage point reduction in the probability that the candidate repeats the claim. To eliminate alternative explanations that could confound this estimate, I use two types of difference-in-differences analyses, each using true-rated claims and “checkable but unchecked” claims, a placebo test using hypothetical fact-check dates, and a topic model to condition on the topic of the candidate’s statement.

This paper contributes to the literature on how news media can hold politicians accountable, showing that when news organizations label a statement as inaccurate, they affect candidate behavior…(More)”.
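To illustrate the difference-in-differences logic behind an estimate like this, here is a minimal sketch in Python. The simulated data, variable names, and built-in effect size are all hypothetical – this is not Lim’s dataset or specification – but it shows how comparing the pre/post change for fact-checked claims against the same change for unchecked claims nets out common time trends.

```python
# Difference-in-differences sketch (illustrative; hypothetical data only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000

df = pd.DataFrame({
    # 1 if the claim was rated false, 0 if "checkable but unchecked"
    "checked": rng.integers(0, 2, n),
    # 1 if the observation falls after the (actual or matched) fact-check date
    "post": rng.integers(0, 2, n),
})
# Hypothetical outcome: probability the candidate repeats the claim,
# with a built-in -9.5pp effect for checked claims after the check.
p = 0.30 - 0.095 * df["checked"] * df["post"]
df["repeated"] = rng.random(n) < p

# DiD estimate: the (post - pre) change for checked claims minus the same
# change for unchecked claims, which differences out common time trends.
means = df.groupby(["checked", "post"])["repeated"].mean()
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print(f"DiD estimate of the fact-checking effect: {did:.3f}")
```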

Data Pollution


Paper by Omri Ben-Shahar: “Digital information is the fuel of the new economy. But like the old economy’s carbon fuel, it also pollutes. Harmful “data emissions” are leaked into the digital ecosystem, disrupting social institutions and public interests. This article develops a novel framework – data pollution – to rethink the harms the data economy creates and the way they have to be regulated. It argues that social intervention should focus on the external harms from the collection and misuse of personal data. The article challenges the hegemony of the prevailing view – that the harm from the digital data enterprise is to the privacy of the people whose information is used. It claims that a central problem has been largely ignored: how the information individuals give affects others, and how it undermines and degrades public goods and interests. The data pollution metaphor offers a novel perspective on why existing regulatory tools – torts, contracts, and disclosure law – are ineffective, mirroring their historical futility in curbing the external social harms from environmental pollution. The data pollution framework also opens up a rich roadmap for new regulatory devices – an environmental law for data protection – that focus on controlling these external effects. The article examines whether the general tools society has long used to control industrial pollution – production restrictions, carbon taxes, and emissions liability – could be adapted to govern data pollution….(More)”.

Livestreaming Pollution: A New Form of Public Disclosure and a Catalyst for Citizen Engagement?


NBER Working Paper by Emiliano Huet-Vaughn, Nicholas Muller, and Yen-Chia Hsu: “Most environmental policy assumes the form of standards and enforcement. Scarce public budgets motivate the use of disclosure laws. This study explores a new form of pollution disclosure: real-time visual evidence of emissions provided on a free, public website. The paper tests whether the disclosure of visual evidence of emissions affects the nature and frequency of phone calls to the local air quality regulator. First, we test whether the presence of the camera affects the frequency of calls to the local air quality regulator about the facility monitored by the camera. Second, we test the relationship between the camera being active and the number of complaints about facilities other than the plant recorded by the camera. Our empirical results suggest that the camera did not affect the frequency of calls to the regulator about the monitored facility. However, the count of complaints pertaining to other prominent industrial polluters in the area – steel manufacturing plants – is positively associated with the camera being active. We propose two behavioral reasons for this finding: the prior knowledge hypothesis and the affect heuristic. This study argues that visual evidence is a feasible approach to environmental oversight even during periods with diminished regulatory capacity….(More)”.
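The second test described above is, at bottom, a count-data regression. Here is a minimal sketch in Python of that kind of test using a Poisson model; the daily counts, rates, and variable names are hypothetical, not the paper’s dataset or specification.

```python
# Poisson regression sketch: complaint counts vs. camera activity
# (illustrative only; hypothetical data, not the paper's specification).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
days = 500

camera_active = rng.integers(0, 2, days)  # 1 on days the livestream was up
# Hypothetical daily complaint counts about *other* facilities, with a
# higher rate on camera-active days (mean 1.3 vs. 1.0).
complaints = rng.poisson(lam=1.0 + 0.3 * camera_active)

# Poisson regression of complaint counts on camera activity.
X = sm.add_constant(camera_active)
model = sm.GLM(complaints, X, family=sm.families.Poisson()).fit()
print(model.params)  # a positive coefficient on camera_active => positive association
```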

A Rule of Persons, Not Machines: The Limits of Legal Automation


Paper by Frank A. Pasquale: “For many legal futurists, attorneys’ work is a prime target for automation. They view the legal practice of most businesses as algorithmic: data (such as facts) are transformed into outputs (agreements or litigation stances) via application of set rules. These technophiles promote substituting computer code for contracts and descriptions of facts now written by humans. They point to early successes in legal automation as proof of concept. TurboTax has helped millions of Americans file taxes, and algorithms have taken over certain aspects of stock trading. Corporate efforts to “formalize legal code” may bring new efficiencies in areas of practice characterized by both legal and factual clarity.

However, legal automation can also elide or exclude important human values, necessary improvisations, and irreducibly deliberative governance. Due process, appeals, and narratively intelligible explanation from persons, for persons, depend on forms of communication that are not reducible to software. Language is constitutive of these aspects of law. To preserve accountability and a humane legal order, these reasons must be expressed in language by a responsible person. This basic requirement for legitimacy limits legal automation in several contexts, including corporate compliance, property recordation, and contracting. A robust and ethical legal profession respects the flexibility and subtlety of legal language as a prerequisite for a just and accountable social order. It ensures a rule of persons, not machines…(More)”