We Need to Save Ignorance From AI


Christina Leuker and Wouter van den Bos in Nautilus: “After the fall of the Berlin Wall, East German citizens were offered the chance to read the files kept on them by the Stasi, the much-feared Communist-era secret police service. To date, it is estimated that only 10 percent have taken the opportunity.

In 2007, James Watson, the co-discoverer of the structure of DNA, asked that he not be given any information about his APOE gene, one allele of which is a known risk factor for Alzheimer’s disease.

Most people tell pollsters that, given the choice, they would prefer not to know the date of their own death—or even the future dates of happy events.

Each of these is an example of willful ignorance. Socrates may have made the case that the unexamined life is not worth living, and Hobbes may have argued that curiosity is mankind’s primary passion, but many of our oldest stories actually describe the dangers of knowing too much. From Adam and Eve and the tree of knowledge to Prometheus stealing the secret of fire, they teach us that real-life decisions need to strike a delicate balance between choosing to know, and choosing not to.

But what if a technology came along that shifted this balance unpredictably, complicating how we make decisions about when to remain ignorant? That technology is here: It’s called artificial intelligence.

AI can find patterns and make inferences using relatively little data. Only a handful of Facebook likes are necessary to predict your personality, race, and gender, for example. Another computer algorithm claims it can distinguish between homosexual and heterosexual men with 81 percent accuracy, and homosexual and heterosexual women with 71 percent accuracy, based on their picture alone. An algorithm named COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) can predict criminal recidivism from data like juvenile arrests, criminal records in the family, education, social isolation, and leisure activities with 65 percent accuracy….

Recently, though, the psychologist Ralph Hertwig and legal scholar Christoph Engel have published an extensive taxonomy of motives for deliberate ignorance. They identified two sets of motives that are particularly relevant to the need for ignorance in the face of AI.

The first set of motives revolves around impartiality and fairness. Simply put, knowledge can sometimes corrupt judgment, and we often choose to remain deliberately ignorant in response. For example, peer reviews of academic papers are usually anonymous. Insurance companies in most countries are not permitted to know all the details of their clients’ health before they enroll; they only know general risk factors. This type of consideration is particularly relevant to AI, because AI can produce highly prejudicial information….(More)”.

Essentials of the Right of Access to Public Information: An Introduction


Introduction by Blanke, Hermann-Josef and Perlingeiro, Ricardo in the book “The Right of Access to Public Information: An International Comparative Legal Survey”: “The first freedom of information law was enacted in Sweden back in 1766 as the “Freedom of the Press and the Right of Access to Public Records Act”. It sets an example even today. However, the “triumph” of the freedom of information did not take place until much later. Many Western freedom of information laws arose from the American Freedom of Information Act, which was signed into law by President L.B. Johnson in 1966. This Act obliges all administrative authorities to provide information to citizens and imposes any necessary limitations. In an exemplary manner, it standardizes the objective of administrative control to protect citizens from government interference with their fundamental rights. Over 100 countries around the world have meanwhile implemented some form of freedom of information legislation. The importance of the right of access to information as an aspect of transparency and a condition for the rule of law and democracy is now also becoming apparent in international treaties at a regional level. This article provides an overview on the crucial elements and the guiding legal principles of transparency legislation, also by tracing back the lines of development of national and international case-law….(More)”.

Personal Data v. Big Data: Challenges of Commodification of Personal Data


Maria Bottis and George Bouchagiar in the Open Journal of Philosophy: “Any firm today may, at little or no cost, build its own infrastructure to process personal data for commercial, economic, political, technological or any other purposes. Society has, therefore, turned into a privacy-unfriendly environment. The processing of personal data is essential for multiple economically and socially useful purposes, such as health care, education or terrorism prevention. But firms view personal data as a commodity, as a valuable asset, and heavily invest in processing for private gains. This article studies the potential to subject personal data to trade secret rules, so as to ensure the users’ control over their data without limiting the data’s free movement, and examines some positive scenarios of attributing commercial value to personal data….(More)”.

Mapping Puerto Rico’s Hurricane Migration With Mobile Phone Data


Martin Echenique and Luis Melgar at CityLab: “It is well known that the U.S. Census Bureau keeps track of state-to-state migration flows. But that’s not the case with Puerto Rico. Most of the publicly known numbers related to the post-Maria diaspora from the island to the continental U.S. were based on estimates, and neither state nor federal institutions kept track of how many Puerto Ricans had left (or returned) after the storm ravaged the entire territory last September.

But Teralytics, a New York-based tech company with offices in Zurich and Singapore, has developed a map that reflects exactly how, when, and where Puerto Ricans have moved between August 2017 and February 2018. They did it by tracking data that was harvested from a sample of nearly 500,000 smartphones in partnership with one major undisclosed U.S. cell phone carrier….

The usefulness of this kind of geo-referenced data is clear in disaster relief efforts, especially when it comes to developing accurate emergency planning and determining when and where the affected population is moving.

“Generally speaking, people have their phones with them the entire time. This tells you where people are, where they’re going to, coming from, and movement patterns,” said Steven Bellovin, a computer science professor at Columbia University and former chief technologist for the U.S. Federal Trade Commission. “It could be very useful for disaster-relief efforts.”…(More)”.

Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information


Paper by Guido Noto La Diega: “Nowadays algorithms can decide if one can get a loan, is allowed to cross a border, or must go to prison. Artificial intelligence techniques (natural language processing and machine learning in the first place) enable private and public decision-makers to analyse big data in order to build profiles, which are used to make decisions in an automated way.

This work presents ten arguments against algorithmic decision-making. These revolve around the concepts of ubiquitous discretionary interpretation, holistic intuition, algorithmic bias, the three black boxes, psychology of conformity, power of sanctions, civilising force of hypocrisy, pluralism, empathy, and technocracy.

The lack of transparency of the algorithmic decision-making process does not stem merely from the characteristics of the relevant techniques used, which can make it impossible to access the rationale of the decision. It depends also on the abuse of and overlap between intellectual property rights (the “legal black box”). In the US, nearly half a million patented inventions concern algorithms; more than 67% of the algorithm-related patents were issued over the last ten years and the trend is increasing.

To counter the increased monopolisation of algorithms by means of intellectual property rights (with trade secrets leading the way), this paper presents three legal routes that enable citizens to ‘open’ the algorithms.

First, copyright and patent exceptions, as well as trade secrets are discussed.

Second, the GDPR is critically assessed. In principle, data controllers are not allowed to use algorithms to take decisions that have legal effects on the data subject’s life or similarly significantly affect them. However, when they are allowed to do so, the data subject still has the right to obtain human intervention, to express their point of view, as well as to contest the decision. Additionally, the data controller shall provide meaningful information about the logic involved in the algorithmic decision.

Third, this paper critically analyses the first known case of a court using the access right under the freedom of information regime to grant an injunction to release the source code of the computer program that implements an algorithm.

Only an integrated approach – which takes into account intellectual property, data protection, and freedom of information – may provide the citizen affected by an algorithmic decision with an effective remedy as required by the Charter of Fundamental Rights of the EU and the European Convention on Human Rights….(More)”.

Ghost Cities: Built but Never Inhabited


Civic Data Design Lab at UrbanNext: “Ghost Cities are vacant neighborhoods and sometimes whole cities that were built but were never inhabited. Their existence is a physical manifestation of Chinese overdevelopment in real estate and the dependence on housing as an investment strategy. Little data exists establishing the location and extent of these Ghost Cities in China. MIT’s Civic Data Design Lab developed a model using data scraped from Chinese social media sites and Baidu (Chinese Google Maps) to create one of the first maps identifying the locations of Chinese Ghost Cities….

Quantifying the extent and location of Ghost Cities is complicated by the fact that the Chinese government keeps a tight hold on data about sales and occupancy of buildings. Even local planners may have a hard time acquiring it. The Civic Data Design Lab developed a model to identify Ghost Cities based on the idea that amenities (grocery stores, hair salons, restaurants, schools, retail, etc.) are the mark of a healthy community and the lack of amenities might indicate locations where no one lives. Given the lack of openly available data in China, data was scraped from Chinese social media and websites, including Dianping (Chinese Yelp), Amap (Chinese Map Quest), Fang (Chinese Zillow), and Baidu (Chinese Google Maps) using openly accessible Application Programming Interfaces (APIs).

Using data scraped from social media sites in Chengdu and Shenyang, the model was tested using 300 m x 300 m grid cells marking residential locations. Each grid cell was given an amenity accessibility score based on the distance and clustering of amenities nearby. Residential areas with a cluster of low scores were marked as Ghost Cities. The results were ground-truthed through site visits, documenting each location with aerial drone photography and interviews with local stakeholders.
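The scoring step described here can be sketched in a few lines. This is a hypothetical illustration, not the Civic Data Design Lab's actual model: the distance weighting, the 1 km radius, and the flagging threshold are all assumptions, and the final clustering of low-scoring cells is omitted for brevity.

```python
import math

CELL_SIZE_M = 300  # residential grid cells of 300 m x 300 m

def distance_m(a, b):
    """Euclidean distance between two (x, y) points in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def amenity_score(cell_center, amenities, radius_m=1000):
    """Score a cell by how many amenities lie nearby, weighting closer ones higher.

    Each amenity within radius_m contributes between 0 and 1, decaying
    linearly with distance; the radius and decay are illustrative choices.
    """
    score = 0.0
    for amenity in amenities:
        d = distance_m(cell_center, amenity)
        if d <= radius_m:
            score += 1.0 - d / radius_m
    return score

def flag_ghost_cells(cells, amenities, threshold=1.0):
    """Mark residential cells whose amenity score falls below a threshold."""
    return [c for c in cells if amenity_score(c, amenities) < threshold]

# Example: one cell near a cluster of amenities, one isolated cell.
amenities = [(100, 100), (200, 150), (150, 300)]
cells = [(150, 150), (5000, 5000)]
print(flag_ghost_cells(cells, amenities))  # → [(5000, 5000)]
```

A production version would use geodesic rather than planar distances and, as the excerpt notes, would only label an area a Ghost City when low-scoring cells cluster together rather than occur in isolation.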

The model worked well at documenting under-utilized residential locations in these Chinese cities, picking up everything from vacant housing and stalled construction to abandoned older residential locations, creating the first data set that marks risk in the Chinese real estate market. The research shows that data available through social media can help locate and estimate risk in the Chinese real estate market. Perhaps more importantly, however, identifying where these areas are concentrated can help city planners, developers and local citizens make better investment decisions and address the risk created by these under-utilized developments….(More)”.

Can crowdsourcing scale fact-checking up, up, up? Probably not, and here’s why


Mevan Babakar at NiemanLab: “We foolishly thought that harnessing the crowd was going to require fewer human resources, when in fact it required, at least at the micro level, more.”….There’s no end to the need for fact-checking, but fact-checking teams are usually small and struggle to keep up with the demand. In recent months, organizations like WikiTribune have suggested crowdsourcing as an attractive, low-cost way that fact-checking could scale.

As the head of automated fact-checking at the U.K.’s independent fact-checking organization Full Fact, I’ve had a lot of time to think about these suggestions, and I don’t believe that crowdsourcing can solve the fact-checking bottleneck. It might even make it worse. But — as two notable attempts, TruthSquad and FactcheckEU, have shown — even if crowdsourcing can’t help scale the core business of fact-checking, it could help streamline activities that take place around it.

Think of crowdsourced fact-checking as including three components: speed (how quickly the task can be done), complexity (how difficult the task is to perform; how much oversight it needs), and coverage (the number of topics or areas that can be covered). You can optimize for (at most) two of these at a time; the third has to be sacrificed.

High-profile examples of crowdsourcing like Wikipedia, Quora, and Stack Overflow harness and gather collective knowledge, and have proven that large crowds can be used in meaningful ways for complex tasks across many topics. But the tradeoff is speed.

Projects like Gender Balance (which asks users to identify the gender of politicians) and Democracy Club Candidates (which crowdsources information about election candidates) have shown that small crowds can have a big effect when it comes to simple tasks, done quickly. But the tradeoff is broad coverage.

At Full Fact, during the 2015 U.K. general election, we had 120 volunteers aid our media monitoring operation. They looked through the entire media output every day and extracted the claims being made. The tradeoff here was that the task wasn’t very complex (it didn’t need oversight, and we only had to do a few spot checks).

But we do have two examples of projects that have operated at both high levels of complexity, within short timeframes, and across broad areas: TruthSquad and FactCheckEU….(More)”.

When Technology Gets Ahead of Society


Tarun Khanna at Harvard Business Review: “Drones, originally developed for military purposes, weren’t approved for commercial use in the United States until 2013. When that happened, it was immediately clear that they could be hugely useful to a whole host of industries—and almost as quickly, it became clear that regulation would be a problem. The new technology raised multiple safety and security issues, there was no consensus on who should write rules to mitigate those concerns, and the knowledge needed to develop the rules didn’t yet exist in many cases. In addition, the little flying robots made a lot of people nervous.

Such regulatory, logistical, and social barriers to adopting novel products and services are very common. In fact, technology routinely outstrips society’s ability to deal with it. That’s partly because tech entrepreneurs are often insouciant about the legal and social issues their innovations birth. Although electric cars are subsidized by the federal government, Tesla has run afoul of state and local regulations because it bypasses conventional dealers to sell directly to consumers. Facebook is only now facing up to major regulatory concerns about its use of data, despite being massively successful with users and advertisers.

It’s clear that even as innovations bring unprecedented comfort and convenience, they also threaten old ways of regulating industries, running a business, and making a living. This has always been true. Thus early cars weren’t allowed to go faster than horses, and some 19th-century textile workers used sledgehammers to attack the industrial machinery they feared would displace them. New technology can even upend social norms: Consider how dating apps have transformed the way people meet.

Entrepreneurs, of course, don’t really care that the problems they’re running into are part of a historical pattern. They want to know how they can manage—and shorten—the period between the advent of a technology and the emergence of the rules and new behaviors that allow society to embrace its possibilities.

Interestingly, the same institutional murkiness that pervades nascent industries such as drones and driverless cars is something I’ve also seen in developing countries. And strange though this may sound, I believe that tech entrepreneurs can learn a lot from businesspeople who have succeeded in the world’s emerging markets.

Entrepreneurs in Brazil or Nigeria know that it’s pointless to wait for the government to provide the institutional and market infrastructure their businesses need, because that will simply take too long. They themselves must build support structures to compensate for what Krishna Palepu and I have referred to in earlier writings as “institutional voids.” They must create the conditions that will allow them to create successful products or services.

Tech-forward entrepreneurs in developed economies may want to believe that it’s not their job to guide policy makers and the public—but the truth is that nobody else can play that role. They may favor hardball tactics, getting ahead by evading rules, co-opting regulators, or threatening to move overseas. But in the long term, they’d be wiser to use soft power, working with a range of partners to co-create the social and institutional fabric that will support their growth—as entrepreneurs in emerging markets have done.…(More)”.

Developing an impact framework for cultural change in government


Jesper Christiansen at Nesta: “Innovation teams and labs around the world are increasingly being tasked with building capacity and contributing to cultural change in government. There’s also an increasing recognition that we need to go beyond projects or single structures and make innovation become a part of the way governments operate more broadly.

However, there is a significant gap in our understanding of what “cultural change” or, better, “capacity” actually means.

At the same time, most innovation labs and teams are still being held to account in ways that don’t productively support this work. There is a lack of useful ways to measure outcomes, as opposed to outputs (for example, being asked to account for the number of workshops, rather than the increased capacity or impact that these workshops led to).

Consequently, we need a more developed awareness and understanding of what the signs of success look like, and what the intermediary outcomes (and measures) are in order to create a shift in accountability and better support ongoing capacity building….

One of the goals of States of Change, the collective we initiated last year to build this capability and culture, is to proactively address the common challenges that innovation practitioners face again and again. The field of public innovation is still emerging and evolving, and so our aim is to inspire action through practice-oriented, collaborative R&D activities and to develop the field based on practice rather than theory….(More)”.

Who wants to know?: The Political Economy of Statistical Capacity in Latin America


IADB paper by Dargent, Eduardo; Lotta, Gabriela; Mejía-Guerra, José Antonio; Moncada, Gilberto: “Why is there such heterogeneity in the level of technical and institutional capacity in national statistical offices (NSOs)? Although there is broad consensus about the importance of statistical information as an essential input for decision making in the public and private sectors, this does not generally translate into a recognition of the importance of the institutions responsible for the production of data. In the context of the role of NSOs in government and society, this study seeks to explain the variation in regional statistical capacity by comparing historical processes and political economy factors in 10 Latin American countries. To do so, it proposes a new theoretical and methodological framework and offers recommendations to strengthen the institutionality of NSOs….(More)”.