This Is Not an Atlas


Book by kollektiv orangotango: “This Is Not an Atlas gathers more than 40 counter-cartographies from all over the world. This collection shows how maps are created and transformed as a part of political struggle, for critical research or in art and education: from indigenous territories in the Amazon to the anti-eviction movement in San Francisco; from defending commons in Mexico to mapping refugee camps with balloons in Lebanon; from slums in Nairobi to squats in Berlin; from supporting communities in the Philippines to reporting sexual harassment in Cairo. This Is Not an Atlas seeks to inspire, to document the underrepresented, and to be a useful companion when becoming a counter-cartographer yourself….(More)”.

“Anonymous” Data Won’t Protect Your Identity


Sophie Bushwick at Scientific American: “The world produces roughly 2.5 quintillion bytes of digital data per day, adding to a sea of information that includes intimate details about many individuals’ health and habits. To protect privacy, data brokers must anonymize such records before sharing them with researchers and marketers. But a new study finds it is relatively easy to reidentify a person from a supposedly anonymized data set—even when that set is incomplete.

Massive data repositories can reveal trends that teach medical researchers about disease, demonstrate issues such as the effects of income inequality, coach artificial intelligence into humanlike behavior and, of course, aim advertising more efficiently. To shield people who—wittingly or not—contribute personal information to these digital storehouses, most brokers send their data through a process of deidentification. This procedure involves removing obvious markers, including names and social security numbers, and sometimes taking other precautions, such as introducing random “noise” data to the collection or replacing specific details with general ones (for example, swapping a birth date of “March 7, 1990” for “January–April 1990”). The brokers then release or sell a portion of this information.
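To make the procedure concrete, here is a minimal Python sketch of those three steps: stripping direct identifiers, coarsening a birth date into a broad range, and perturbing a numeric field with noise. The record fields and noise scale are hypothetical, not any broker's actual pipeline.

```python
# A minimal, illustrative deidentification pass. Field names and the
# noise scale are invented for the example.
import random

def deidentify(record):
    """Return a copy of `record` with direct identifiers removed and
    quasi-identifiers coarsened."""
    out = dict(record)
    # 1. Remove obvious markers such as names and Social Security numbers.
    for key in ("name", "ssn"):
        out.pop(key, None)
    # 2. Generalize a specific birth date ("1990-03-07") to a four-month
    #    window ("1990 Jan-Apr"), as in the example above.
    year, month, _day = record["birth_date"].split("-")
    window = ("Jan-Apr", "May-Aug", "Sep-Dec")[(int(month) - 1) // 4]
    out["birth_date"] = f"{year} {window}"
    # 3. Add random "noise" to a numeric attribute.
    out["income"] = round(record["income"] + random.gauss(0, 1000), 2)
    return out

print(deidentify({"name": "Jane Doe", "ssn": "123-45-6789",
                  "birth_date": "1990-03-07", "income": 52000}))
```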

“Data anonymization is basically how, for the past 25 years, we’ve been using data for statistical purposes and research while preserving people’s privacy,” says Yves-Alexandre de Montjoye, an assistant professor of computational privacy at Imperial College London and co-author of the new study, published this week in Nature Communications.  Many commonly used anonymization techniques, however, originated in the 1990s, before the Internet’s rapid development made it possible to collect such an enormous amount of detail about things such as an individual’s health, finances, and shopping and browsing habits. This discrepancy has made it relatively easy to connect an anonymous line of data to a specific person: if a private detective is searching for someone in New York City and knows the subject is male, is 30 to 35 years old and has diabetes, the sleuth would not be able to deduce the man’s name—but could likely do so quite easily if he or she also knows the target’s birthday, number of children, zip code, employer and car model….(More)”
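The arithmetic behind that intuition is easy to demonstrate. The toy simulation below (synthetic data, not the study's methodology) counts how many people in a random population of 100,000 match an increasingly specific description; each added attribute shrinks the pool of candidates dramatically:

```python
# A toy demonstration of shrinking "anonymity sets": each extra
# attribute leaves far fewer matching records. All data is synthetic.
import random

random.seed(0)
population = [
    {
        "sex": random.choice(["M", "F"]),
        "age": random.randint(30, 35),
        "diabetic": random.random() < 0.10,
        "zip": random.choice(range(10001, 10021)),
        "children": random.randint(0, 4),
    }
    for _ in range(100_000)
]

def matching(keys, target):
    """How many records agree with `target` on all of `keys`?"""
    return sum(all(p[k] == target[k] for k in keys) for p in population)

target = population[0]  # pretend this is the detective's subject
for keys in (("sex", "age", "diabetic"),
             ("sex", "age", "diabetic", "zip"),
             ("sex", "age", "diabetic", "zip", "children")):
    print(len(keys), "attributes ->", matching(keys, target), "candidates")
```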

Battling Information Illiteracy


Article by Paul T. Jaeger and Natalie Greene Taylor on how misinformation affects the future of policy: “California wildfires are being magnified and made so much worse by the bad environmental laws which aren’t allowing massive amounts of readily available water to be properly utilized. It is being diverted into the Pacific Ocean. Must also tree clear to stop fire from spreading!”

This tweet was a statement by a US president about a major event, suggesting changes to existing policies. It is also not true. Every element of the tweet—other than the existence of California, the Pacific Ocean, and wildfires—is false. And it was not a simple misunderstanding, because a tweet from Trump the next day reiterated these themes and blamed the state’s governor personally for holding back water to fight the fires.

So how does this pertain to information policy, since the tweet is about environmental policy issues? The answer is in the information. The use and misuse of information in governance and policymaking may be turning into the biggest information policy issue of all. And as technologies and methods of communication evolve, a large part of engaging with and advocating for information policy will consist of addressing the new challenges of teaching information literacy and behavior.

Misinformation literacy

The internet has made it easy for people to be information illiterate in new ways. Anyone can create information now—regardless of quality—and get it in front of a large number of people. The ability of social media to spread information as fast as possible, and to as many people as possible, challenges literacy, as does the ability to manipulate images, sounds, and video with ease….(More)”

The internet is rotting – let’s embrace it


Viktor Mayer-Schönberger in The Conversation: “Every year, thousands of sites – including ones with unique information – go offline. Countless more webpages become inaccessible; instead of information, users encounter error messages.

Where some commentators may lament yet another black hole in the slowly rotting Internet, I actually feel okay. Of course, I, too, dread broken links and dead servers. But I also know: Forgetting is important.

In fact, as I argued in my book, “Delete: The Virtue of Forgetting in the Digital Age,” all through human history, humans reserved remembering for the things that really mattered to them and forgot the rest. Now the internet is making forgetting a lot harder.

Built to forget

Humans are accustomed to a world in which forgetting is the norm, and remembering is the exception.

This isn’t necessarily a bug in human evolution. The mind forgets what is no longer relevant to our present. Human memory is constantly reconstructed – it isn’t preserved in pristine condition, but becomes altered over time, helping people overcome cognitive dissonance. For example, people may see an awful past as rosier than it was, or devalue memories of past conflict with a person with whom they are close in the present.

Forgetting also helps humans to focus on current issues and to plan for the future. Research shows that those who are too tethered to their past find it difficult to live and act in the present. Forgetting creates space for something new, enabling people to go beyond what they already know.

Organizations that remember too much ossify in their processes and behavior. Learning something new requires forgetting something old – and that is hard for organizations that remember too much. There’s a growing literature on the importance of “unlearning,” or deliberately purging deeply rooted processes or practices from an organization – a fancy way to say that forgetting fulfills a valuable purpose….(More)”.

The value of data in Canada: Experimental estimates


Statistics Canada: “As data and information take on a far more prominent role in Canada and, indeed, all over the world, data, databases and data science have become staples of modern life. When the electricity goes out, Canadians are as much in search of their data feed as they are of food and heat. Consumers are using more and more data that is embodied in the products they buy, whether those products are music, reading material, cars and other appliances, or a wide range of other goods and services. Manufacturers, merchants and other businesses depend increasingly on the collection, processing and analysis of data to make their production processes more efficient and to drive their marketing strategies.

The increasing use of and investment in all things data is driving economic growth, changing the employment landscape and reshaping how and from where we buy and sell goods. Yet the rapid rise in the use and importance of data is not well measured in the existing statistical system. Given the ‘lack of data on data’, Statistics Canada has initiated new research to produce a first set of estimates of the value of data, databases and data science. The development of these estimates benefited from collaboration with the Bureau of Economic Analysis in the United States and the Organisation for Economic Co-operation and Development.

In 2018, Canadian investment in data, databases and data science was estimated to be as high as $40 billion. This was greater than the annual investment in industrial machinery, transportation equipment, and research and development, and represented approximately 12% of total non-residential investment in 2018….

Statistics Canada recently released a conceptual framework outlining how one might measure the economic value of data, databases and data science. Thanks to this new framework, the growing role of data in Canada can be measured through time. This framework is described in a paper that was released in The Daily on June 24, 2019 entitled “Measuring investments in data, databases and data science: Conceptual framework.” That paper describes the concept of an ‘information chain’ in which data are derived from everyday observations, databases are constructed from data, and data science creates new knowledge by analyzing the contents of databases….(More)”.

E-Nudging Justice: The Role of Digital Choice Architecture in Online Courts


Paper by Ayelet Sela: “Justice systems around the world are launching online courts and tribunals in order to improve access to justice, especially for self-represented litigants (SRLs). Online courts are designed to handhold SRLs throughout the process and empower them to make procedural and substantive decisions. To that end, they present SRLs with streamlined and simplified procedures and employ a host of user interface design and user experience strategies (UI/UX). Focusing on these features, the article analyzes online courts as digital choice environments that shape SRLs’ decisions, inputs and actions, and considers their implications on access to justice, due process and the impartiality of courts. Accordingly, the article begins to close the knowledge gap regarding choice architecture in online legal proceedings. 

Using examples from current online courts, the article considers how mechanisms such as choice overload, display, colorfulness, visual complexity, and personalization influence SRLs’ choices and actions. The analysis builds on research in cognitive psychology and behavioral economics that shows that subtle changes in the context in which decisions are made steer (nudge) people to choose a particular option or course of action. It is also informed by recent studies that capture the effect of digital choice architecture on users’ choices and behaviors in online settings. The discussion clarifies that seemingly naïve UI/UX features can strongly influence users of online courts, in a manner that may be at odds with their institutional commitment to impartiality and due process. Moreover, the article challenges the view that online court interfaces (and those of other online legal services, for that matter) should be designed to maximize navigability, intuitiveness and user-friendliness. It argues that these design attributes involve the risk of nudging SRLs to make uninformed, non-deliberate, and biased decisions, possibly infringing their autonomy and self-determination. Accordingly, the article suggests that choice architecture in online courts should aim to encourage reflective participation and informed decision-making. Specifically, its goal should be to improve SRLs’ ability to identify and consider options, and advance their own — inherently diverse — interests. In order to mitigate the abovementioned risks, the article proposes an initial evaluation framework, measures, and methodologies to support evidence-based and ethical choice architecture in online courts….(More)”.

The Impact of Citizen Environmental Science in the United States


Paper by George Wyeth, Lee C. Paddock, Alison Parker, Robert L. Glicksman and Jecoliah Williams: “An increasingly sophisticated public, rapid changes in monitoring technology, the ability to process large volumes of data, and social media are increasing the capacity for members of the public and advocacy groups to gather, interpret, and exchange environmental data. This development has the potential to alter the government-centric approach to environmental governance; however, citizen science has had a mixed record in influencing government decisions and actions. This Article reviews the rapid changes that are going on in the field of citizen science and examines what makes citizen science initiatives impactful, as well as the barriers to greater impact. It reports on 10 case studies, and evaluates these to provide findings about the state of citizen science and recommendations on what might be done to increase its influence on environmental decisionmaking….(More)”.

How we can place a value on health care data


Report by EY: “Unlocking the power of health care data to fuel innovation in medical research and improve patient care is at the heart of today’s health care revolution. When curated or consolidated into a single longitudinal dataset, patient-level records will trace a complete story of a patient’s demographics, health, wellness, diagnosis, treatments, medical procedures and outcomes. Health care providers need to recognize patient data for what it is: a valuable intangible asset desired by multiple stakeholders, a treasure trove of information.

Among the universe of providers holding significant data assets, the United Kingdom’s National Health Service (NHS) is the single largest integrated health care provider in the world. Its patient records cover the entire UK population from birth to death.

We estimate that the 55 million patient records held by the NHS today may have an indicative market value of several billion pounds to a commercial organization. We estimate also that the value of the curated NHS dataset could be as much as £5bn per annum and deliver around £4.6bn of benefit to patients per annum, through potential operational savings for the NHS, enhanced patient outcomes and the generation of wider economic benefits to the UK….(More)”.

The Hidden Costs of Automated Thinking


Jonathan Zittrain in The New Yorker: “Like many medications, the wakefulness drug modafinil, which is marketed under the trade name Provigil, comes with a small, tightly folded paper pamphlet. For the most part, its contents—lists of instructions and precautions, a diagram of the drug’s molecular structure—make for anodyne reading. The subsection called “Mechanism of Action,” however, contains a sentence that might induce sleeplessness by itself: “The mechanism(s) through which modafinil promotes wakefulness is unknown.”

Provigil isn’t uniquely mysterious. Many drugs receive regulatory approval, and are widely prescribed, even though no one knows exactly how they work. This mystery is built into the process of drug discovery, which often proceeds by trial and error. Each year, any number of new substances are tested in cultured cells or animals; the best and safest of those are tried out in people. In some cases, the success of a drug promptly inspires new research that ends up explaining how it works—but not always. Aspirin was discovered in 1897, and yet no one convincingly explained how it worked until 1995. The same phenomenon exists elsewhere in medicine. Deep-brain stimulation involves the implantation of electrodes in the brains of people who suffer from specific movement disorders, such as Parkinson’s disease; it’s been in widespread use for more than twenty years, and some think it should be employed for other purposes, including general cognitive enhancement. No one can say how it works.

This approach to discovery—answers first, explanations later—accrues what I call intellectual debt. It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later. In some cases, we pay off this intellectual debt quickly. But, in others, we let it compound, relying, for decades, on knowledge that’s not fully known.

In the past, intellectual debt has been confined to a few areas amenable to trial-and-error discovery, such as medicine. But that may be changing, as new techniques in artificial intelligence—specifically, machine learning—increase our collective intellectual credit line. Machine-learning systems work by identifying patterns in oceans of data. Using those patterns, they hazard answers to fuzzy, open-ended questions. Provide a neural network with labelled pictures of cats and other, non-feline objects, and it will learn to distinguish cats from everything else; give it access to medical records, and it can attempt to predict a new hospital patient’s likelihood of dying. And yet, most machine-learning systems don’t uncover causal mechanisms. They are statistical-correlation engines. They can’t explain why they think some patients are more likely to die, because they don’t “think” in any colloquial sense of the word—they only answer. As we begin to integrate their insights into our lives, we will, collectively, begin to rack up more and more intellectual debt….(More)”.
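A few lines of code are enough to convey the flavor. The sketch below, a nearest-neighbors toy with invented patient data (not any real system), produces a risk score by pure similarity to past cases; nothing in it encodes, or could report, a reason:

```python
# A toy "correlation engine": it answers, but carries no causal model
# and cannot explain why. Records and features are invented.
import math

# (age, systolic_bp, lab_score) -> died within a year (1) or not (0)
records = [
    ((81, 160, 7.2), 1), ((45, 120, 3.1), 0), ((77, 150, 6.8), 1),
    ((52, 130, 4.0), 0), ((68, 145, 5.9), 1), ((39, 118, 2.7), 0),
    ((84, 170, 7.9), 1), ((50, 125, 3.5), 0),
]

def risk(patient, k=3):
    """Fraction of the k most similar past patients who died."""
    nearest = sorted(records, key=lambda r: math.dist(r[0], patient))[:k]
    return sum(outcome for _, outcome in nearest) / k

print(risk((72, 148, 6.0)))  # an answer, with no explanation attached
```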

Artificial Intelligence and Law: An Overview


Paper by Harry Surden: “Much has been written recently about artificial intelligence (AI) and law. But what is AI, and what is its relation to the practice and administration of law? This article addresses those questions by providing a high-level overview of AI and its use within law. The discussion aims to be nuanced but also understandable to those without a technical background. To that end, I first discuss AI generally. I then turn to AI and how it is being used by lawyers in the practice of law, people and companies who are governed by the law, and government officials who administer the law. A key motivation in writing this article is to provide a realistic, demystified view of AI that is rooted in the actual capabilities of the technology. This is meant to contrast with discussions about AI and law that are decidedly futurist in nature…(More)”.