The (il)logic of legibility – Why governments should stop simplifying complex systems


Thea Snow at LSE Blog: “Sometimes, you learn about an idea that really sticks with you. This happened to me recently when I learnt about “legibility” — a concept which James C. Scott introduces in his book Seeing Like a State.

Just last week, I was involved in two conversations which highlighted how pervasive the logic of legibility continues to be in influencing how governments think and act. But first, what is legibility?

Defining Legibility

Legibility describes the very human tendency to simplify complex systems in order to exert control over them.

In this blog, Venkatesh Rao offers a recipe for legibility:

  • Look at a complex and confusing reality…
  • Fail to understand all the subtleties of how the complex reality works
  • Attribute that failure to the irrationality of what you are looking at, rather than your own limitations
  • Come up with an idealized blank-slate vision of what that reality ought to look like
  • Argue that the relative simplicity and platonic orderliness of the vision represents rationality
  • Use power to impose that vision, by demolishing the old reality if necessary.

Rao explains: “The big mistake in this pattern of failure is projecting your subjective lack of comprehension onto the object you are looking at, as “irrationality.” We make this mistake because we are tempted by a desire for legibility.”

Scott uses modern forestry practices as an example of legibility in action. Hundreds of years ago, forests served many purposes: they were places where people harvested wood, but also places where locals foraged and hunted, as well as an ecosystem for animals and plants. According to the logic of scientific forestry practices, forests would be much more valuable if they just produced timber. To achieve this, they had to be made legible.

So, modern agriculturalists decided to clear-cut forests and plant perfectly straight rows of a particular species of fast-growing tree. It was assumed this would be more efficient. Planting just one species meant the quality of timber would be predictable. In addition, the straight rows would make it easy to know exactly how much timber was there, and would mean timber production could be easily monitored and controlled.

Image reproduced from https://www.ribbonfarm.com/2010/07/26/a-big-little-idea-called-legibility/

For the first generation of trees, the agriculturalists achieved higher yields, and there was much celebration and self-congratulation. But, after about a century, the problems of ecosystem collapse started to reveal themselves. In imposing a logic of order and control, scientific forestry destroyed the complex, invisible, and unknowable network of relationships between plants, animals and people that a forest needs to thrive.

After a century it became apparent that relationships between plants and animals were so distorted that pests were destroying crops. The nutrient balance of the soil was disrupted. And after the first generation of trees, the forest was not thriving at all….(More)”.

Robot census: Gathering data to improve policymaking on new technologies


Essay by Robert Seamans: “There is understandable excitement about the impact that new technologies like artificial intelligence (AI) and robotics will have on our economy. In our everyday lives, we already see the benefits of these technologies: when we use our smartphones to navigate from one location to another using the fastest available route or when a predictive typing algorithm helps us finish a sentence in our email. At the same time, there are concerns about possible negative effects of these new technologies on labor. The Council of Economic Advisers under each of the past two Administrations has addressed these issues in the annual Economic Report of the President (ERP). For example, the 2016 ERP included a chapter on technology and innovation that linked robotics to productivity and growth, and the 2019 ERP included a chapter on artificial intelligence that discussed the uneven effects of technological change. Both these chapters used data at highly aggregated levels, in part because that is the data that is available. As I’ve noted elsewhere, AI and robots are everywhere, except, as it turns out, in the data.

To date, there have been no large-scale, systematic studies in the U.S. on how robots and AI affect productivity and labor in individual firms or establishments (a firm could own one or more establishments, which for example could be a plant in a manufacturing setting or a storefront in a retail setting). This is because the data are scarce. Academic researchers interested in the effects of AI and robotics on economic outcomes have mostly used aggregate country- and industry-level data. Very recently, some have studied these issues at the firm level using data on robot imports to France, Spain, and other countries. I review a few of these academic papers in both categories below, which provide early findings on the nuanced effects these new technologies have on labor. Thanks to some excellent work being done by the U.S. Census Bureau, however, we may soon have more data to work with. This includes new questions on robot purchases in the Annual Survey of Manufactures and Annual Capital Expenditures Survey and new questions on other technologies including cloud computing and machine learning in the Annual Business Survey….(More)”.
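
The firm/establishment distinction is the crux of why the new Census questions matter. As a purely illustrative sketch (the names and figures below are invented, not Census data or Seamans’s analysis), establishment-level robot data would let researchers aggregate to the firm level rather than being stuck with industry totals:

```python
import pandas as pd

# Hypothetical establishment-level records (all names and figures invented):
# each firm can own one or more establishments, e.g. a plant or a storefront.
establishments = pd.DataFrame({
    "firm_id":       ["F1", "F1", "F2", "F3", "F3", "F3"],
    "industry":      ["mfg", "mfg", "retail", "mfg", "mfg", "retail"],
    "robots_bought": [4, 0, 1, 2, 5, 0],
})

# Firm-level totals: the unit of analysis researchers have mostly lacked.
by_firm = establishments.groupby("firm_id", as_index=False)["robots_bought"].sum()

# Industry-level totals: the aggregated view most existing studies rely on.
by_industry = establishments.groupby("industry", as_index=False)["robots_bought"].sum()

print(by_firm)
print(by_industry)
```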

Democratizing data in a 5G world


Blog by Dimitrios Dosis at Mastercard: “The next generation of mobile technology has arrived, and it’s more powerful than anything we’ve experienced before. 5G can move data faster, with little delay — in fact, with 5G, you could’ve downloaded a movie in the time you’ve read this far. 5G will also create a vast network of connected machines. The Internet of Things will finally deliver on its promise to fuse all our smart products — vehicles, appliances, personal devices — into a single streamlined ecosystem.

My smartwatch could monitor my blood pressure and schedule a doctor’s appointment, while my car could collect data on how I drive and how much gas I use behind the wheel. In some cities, petrol trucks already act as roving gas stations, receiving pings when cars are low on gas and refueling them as needed, wherever they are.

This amounts to an incredible proliferation of data. By 2025, every connected person will conduct nearly 5,000 data interactions every day — one every 18 seconds — whether they know it or not. 
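
A quick arithmetic check of that rate (our calculation, not Mastercard’s) confirms the two figures are consistent:

```python
# 5,000 interactions spread evenly over one day:
seconds_per_day = 24 * 60 * 60        # 86,400 seconds
interval = seconds_per_day / 5_000    # ~17.3 seconds
print(f"one interaction every {interval:.1f} seconds")  # matches "one every 18 seconds" for "nearly 5,000"
```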

Enticing and convenient as new 5G-powered developments may be, they also raise complex questions about data. Namely, who is privy to our personal information? As your smart refrigerator records the foods you buy, will the refrigerator’s manufacturer be able to see your eating habits? Could it sell that information to a consumer food product company for market research without your knowledge? And where would the information go from there?

People are already asking critical questions about data privacy. In fact, 72% of them say they are paying attention to how companies collect and use their data, according to a global survey released last year by the Harvard Business Review Analytic Services. The survey, sponsored by Mastercard, also found that while 60% of executives believed consumers think the value they get in exchange for sharing their data is worthwhile, only 44% of consumers actually felt that way.

There are many reasons for this data disconnect, including the lack of transparency that currently exists in data sharing and the tension between an individual’s need for privacy and his or her desire for personalization.

This paradox can be solved by putting data in the hands of the people who create it — giving consumers the ability to manage, control and share their own personal information when they want to, with whom they want to, and in a way that benefits them.

That’s the basis of Mastercard’s core set of principles regarding data responsibility – and in this 5G world, it’s more important than ever. We will be able to gain from these new technologies, but this change must come with trust and user control at its core. The data ecosystem needs to evolve from schemes dominated by third parties, where some data brokers collect inferred, often unreliable and inaccurate data, then share it without the consumer’s knowledge….(More)”.

Using “Big Data” to forecast migration


Blog Post by Jasper Tjaden, Andres Arau, Muertizha Nuermaimaiti, Imge Cetin, Eduardo Acostamadiedo, Marzia Rango: “Act 1 — High Expectations

“Data is the new oil,” they say. ‘Big Data’ is even bigger than that. The “data revolution” will contribute to solving societies’ problems and help governments adopt better policies and run more effective programs. In the migration field, digital trace data are seen as a potentially powerful tool to improve migration management processes (visa applications, asylum decisions and the geographic allocation of asylum seekers, facilitating integration, “smart borders”, etc.).

Forecasting migration is one particular area where big data seems to excite data nerds (like us) and policymakers alike. If there is one way big data has already made a difference, it is its ability to bring different actors together — data scientists, business people and policymakers — to sit through countless slides with numbers, tables and graphs. Traditional migration data sources, like censuses, administrative data and surveys, have never quite managed to generate the same level of excitement.

Many EU countries are currently heavily investing in new ways to forecast migration. Relatively large numbers of asylum seekers in 2014, 2015 and 2016 strained the capacity of many EU governments. Better forecasting tools are meant to help governments prepare in advance.

In a recent European Migration Network study, 10 out of the 22 EU governments surveyed said they make use of forecasting methods, many using open source data for “early warning and risk analysis” purposes. The 2020 European Migration Network conference was dedicated entirely to the theme of forecasting migration, hosting more than 15 expert presentations on the topic. The recently proposed EU Pact on Migration and Asylum outlines a “Migration Preparedness and Crisis Blueprint” which “should provide timely and adequate information in order to establish the updated migration situational awareness and provide for early warning/forecasting, as well as increase resilience to efficiently deal with any type of migration crisis.” (p. 4) The European Commission is currently finalizing a feasibility study on the use of artificial intelligence for predicting migration to the EU; Frontex — the EU Border Agency — is scaling up efforts to forecast irregular border crossings; EASO — the European Asylum Support Office — is devising a composite “push-factor index” and experimenting with forecasting asylum-related migration flows using machine learning and data at scale. In Fall 2020, during Germany’s EU Council Presidency, the German Interior Ministry organized a workshop series around Migration 4.0 highlighting the benefits of various ways to “digitalize” migration management. At the same time, the EU is investing substantial resources in migration forecasting research under its Horizon 2020 programme, including QuantMig, ITFLOWS, and HumMingBird.
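
None of these systems are described in detail in the excerpt, but to give a flavour of the basic setup, here is a minimal, hypothetical sketch of one-month-ahead forecasting from lagged values plus a digital trace signal. All data and variable names are invented for illustration; the real systems mentioned above use far richer features and models:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical monthly series: asylum applications in a destination country
# and a digital trace signal (e.g., search-engine interest in that country).
months = 60
trace = rng.poisson(100, months).astype(float)
applications = 50 + 0.8 * trace + rng.normal(0, 10, months)

# Features for month t: last month's applications and last month's trace signal.
X = np.column_stack([applications[:-1], trace[:-1]])
y = applications[1:]

# Hold out the final 12 months to check out-of-sample accuracy.
split = len(y) - 12
model = LinearRegression().fit(X[:split], y[:split])
preds = model.predict(X[split:])
mae = np.mean(np.abs(preds - y[split:]))
print(f"one-month-ahead MAE on held-out months: {mae:.1f}")
```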

Is all this excitement warranted?

Yes, it is….(More)” See also: Big Data for Migration Alliance

The High Price of Mistrust


fs.blog: “There are costs to falling community participation. Rather than simply lamenting the loss of a past golden era (as people have done in every era), Harvard political scientist Robert D. Putnam explains these costs, as well as how we might bring community participation back.

First published twenty years ago, Bowling Alone is an exhaustive, hefty work. In its 544 pages, Putnam negotiated mountains of data to support his thesis that the previous few decades had seen Americans retreat en masse from public life. Putnam argued Americans had become disconnected from their wider communities, as evidenced by changes such as a decline in civic engagement and dwindling membership rates for groups such as bowling leagues and PTAs.

Though aspects of Bowling Alone are a little dated today (“computer-mediated communication” isn’t a phrase you’re likely to have heard recently), a quick glance at 2021’s social landscape would suggest many of the trends Putnam described have only continued and apply in other parts of the world too.

Right now, polarization and social distancing have forced us apart from any sense of community to a degree that can seem irresolvable.

Will we ever bowl in leagues alongside near strangers and turn them into friends again? Will we ever bowl again at all, even if alone, or will those gleaming lanes, too-tight shoes, and overpriced sodas fade into a distant memory we recount to our children?

The idea of going into a public space for a non-essential reason can feel incredibly out of reach for many of us right now. And who knows how spaces like bowling alleys will survive in the long run without the social scenes that fuelled them. Now is a perfect time to revisit Bowling Alone to see what it can still teach us, because many of its warnings and lessons are perhaps more relevant now than at its time of publication.

One key lesson we can derive from Bowling Alone is that the less we trust each other—something which is both a cause and consequence of declining community engagement—the more it costs us. Mistrust is expensive.…(More)”

The Rise of Urban Commons


Blogpost by Alessandra Quarta and Antonio Vercellone: “In the last ten years, the concept of the commons has become popular in social studies and political activism, and in some countries domestic lawyers have shared the interest in this notion. Even if an (existing or proposed) statutory definition of the commons is still very rare, lawyers become familiar with the concept of the commons through the filter of property law, where such a concept has been quite discredited. In fact, approaching property law, many students of different legal traditions learn that the origins of property rights revolve around the “tragedy of the commons”, the “parable” made famous by Garrett Hardin in the late nineteen-sixties. According to this widespread narrative, the impossibility of avoiding the over-exploitation of resources managed through an open-access regime makes it necessary to allocate private property rights. In this classic argument, the commons appear in a negative light: they represent the impossibility for a community to manage shared resources without concentrating all the decision-making powers in the hands of a single owner or of a central government. Moreover, they represent the wasteful inefficiency of the feudal world.

This vision dominated social and economic studies until 1990, when Elinor Ostrom published her famous book Governing the Commons, offering the results of her research on resources managed by communities in different parts of the world. Ostrom, awarded the Nobel Prize in 2009, demonstrated that the commons are not necessarily a tragedy or a lawless space. In fact, local communities generally define principles for their governance and sharing in resilient ways that prevent the tragedy from occurring. Moreover, Ostrom defined a set of principles for checking whether the commons are managed efficiently and can compete with both private and public arrangements of resource management.

Later on, from an institutional perspective, the commons became a tool for contesting mainstream political and economic dogmas, including the supposedly unquestionable efficiency of both the market and private property in the allocation of resources. The search for new tools for managing resources has been carried out in several experiments that generally occur at the local and urban level: scholars and practitioners describe these experiences as ‘urban commons’….(More)”.

Improved targeting for mobile phone surveys: A public-private data collaboration


Blogpost by Kristen Himelein and Lorna McPherson: “Mobile phone surveys have been rapidly deployed by the World Bank to measure the impact of COVID-19 in nearly 100 countries across the world. Previous posts on this blog have discussed the sampling and implementation challenges associated with these efforts, and coverage errors are inherent to the approach. The survey methodology literature has shown that mobile phone survey respondents in the poorest countries are more likely to be male, urban, wealthier, and more highly educated. This bias can stem from phone ownership, as mobile phone surveys are at best representative of mobile phone owners, a group which, particularly in poor countries, may differ from the overall population; or from differential response rates among these owners, with some groups more or less likely to respond to a call from an unknown number. In this post, we share our experiences in trying to improve representativeness and boost sample sizes for the poor in Papua New Guinea (PNG)….(More)”.
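
The excerpt describes the bias rather than the fix, but one standard first step against this kind of coverage and response bias is post-stratification: reweighting respondents so the sample matches known population shares. A minimal sketch with invented strata and shares (not the World Bank’s actual weighting procedure):

```python
import pandas as pd

# Hypothetical mobile phone survey respondents, skewed toward urban men.
sample = pd.DataFrame({
    "stratum": ["urban_male"] * 50 + ["urban_female"] * 20
             + ["rural_male"] * 20 + ["rural_female"] * 10
})

# Known population shares for the same strata, e.g. from a recent census.
population_share = {
    "urban_male": 0.15, "urban_female": 0.15,
    "rural_male": 0.35, "rural_female": 0.35,
}

# Post-stratification weight = population share / sample share, so
# over-represented groups are weighted down and under-represented ones up.
sample_share = sample["stratum"].value_counts(normalize=True)
sample["weight"] = sample["stratum"].map(lambda s: population_share[s] / sample_share[s])

# Weighted estimates would then use these weights; here we just inspect them.
print(sample.groupby("stratum")["weight"].first())
```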

An Open Data Team Experiments with a New Way to Tell City Stories


Article by Sean Finnan: “Can you see me?” says Mark Linnane, over Zoom, as he walks around a plastic structure on the floor of an office at Maynooth University. “That gives you some sense of the size of it. It’s 3.5 metres by 2.”

Linnane trails his laptop’s webcam over the surface of the off-white 3D model, giving a bird’s-eye view of tens of thousands of tiny buildings, the trails of roads and the clear pathway of the Liffey.

This replica of the heart of the city from Phoenix Park to Dublin Port was created to scale by the university’s Building City Dashboards team, using data from Ordnance Survey Ireland.

In the five years since they started to grapple with the question of how to present data about the city in an engaging and accessible way, the team has experimented with virtual reality and augmented reality – and most recently, with this new form of mapping, which blends the Lego-like miniature of Dublin’s centre with changeable data projected onto it.

This could really come into its own as a public exhibit if they start to tell meaningful data-driven and empirical stories, says Linnane, a digital exhibition developer at Maynooth University.

Stories that are “relevant in terms of the everyday daily lives of people who will be coming to see it”, he says.

Layers of Meaning

Getting the projector that throws the visualisations onto the model to work right was Linnane’s job, he says.

He had to mesh the Ordnance Survey data with other datasets that showed building heights, for example. “Every single building down to the sheds in someone’s garden has a unique identifier,” says Linnane.
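
That shared identifier is what makes the meshing step tractable. As a generic sketch of the idea (invented IDs and values, not the team’s actual pipeline), joining footprints to heights reduces to a keyed merge:

```python
import pandas as pd

# Hypothetical extracts: building footprints and a separate height dataset,
# both keyed on the same unique building identifier.
footprints = pd.DataFrame({
    "building_id": ["B001", "B002", "B003"],
    "footprint_m2": [120.5, 45.0, 880.2],
})
heights = pd.DataFrame({
    "building_id": ["B001", "B002", "B003"],
    "height_m": [12.0, 4.5, 31.0],
})

# The shared identifier lets the two sources be meshed into one model-ready table.
buildings = footprints.merge(heights, on="building_id", how="left")
print(buildings)
```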

Projectors are built to project onto flat surfaces, not 3D models, so that had to be finessed, too, he says. “Every step on the way was a new development. There wasn’t really a process there before.”

The printed 3D model shows 7km by 4km of Dublin and 122,355 structures, says Linnane. That includes bigger buildings but also small outbuildings, railway platforms, public toilets and glasshouses – all mocked up and serving as a canvas for a kaleidoscope of data.

“We’re just projecting data on to it and seeing what’s going on with that,” says Rob Kitchin, principal investigator at Maynooth University’s Programmable City project….(More)”

Image of model courtesy of Mark Linnane.

2020 was the year activists mastered hashtag flooding


Nicole Gallucci at Mashable: “A lone hashtag might not look very mighty, but when used en masse, the symbols can become incredibly powerful activism tools.

Over the past two decades — largely since product designer Chris Messina pitched hashtags to Twitter in 2007 — activists have learned to harness the symbols to form online communities, raise awareness on pressing issues, organize protests, shape digital narratives, and redirect social media discourse. 

On any given day, a series of hashtags is spotlighted in the “Trending” section of Twitter. The hashtags featured are those that have gained traction online and reflect topics being heavily discussed in the moment. More often than not, a trending hashtag’s popularity is organic, but a hashtag’s origin and initial purpose can become clouded when people partake in a clever tactic called hashtag flooding.

Hashtag flooding, or the act of hijacking a hashtag on social media platforms to change its meaning, has been around for years. But in 2020, particularly in the months leading up to the presidential election, activists and social media users looking to make their voices heard used the technique to drown out hateful narratives.

From K-pop fans flooding Donald Trump-related hashtags to members of the gay community putting their own spin on the #ProudBoys hashtag, the method of online communication dominated timelines this year and should be in every activist’s playbook….(More)”.
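
There is no single algorithm behind flooding, but its signature is easy to see in data: the words that co-occur with a hashtag shift abruptly once flooders arrive. A toy sketch with invented posts (not Mashable’s analysis), loosely modeled on the #ProudBoys case above:

```python
from collections import Counter

# Hypothetical posts using the same hashtag before and during a flood.
before = [
    "#ProudBoys rally downtown tonight",
    "#ProudBoys meeting moved to saturday",
]
during = [
    "#ProudBoys celebrating pride with my husband",
    "#ProudBoys love is love",
    "#ProudBoys happy anniversary to us",
]

def top_terms(posts, k=3):
    """Most common non-hashtag words across a set of posts."""
    words = [w for p in posts for w in p.lower().split() if not w.startswith("#")]
    return Counter(words).most_common(k)

# A sharp change in the dominant co-occurring terms signals that the
# hashtag's effective meaning has been hijacked.
print("before:", top_terms(before))
print("during:", top_terms(during))
```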

New York Temporarily Bans Facial Recognition Technology in Schools


Hunton’s Privacy Blog: “On December 22, 2020, New York Governor Andrew Cuomo signed into law legislation that temporarily bans the use or purchase of facial recognition and other biometric identifying technology in public and private schools until at least July 1, 2022. The legislation also directs the New York Commissioner of Education (the “Commissioner”) to conduct a study on whether this technology is appropriate for use in schools.

In his press statement, Governor Cuomo indicated that the legislation comes after concerns were raised about potential risks to students, including issues surrounding misidentification by the technology as well as safety, security and privacy concerns. “This legislation requires state education policymakers to take a step back, consult with experts and address privacy issues before determining whether any kind of biometric identifying technology can be brought into New York’s schools. The safety and security of our children is vital to every parent, and whether to use this technology is not a decision to be made lightly,” the Governor explained.

Key elements of the legislation include:

  • Defining “facial recognition” as “any tool using an automated or semi-automated process that assists in uniquely identifying or verifying a person by comparing and analyzing patterns based on the person’s face,” and “biometric identifying technology” as “any tool using an automated or semi-automated process that assists in verifying a person’s identity based on a person’s biometric information”;
  • Prohibiting the purchase and use of facial recognition and other biometric identifying technology in all public and private elementary and secondary schools until July 1, 2022, or until the Commissioner authorizes the purchase and use of such technology, whichever occurs later; and
  • Directing the Commissioner, in consultation with New York’s Office of Information Technology, Division of Criminal Justice Services, Education Department’s Chief Privacy Officer and other stakeholders, to conduct a study and make recommendations as to the circumstances in which facial recognition and other biometric identifying technology is appropriate for use in schools and what restrictions and guidelines should be enacted to protect privacy, civil rights and civil liberties interests….(More)”.