How urban design can make or break protests


Peter Schwartzstein in Smithsonian Magazine: “If protesters could plan a perfect stage to voice their grievances, it might look a lot like Athens, Greece. Its broad, yet not overly long, central boulevards are almost tailor-made for parading. Its large parliament-facing square, Syntagma, forms a natural focal point for marchers. With a warren of narrow streets surrounding the center, including the rebellious district of Exarcheia, it’s often remarkably easy for demonstrators to steal away if the going gets rough.

Los Angeles, by contrast, is a disaster for protesters. It has no wholly recognizable center, few walkable distances, and little in the way of protest-friendly space. As far as longtime city activists are concerned, just amassing small crowds can be an achievement. “There’s really just no place to go; the city is structured in a way that you’re in a city but you’re not in a city,” says David Adler, general coordinator at the Progressive International, a new global political group. “A protest is the coming together of a large group of people, and that’s just counter to the idea of L.A.”

Among the complex medley of moving parts that guide protest movements, urban design might seem like a fairly peripheral concern. But try telling that to demonstrators from Houston to Beijing, two cities whose geography complicates public protest. Low urban density can thwart mass participation. Limited public space can deprive protesters of the visibility and hence the momentum they need to sustain themselves. On those occasions when proceedings turn messy or violent, alleyways, parks, and labyrinthine apartment buildings can mean the difference between detention and escape….(More)”.

The Shape of Epidemics


Essay by David S. Jones and Stefan Helmreich: “…Is the most recent rise in new cases—the sharp increase in case counts and hospitalizations reported this week in several states—a second wave, or rather a second peak of a first wave? Will the world see a devastating second wave in the fall?

Such imagery of waves has pervaded talk about the propagation of the infection from the beginning. On January 29, just under a month after the first instances of COVID-19 were reported in Wuhan, Chinese health officials published a clinical report about their first 425 cases, describing them as “the first wave of the epidemic.” On March 4 the French epidemiologist Antoine Flahault asked, “Has China just experienced a herald wave, to use terminology borrowed from those who study tsunamis, and is the big wave still to come?” The Asia Times warned shortly thereafter that “a second deadly wave of COVID-19 could crash over China like a tsunami.” A tsunami, however, struck elsewhere, with the epidemic surging in Iran, Italy, France, and then the United States. By the end of April, with the United States having passed one million cases, the wave forecasts had become bleaker. Prominent epidemiologists predicted three possible future “wave scenarios”—described by one Boston reporter as “seascapes,” characterized either by oscillating outbreaks, the arrival of a “monster wave,” or a persistent and rolling crisis.


From Kristine Moore et al., “The Future of the COVID-19 Pandemic” (April 30, 2020). Used with permission from the Center for Infectious Disease Research and Policy, University of Minnesota.

While this language may be new to much of the public, the figure of the wave has long been employed to describe, analyze, and predict the behavior of epidemics. Understanding this history can help us better appreciate the conceptual inheritances of a scientific discipline suddenly at the center of public discussion. It can also help us judge the utility as well as limitations of those representations of epidemiological waves now in play in thinking about the science and policy of COVID-19. As the statistician Edward Tufte writes in his classic work The Visual Display of Quantitative Information (1983), “At their best, graphics are instruments for reasoning about quantitative information.” The wave, operating as a hybrid of the diagrammatic, mathematical, and pictorial, certainly does help to visualize and think about COVID-19 data, but it also does much more. The wave image has become an instrument for public health management and prediction—even prophecy—offering a synoptic, schematic view of the dynamics it describes.

This essay sketches the backstory of epidemic waves, which falls roughly into three eras: waves emerge first as a device of data visualization, then evolve into an object of mathematical modeling and causal investigation, and finally morph into a tool of persuasion, intervention, and governance. Accounts of the wave-like rise and fall of rates of illness and death in populations first appeared in the mid-nineteenth century, with England a key player in developments that saw government officials collect data permitting the graphical tabulation of disease trends over time. During this period the wave image was primarily metaphorical, a heuristic way of talking about patterns in data. Using curving numerical plots, epidemiologists offered analogies between the spread of infection and the travel of waves, sometimes transposing the temporal tracing of epidemic data onto maps of geographical space. Exactly what mix of forces—natural or social—generated these “epidemic waves” remained a source of speculation….(More)”.
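The era of mathematical modeling that the essay describes can be made concrete with a minimal sketch. The classic SIR (susceptible-infected-recovered) model produces exactly the single rise-and-fall curve that gets read as a “wave”; the parameters below are illustrative assumptions of this sketch, not values from the essay or fit to COVID-19:

```python
# Minimal SIR epidemic model, integrated with forward-Euler steps.
# beta: transmission rate, gamma: recovery rate; values are illustrative.
def sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, days=160):
    s, i, r = s0, i0, 0.0
    curve = []                   # infected fraction, day by day
    for _ in range(days):
        new_inf = beta * s * i   # new infections this day
        new_rec = gamma * i      # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        curve.append(i)
    return curve

curve = sir()
peak = max(curve)
print(f"peak infected fraction {peak:.2f} on day {curve.index(peak)}")
```

With these assumed rates the infected fraction climbs, crests once enough of the population has been depleted of susceptibles, and recedes: one “wave.” A second wave requires something extra, such as reintroduced susceptibility or relaxed mitigation, which is why the wave metaphor carries so much predictive freight.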

Politicians ignore far-out risks: they need to up their game


The Economist: “In 1993 this newspaper told the world to watch the skies. At the time, humanity’s knowledge of asteroids that might hit the Earth was woefully inadequate. Like nuclear wars and large volcanic eruptions, the impacts of large asteroids can knock seven bells out of the climate; if one thereby devastated a few years’ worth of harvests around the globe it would kill an appreciable fraction of the population. Such an eventuality was admittedly highly unlikely. But given the consequences, it made actuarial sense to see if any impact was on the cards, and at the time no one was troubling themselves to look.

Asteroid strikes were an extreme example of the world’s wilful ignorance, perhaps—but not an atypical one. Low-probability, high-impact events are a fact of life. Individual humans look for protection from them to governments and, if they can afford it, insurers. Humanity, at least as represented by the world’s governments, reveals instead a preference to ignore them until forced to react—even when foresight’s price-tag is small. It is an abdication of responsibility and a betrayal of the future.

Covid-19 offers a tragic example. Virologists, epidemiologists and ecologists have warned for decades of the dangers of a flu-like disease spilling over from wild animals. But when SARS-CoV-2 began to spread, very few countries had the winning combination of practical plans, the kit those plans required in place and the bureaucratic capacity to enact them. Those that did benefited greatly. Taiwan has, to date, seen just seven covid-19 deaths; its economy has suffered correspondingly less.

Pandemics are disasters that governments have experience of. What therefore of truly novel threats? The blazing hot corona which envelops the Sun—seen to spectacular effect during solar eclipses—intermittently throws vast sheets of charged particles out into space. These cause the Northern and Southern Lights and can mess up electric grids and communications. But over the century or so in which electricity has become crucial to much of human life, the Earth has never been hit by the largest of these solar eructations. If a coronal mass ejection (CME) were to hit, all sorts of satellite systems needed for navigation, communications and warnings of missile attacks would be at risk. Large parts of the planet could face months or even years without reliable grid electricity (see Briefing). The chances of such a disaster this century are put by some at better than 50:50. Even if they are not that high, they are still higher than the chances of a national leader knowing who in their government is charged with thinking about such things.

The fact that no governments have ever seen a really big CME, or a volcanic eruption large enough to affect harvests around the world—the most recent was Tambora, in 1815—may explain their lack of forethought. It does not excuse it. Keeping an eye on the future is part of what governments are for. Scientists have provided them with the tools for such efforts, but few academics will undertake the work unbidden, unfunded and unsung. Private business may take some steps when it perceives specific risks, but it will not put together plans for society at large….(More)”.

Peer-Reviewed Scientific Journals Don’t Really Do Their Job


Article by Simine Vazire: “The rush for scientific cures and treatments for Covid-19 has opened the floodgates of direct communication between scientists and the public. Instead of waiting for their work to go through the slow process of peer review at scientific journals, scientists are now often going straight to print themselves, posting write-ups of their work to public servers as soon as they’re complete. This disregard for the traditional gatekeepers has led to grave concerns among both scientists and commentators: Might not shoddy science—and dangerous scientific errors—make its way into the media, and spread before an author’s fellow experts can correct it? As two journalism professors suggested in an op-ed last month for The New York Times, it’s possible the recent spread of so-called preprints has only “sown confusion and discord with a general public not accustomed to the high level of uncertainty inherent in science.”

There’s another way to think about this development, however. Instead of showing (once again) that formal peer review is vital for good science, the last few months could just as well suggest the opposite. To me, at least—someone who’s served as an editor at seven different journals, and editor in chief at two—the recent spate of decisions to bypass traditional peer review gives the lie to a pair of myths that researchers have encouraged the public to believe for years: First, that peer-reviewed journals publish only trustworthy science; and second, that trustworthy science is published only in peer-reviewed journals.

Scientists allowed these myths to spread because it was convenient for us. Peer-reviewed journals came into existence largely to keep government regulators off our backs. Scientists believe that we are the best judges of the validity of each other’s work. That’s very likely true, but it’s a huge leap from that to “peer-reviewed journals publish only good science.” The most selective journals still allow flawed studies—even really terribly flawed ones—to be published all the time. Earlier this month, for instance, the journal Proceedings of the National Academy of Sciences put out a paper claiming that mandated face coverings are “the determinant in shaping the trends of the pandemic.” PNAS is a very prestigious journal, and their website claims that they are an “authoritative source” that works “to publish only the highest quality scientific research.” However, this paper was quickly and thoroughly criticized on social media; by last Thursday, 45 researchers had signed a letter formally calling for its retraction.

Now the jig is up. Scientists are writing papers that they want to share as quickly as possible, without waiting the months or sometimes years it takes to go through journal peer review. So they’re ditching the pretense that journals are a sure-fire quality control filter, and sharing their papers as self-published PDFs. This might be just the shakeup that peer review needs….(More)”.

A Council of Citizens Should Regulate Algorithms


Federica Carugati at Wired: “…A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like.

We don’t know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules in order to optimize for a given outcome. Public policy, by contrast, is all about trade-offs: optimizing for some groups in society necessarily makes others worse off.

Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: Citizens should have a say. We don’t know how to regulate algorithms, because we have become shockingly bad at citizen governance.

Is citizen governance feasible today? Sure, it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to make decisions on very tough policy issues, including climate change, and even to shape constitutions. Finally, we can draw from the past for inspiration on how to actually build citizen-run institutions.

The ancient Athenians—the citizens of the world’s first large-scale experiment in democracy—built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve. Selection occurred among those who had not served the year before and had not already served twice.

These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because the term was limited and could not be repeated more than twice, over time a broad section of the population—rich and poor, educated and not—participated in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw upon the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with their families, friends, and coworkers, some of whom had already served and some of whom soon would. Certainly, the Athenians did not follow through on their commitment to inclusion. As a result, many people’s voices went unheard, including those of women, foreigners, and slaves. But we don’t need to follow the Athenian example on this front.
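The selection rules described above are concrete enough to state as an algorithm. Here is a minimal sketch; the data structures (a tribe roster and a per-citizen service history) are my own framing for illustration, not anything from the historical record:

```python
import random

def select_council(tribes, history, year, per_tribe=50, max_terms=2):
    """Choose council members by lot, tribe by tribe.

    tribes:  dict mapping tribe name -> list of citizen ids
    history: dict mapping citizen id -> list of years already served
    """
    council = []
    for citizens in tribes.values():
        eligible = [c for c in citizens
                    if len(history.get(c, [])) < max_terms      # at most two lifetime terms
                    and (year - 1) not in history.get(c, [])]   # not the year before
        chosen = random.sample(eligible, per_tribe)             # the lot
        for c in chosen:
            history.setdefault(c, []).append(year)
        council.extend(chosen)
    return council
```

Because eligibility is checked before the draw, the two Athenian constraints (no consecutive years, at most two lifetime terms) hold by construction, which is what forces participation to rotate through a broad section of the citizenry over time.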

A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although it is possible that, when decisions affect a specific constituency, a better fit with the actual polity might be required). Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, and internal auditing reports for industry, as well as reports from investigative journalists and civil society activists, whenever available. Ideally, the council would act as an authoritative body or as an advisory board to an existing regulatory agency….(More)”.

Wrongfully Accused by an Algorithm


Kashmir Hill at the New York Times: “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit….

The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police, according to their report.

Five months later, in March 2019, Jennifer Coulson, a digital image examiner for the Michigan State Police, uploaded a “probe image” — a still from the video, showing the man in the Cardinals cap — to the state’s facial recognition database. The system would have mapped the man’s face and searched for similar ones in a collection of 49 million photos.

The state’s technology is supplied for $5.5 million by a company called DataWorks Plus. Founded in South Carolina in 2000, the company first offered mug shot management software, said Todd Pastorini, a general manager. In 2005, the firm began to expand the product, adding face recognition tools developed by outside vendors.

When one of these subcontractors develops an algorithm for recognizing faces, DataWorks attempts to judge its effectiveness by running searches using low-quality images of individuals it knows are present in a system. “We’ve tested a lot of garbage out there,” Mr. Pastorini said. These checks, he added, are not “scientific” — DataWorks does not formally measure the systems’ accuracy or bias.
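The informal checks Mr. Pastorini describes amount to a retrieval test: seed a gallery with known faces, search with degraded probe images, and count how often the right person surfaces near the top. A hedged sketch of how such a check could be scored, assuming faces are compared as embedding vectors by cosine similarity (an assumption of mine; the article does not say how DataWorks or its vendors actually compare faces):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def top_k_hit_rate(probes, gallery, k=10):
    """probes: list of (embedding, true_id); gallery: dict id -> embedding.
    Fraction of probes whose true identity ranks among the k closest faces."""
    hits = 0
    for emb, true_id in probes:
        ranked = sorted(gallery,
                        key=lambda gid: cosine(emb, gallery[gid]),
                        reverse=True)
        if true_id in ranked[:k]:
            hits += 1
    return hits / len(probes)
```

A formal evaluation, which the article notes DataWorks does not perform, would report this hit rate separately by image quality and by demographic group, since an aggregate number can hide exactly the disparities a biased system produces.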

“We’ve become a pseudo-expert in the technology,” Mr. Pastorini said.

In Michigan, the DataWorks software used by the state police incorporates components developed by the Japanese tech giant NEC and by Rank One Computing, based in Colorado, according to Mr. Pastorini and a state police spokeswoman. In 2019, algorithms from both companies were included in a federal study of over 100 facial recognition systems that found they were biased, falsely identifying African-American and Asian faces 10 times to 100 times more than Caucasian faces….(More)“.

Nightmare of the Imaginaries: A Critique of Socio-technical Imaginaries Commonly Applied to Governance


Essay by Paul Waller: “This essay aims to analyse and debunk several technology-related concepts commonly discussed in papers, reports and speeches by academics, consultancies, politicians and governmental bodies. Each reflects a presumption about how technology, the internet in particular, and technology-enabled social and political processes might affect the practice of governing. The discussion characterizes the concepts as “socio-technical imaginaries”, a term for ideas that link the socio-political environment with technology. Socio-technical imaginaries start as a description of potentially attainable futures, turn into a prescription of futures that ought to be attained, then become received wisdom about the present day. They are speculation that takes root through reuse and endorsement by authoritative figures, becoming an asserted present reality on the basis of little or no evidence. Once imaginaries become widely accepted and used, they may shape trajectories of research and innovation, steering technological progress as well as public and private expenditure. The imaginaries addressed are: Public Sector Innovation, Digital Transformation of Government, Co-creation & Co-production of Public Services, Crowd-sourcing, Wisdom of Crowds, Collaborative Governance, Customer/Citizen Centricity, Once-only Principle, Personalisation, Big Data, Nudge (Behavioral Insights), Platform Government/GaaP, and Online Participation.

Four questions are posed to critique each imaginary: What is the received wisdom? What does that really mean? What is the problem/what has gone wrong? What to do better/what should it be? As a whole package, these imaginaries represent a nightmare for liberal, representative democracy. Some may enable the “panoptic” state, others may undermine existing institutions to open a void for it to step into. Many have the likelihood of creating or reinforcing inequality of opportunity, outcome or influence. But their grip is hard to loosen. The notions that they are inevitable or that issues will be resolved in due course by technology itself need to be challenged by surfacing the human, social and political dimensions and actively addressing them….(More)”.

IRS Used Cellphone Location Data to Try to Find Suspects


Byron Tau at the Wall Street Journal: “The Internal Revenue Service attempted to identify and track potential criminal suspects by purchasing access to a commercial database that records the locations of millions of American cellphones.

The IRS Criminal Investigation unit, or IRS CI, had a subscription to access the data in 2017 and 2018, and the way it used the data was revealed last week in a briefing by IRS CI officials to Sen. Ron Wyden’s (D., Ore.) office. The briefing was described to The Wall Street Journal by an aide to the senator.

IRS CI officials told Mr. Wyden’s office that their lawyers had given verbal approval for the use of the database, which is sold by a Virginia-based government contractor called Venntel Inc. Venntel obtains anonymized location data from the marketing industry and resells it to governments. IRS CI added that it let its Venntel subscription lapse after it failed to locate any targets of interest during the year it paid for the service, according to Mr. Wyden’s aide.

Justin Cole, a spokesman for IRS CI, said it entered into a “limited contract with Venntel to test their services against the law enforcement requirements of our agency.” IRS CI pursues the most serious and flagrant violations of tax law, and it said it used the Venntel database in “significant money-laundering, cyber, drug and organized-crime cases.”

The episode demonstrates a growing law enforcement interest in reams of anonymized cellphone movement data collected by the marketing industry. Government entities can try to use the data to identify individuals—which in many cases isn’t difficult with such databases.
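One reason identification “isn’t difficult” is that even anonymized pings betray routine: the cell a device occupies overnight is usually its owner’s home. A toy sketch of that re-identification heuristic (illustrative only, and not how Venntel or the IRS actually processed the data):

```python
from collections import Counter

def likely_home(pings):
    """pings: list of (hour_of_day, cell_id) observations for one device.
    Returns the cell the device most often occupies overnight (10pm-6am)."""
    night = [cell for hour, cell in pings if hour >= 22 or hour < 6]
    return Counter(night).most_common(1)[0][0] if night else None
```

Joined against a public address record, an inferred home location like this can turn an “anonymous” device ID back into a name, which is why such databases are attractive to investigators despite carrying no names at all.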

It also shows that data from the marketing industry can be used as an alternative to obtaining data from cellphone carriers, a process that requires a court order. Until 2018, prosecutors needed “reasonable grounds” to seek cell tower records from a carrier. In June 2018, the U.S. Supreme Court strengthened the requirement to show probable cause a crime has been committed before such data can be obtained from carriers….(More)”

How Facebook, Twitter and other data troves are revolutionizing social science


Heidi Ledford at Nature: “Elizaveta Sivak spent nearly a decade training as a sociologist. Then, in the middle of a research project, she realized that she needed to head back to school.

Sivak studies families and childhood at the National Research University Higher School of Economics in Moscow. In 2015, she studied the movements of adolescents by asking them in a series of interviews to recount ten places that they had visited in the past five days. A year later, she had analysed the data and was feeling frustrated by the narrowness of relying on individual interviews, when a colleague pointed her to a paper analysing data from the Copenhagen Networks Study, a ground-breaking project that tracked the social-media contacts, demographics and location of about 1,000 students, with five-minute resolution, over five months [1]. She knew then that her field was about to change. “I realized that these new kinds of data will revolutionize social science forever,” she says. “And I thought that it’s really cool.”

With that, Sivak decided to learn how to program, and join the revolution. Now, she and other computational social scientists are exploring massive and unruly data sets, extracting meaning from society’s digital imprint. They are tracking people’s online activities; exploring digitized books and historical documents; interpreting data from wearable sensors that record a person’s every step and contact; conducting online surveys and experiments that collect millions of data points; and probing databases that are so large that they will yield secrets about society only with the help of sophisticated data analysis.

Over the past decade, researchers have used such techniques to pick apart topics that social scientists have chased for more than a century: from the psychological underpinnings of human morality, to the influence of misinformation, to the factors that make some artists more successful than others. One study uncovered widespread racism in algorithms that inform health-care decisions [2]; another used mobile-phone data to map impoverished regions in Rwanda [3].

“The biggest achievement is a shift in thinking about digital behavioural data as an interesting and useful source”, says Markus Strohmaier, a computational social scientist at the GESIS Leibniz Institute for the Social Sciences in Cologne, Germany.

Not everyone has embraced that shift. Some social scientists are concerned that the computer scientists flooding into the field with ambitions as big as their data sets are not sufficiently familiar with previous research. Another complaint is that some computational researchers look only at patterns and do not consider the causes, or that they draw weighty conclusions from incomplete and messy data — often gained from social-media platforms and other sources that are lacking in data hygiene.

The barbs fly both ways. Some computational social scientists who hail from fields such as physics and engineering argue that many social-science theories are too nebulous or poorly defined to be tested.

This all amounts to “a power struggle within the social-science camp”, says Marc Keuschnigg, an analytical sociologist at Linköping University in Norrköping, Sweden. “Who in the end succeeds will claim the label of the social sciences.”

But the two camps are starting to merge. “The intersection of computational social science with traditional social science is growing,” says Keuschnigg, pointing to the boom in shared journals, conferences and study programmes. “The mutual respect is growing, also.”…(More)”.

Normalizing Health-Positive Technology


Article by Sara J. Singer, Stephen Downs, Grace Ann Joseph, Neha Chaudhary, Christopher Gardner, Nina Hersher, Kelsey P. Mellard, Norma Padrón & Yennie Solheim: “….Aligning the technology sector with a societal goal of greater health and well-being entails a number of shifts in thinking. The most fundamental is understanding health not as a vertical market segment but as a horizontal value: rather than offering a dedicated line of health products or services, a company should express health across its full portfolio of products and services. Rather than pushing behaviors on people through information and feedback, technology companies should also pull behaviors from people by changing the environment and the products they are offered; in addition to developing technology to help people overcome the challenge of being healthy, we need to envision technology that helps to reduce the challenges to being healthy. And in addition to holding individuals responsible for the choices that they make, we also need to recognize the collective responsibility that society bears for the choices it makes available.

How to catalyze these shifts?

To find out, we convened a gathering on “tech-enabled health,” in which 50 entrepreneurs, leaders from large technology companies, investors, policymakers, clinicians, and public health experts designed a hands-on, interactive, and substantively focused agenda. Participants brainstormed ways that consumer-facing technologies could help people move more, eat better, sleep well, stay socially connected, and reduce stress. In groups and collectively, participants also considered ways in which ideas related and might be synergistic, potential barriers and contextual conditions that might impede or support transformation, and strategies for catalyzing the desired shift. Participants were mixed in terms of sector, discipline, and gender (though the attendees were not as diverse in terms of race/ethnicity or economic strata as the users we potentially wanted to impact—a limitation noted by participants). We intentionally maintained a positive tone, emphasizing potential benefits of shifting toward a health-positive approach, rather than bemoaning the negative role that technology can play….(More)”.