Why Hundreds of Mathematicians Are Boycotting Predictive Policing


Courtney Linder at Popular Mechanics: “Several prominent academic mathematicians want to sever ties with police departments across the U.S., according to a letter submitted to Notices of the American Mathematical Society on June 15. The letter arrived weeks after widespread protests against police brutality, and has inspired over 1,500 other researchers to join the boycott.

These mathematicians are urging fellow researchers to stop all work related to predictive policing software, which broadly includes any data analytics tools that use historical data to help forecast future crime, potential offenders, and victims. The technology is supposed to use probability to help police departments tailor their neighborhood coverage so it puts officers in the right place at the right time….

[Figure: flow chart showing how predictive policing works. Source: RAND]

According to a 2013 research briefing from the RAND Corporation, a nonprofit think tank in Santa Monica, California, predictive policing is made up of a four-part cycle (shown above). In the first two steps, researchers collect and analyze data on crimes, incidents, and offenders to come up with predictions. From there, police intervene based on the predictions; the interventions usually take the form of an increase in resources at certain sites at certain times. The fourth step is, ideally, reducing crime.
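The prediction step at the heart of this cycle is, in its simplest form, just a ranking of areas by historical incident counts. The sketch below is illustrative only (the function name and data shape are invented, and real systems are far more elaborate); it also makes visible the critique raised later in the piece: if the historical data reflect biased policing, the "forecast" simply reproduces it.

```python
from collections import Counter

def forecast_hotspots(incidents, top_n=3):
    """Rank areas by historical incident counts -- the naive core of a
    'predictive' cycle. Each incident is an (area, timestamp) pair.
    Note the feedback loop: areas patrolled more heavily generate more
    recorded incidents, which raises their rank next time around."""
    counts = Counter(area for area, _timestamp in incidents)
    return [area for area, _count in counts.most_common(top_n)]

incidents = [("A", 1), ("B", 2), ("A", 3), ("C", 4), ("A", 5), ("B", 6)]
print(forecast_hotspots(incidents, top_n=2))  # ['A', 'B']
```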

“Law enforcement agencies should assess the immediate effects of the intervention to ensure that there are no immediately visible problems,” the authors note. “Agencies should also track longer-term changes by examining collected data, performing additional analysis, and modifying operations as needed.”

In many cases, predictive policing software was meant to be a tool to augment police departments that are facing budget crises with fewer officers to cover a region. If cops can target certain geographical areas at certain times, then they can get ahead of the 911 calls and maybe even reduce the rate of crime.

But in practice, the accuracy of the technology has been contested—and it’s even been called racist….(More)”.

Why real-time economic data need to be treated with caution


The Economist: “The global downturn of 2020 is probably the most quantified on record. Economists, firms and statisticians seeking to gauge the depth of the collapse in economic activity and the pace of the recovery have seized upon a new dashboard of previously obscure indicators. Investors eagerly await the release of mobility statistics from tech companies such as Apple or Google, or restaurant-booking data from OpenTable, in a manner once reserved for official inflation and unemployment estimates. Central bankers pepper their speeches with novel barometers of consumer spending. Investment-bank analysts and journalists tout hot new measures of economic activity in the way that hipsters discuss the latest bands. Those who prefer to wait for official measures are regarded as being like fans of U2, a sanctimonious Irish rock group: stuck behind the curve as the rest of the world has moved on.

The main attraction of real-time data to policymakers and investors alike is timeliness. Whereas official, so-called hard data, such as inflation, employment or output measures, tend to be released with a lag of several weeks, or even months, real-time data, as the name suggests, can offer a window on today’s economic conditions. The depth of the downturns induced by Covid-19 has put a premium on swift intelligence. The case for hard data has always been their quality, but this has suffered greatly during the pandemic. Compilers of official labour-market figures have struggled to account for furlough schemes and the like, and have plastered their releases with warnings about unusually high levels of uncertainty. Filling in statisticians’ forms has probably fallen to the bottom of firms’ to-do lists, reducing the accuracy of official output measures….

The value of real-time measures will be tested once the swings in economic activity approach a more normal magnitude. Mobility figures for March and April did predict the scale of the collapse in GDP, but that could have been estimated just as easily by stepping outside and looking around (at least in the places where that sort of thing was allowed during lockdown). Forecasters in rich countries are more used to quibbling over whether economies will grow at an annual rate of 2% or 3% than whether output will shrink by 20% or 30% in a quarter. Real-time measures have disappointed before. Immediately after Britain’s vote to leave the European Union in 2016, for instance, the indicators then watched by economists pointed to a sharp slowdown. It never came.

Real-time data, when used with care, have been a helpful supplement to official measures so far this year. With any luck the best of the new indicators will help official statisticians improve the quality and timeliness of their own figures. But, much like U2, the official measures have been around for a long time thanks to their tried and tested formula—and they are likely to stick around for a long time to come….(More)”.

German coronavirus experiment enlists help of concertgoers


Philip Oltermann at the Guardian: “German scientists are planning to equip 4,000 pop music fans with tracking gadgets and bottles of fluorescent disinfectant to get a clearer picture of how Covid-19 could be prevented from spreading at large indoor concerts.

As cultural mass gatherings across the world remain on hold for the foreseeable future, researchers in eastern Germany are recruiting volunteers for a “coronavirus experiment” with the singer-songwriter Tim Bendzko, to be held at an indoor stadium in the city of Leipzig on 22 August.

Participants, aged between 18 and 50, will wear matchstick-sized “contact tracer” devices on chains around their necks that transmit a signal at five-second intervals and collect data on each person’s movements and proximity to other members of the audience.
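The analysis such devices enable can be sketched in a few lines: aggregate the five-second proximity pings into per-pair contact durations, then flag pairs that stayed close for long enough. This is a minimal illustration under assumed thresholds (the 1.5-metre distance and 15-second minimum are invented for the example, not taken from the Leipzig study).

```python
from collections import defaultdict

CLOSE_METRES = 1.5        # assumed proximity threshold (hypothetical)
PING_SECONDS = 5          # devices transmit at five-second intervals
MIN_CONTACT_SECONDS = 15  # assumed minimum duration to count as contact

def close_contacts(pings):
    """Aggregate proximity pings into close-contact pairs.
    Each ping is (device_a, device_b, distance_in_metres); every ping
    within range adds one five-second interval to that pair's total."""
    durations = defaultdict(int)
    for a, b, distance in pings:
        if distance <= CLOSE_METRES:
            durations[frozenset((a, b))] += PING_SECONDS
    return {tuple(sorted(pair)) for pair, seconds in durations.items()
            if seconds >= MIN_CONTACT_SECONDS}

pings = [("p1", "p2", 1.0), ("p1", "p2", 1.2), ("p1", "p2", 0.9),
         ("p1", "p3", 3.0)]
print(close_contacts(pings))  # {('p1', 'p2')}
```

Three in-range pings between p1 and p2 sum to 15 seconds and qualify; the single distant p1–p3 ping does not.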

Inside the venue, they will also be asked to disinfect their hands with a fluorescent hand-sanitiser – designed not just to add a layer of protection but also to allow scientists to scour the venue with UV lights after the concerts to identify surfaces where a transmission of the virus through smear infection is most likely to take place.

Vapours from a fog machine will help visualise the possible spread of coronavirus via aerosols, which the scientists will try to predict via computer-generated models in advance of the event.

The €990,000 cost of the Restart-19 project will be shared between the federal states of Saxony and Saxony-Anhalt. The project’s organisers say the aim is to “identify a framework” for how larger cultural and sports events could be held “without posing a danger for the population” after 30 September….

To stop the Leipzig experiment from becoming the source of a new outbreak, signed-up volunteers will be sent a DIY test kit and have a swab at a doctor’s practice or laboratory 48 hours before the concert starts. Those who cannot show proof of a negative test at the door will be denied entry….(More)”.

Coronavirus: how the pandemic has exposed AI’s limitations


Kathy Peach at The Conversation: “It should have been artificial intelligence’s moment in the sun. With billions of dollars of investment in recent years, AI has been touted as a solution to every conceivable problem. So when the COVID-19 pandemic arrived, a multitude of AI models were immediately put to work.

Some hunted for new compounds that could be used to develop a vaccine, or attempted to improve diagnosis. Some tracked the evolution of the disease, or generated predictions for patient outcomes. Some modelled the number of cases expected given different policy choices, or tracked similarities and differences between regions.

The results, to date, have been largely disappointing. Very few of these projects have had any operational impact – hardly living up to the hype or the billions in investment. At the same time, the pandemic highlighted the fragility of many AI models. From entertainment recommendation systems to fraud detection and inventory management – the crisis has seen AI systems go awry as they struggled to adapt to sudden collective shifts in behaviour.

The unlikely hero

The unlikely hero emerging from the ashes of this pandemic is instead the crowd. Crowds of scientists around the world sharing data and insights faster than ever before. Crowds of local makers manufacturing PPE for hospitals failed by supply chains. Crowds of ordinary people organising through mutual aid groups to look after each other.

COVID-19 has reminded us of just how quickly humans can adapt existing knowledge, skills and behaviours to entirely new situations – something that highly specialised AI systems just can’t do. At least not yet….

In one of the experiments, researchers from the Istituto di Scienze e Tecnologie della Cognizione in Rome studied the use of an AI system designed to reduce social biases in collective decision-making. The AI, which held back information from the group members on what others thought early on, encouraged participants to spend more time evaluating the options by themselves.

The system succeeded in reducing the tendency of people to “follow the herd” by failing to hear diverse or minority views, or challenge assumptions – all of which are criticisms that have been levelled at the British government’s scientific advisory committees throughout the pandemic…(More)”.

A Letter on Justice and Open Debate


Letter in Harpers Magazine signed by 153 prominent artists and intellectuals: “Our cultural institutions are facing a moment of trial. Powerful protests for racial and social justice are leading to overdue demands for police reform, along with wider calls for greater equality and inclusion across our society, not least in higher education, journalism, philanthropy, and the arts. But this needed reckoning has also intensified a new set of moral attitudes and political commitments that tend to weaken our norms of open debate and toleration of differences in favor of ideological conformity. As we applaud the first development, we also raise our voices against the second. The forces of illiberalism are gaining strength throughout the world and have a powerful ally in Donald Trump, who represents a real threat to democracy. But resistance must not be allowed to harden into its own brand of dogma or coercion—which right-wing demagogues are already exploiting. The democratic inclusion we want can be achieved only if we speak out against the intolerant climate that has set in on all sides.

The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted. While we have come to expect this on the radical right, censoriousness is also spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty. We uphold the value of robust and even caustic counter-speech from all quarters. But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought. More troubling still, institutional leaders, in a spirit of panicked damage control, are delivering hasty and disproportionate punishments instead of considered reforms. Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes. Whatever the arguments around each particular incident, the result has been to steadily narrow the boundaries of what can be said without the threat of reprisal. We are already paying the price in greater risk aversion among writers, artists, and journalists who fear for their livelihoods if they depart from the consensus, or even lack sufficient zeal in agreement.

This stifling atmosphere will ultimately harm the most vital causes of our time. The restriction of debate, whether by a repressive government or an intolerant society, invariably hurts those who lack power and makes everyone less capable of democratic participation. The way to defeat bad ideas is by exposure, argument, and persuasion, not by trying to silence or wish them away. We refuse any false choice between justice and freedom, which cannot exist without each other. As writers we need a culture that leaves us room for experimentation, risk taking, and even mistakes. We need to preserve the possibility of good-faith disagreement without dire professional consequences. If we won’t defend the very thing on which our work depends, we shouldn’t expect the public or the state to defend it for us….(More)”.

How urban design can make or break protests


Peter Schwartzstein in Smithsonian Magazine: “If protesters could plan a perfect stage to voice their grievances, it might look a lot like Athens, Greece. Its broad, yet not overly long, central boulevards are almost tailor-made for parading. Its large parliament-facing square, Syntagma, forms a natural focal point for marchers. With a warren of narrow streets surrounding the center, including the rebellious district of Exarcheia, it’s often remarkably easy for demonstrators to steal away if the going gets rough.

Los Angeles, by contrast, is a disaster for protesters. It has no wholly recognizable center, few walkable distances, and little in the way of protest-friendly space. As far as longtime city activists are concerned, just amassing small crowds can be an achievement. “There’s really just no place to go, the city is structured in a way that you’re in a city but you’re not in a city,” says David Adler, general coordinator at the Progressive International, a new global political group. “While a protest is the coming together of a large group of people and that’s just counter to the idea of L.A.”

Among the complex medley of moving parts that guide protest movements, urban design might seem like a fairly peripheral concern. But try telling that to demonstrators from Houston to Beijing, two cities that have geographic characteristics that complicate public protest. Low urban density can thwart mass participation. Limited public space can deprive protesters of the visibility and hence the momentum they need to sustain themselves. On those occasions when proceedings turn messy or violent, alleyways, parks, and labyrinthine apartment buildings can mean the difference between detention and escape….(More)”.

The Shape of Epidemics


Essay by David S. Jones and Stefan Helmreich: “…Is the most recent rise in new cases—the sharp increase in case counts and hospitalizations reported this week in several states—a second wave, or rather a second peak of a first wave? Will the world see a devastating second wave in the fall?

Such imagery of waves has pervaded talk about the propagation of the infection from the beginning. On January 29, just under a month after the first instances of COVID-19 were reported in Wuhan, Chinese health officials published a clinical report about their first 425 cases, describing them as “the first wave of the epidemic.” On March 4 the French epidemiologist Antoine Flahault asked, “Has China just experienced a herald wave, to use terminology borrowed from those who study tsunamis, and is the big wave still to come?” The Asia Times warned shortly thereafter that “a second deadly wave of COVID-19 could crash over China like a tsunami.” A tsunami, however, struck elsewhere, with the epidemic surging in Iran, Italy, France, and then the United States. By the end of April, with the United States having passed one million cases, the wave forecasts had become bleaker. Prominent epidemiologists predicted three possible future “wave scenarios”—described by one Boston reporter as “seascapes,” characterized either by oscillating outbreaks, the arrival of a “monster wave,” or a persistent and rolling crisis.


From Kristine Moore et al., “The Future of the COVID-19 Pandemic” (April 30, 2020). Used with permission from the Center for Infectious Disease Research and Policy, University of Minnesota.

While this language may be new to much of the public, the figure of the wave has long been employed to describe, analyze, and predict the behavior of epidemics. Understanding this history can help us better appreciate the conceptual inheritances of a scientific discipline suddenly at the center of public discussion. It can also help us judge the utility as well as limitations of those representations of epidemiological waves now in play in thinking about the science and policy of COVID-19. As the statistician Edward Tufte writes in his classic work The Visual Display of Quantitative Information (1983), “At their best, graphics are instruments for reasoning about quantitative information.” The wave, operating as a hybrid of the diagrammatic, mathematical, and pictorial, certainly does help to visualize and think about COVID-19 data, but it also does much more. The wave image has become an instrument for public health management and prediction—even prophecy—offering a synoptic, schematic view of the dynamics it describes.

This essay sketches this backstory of epidemic waves, which falls roughly into three eras: waves emerge first as a device of data visualization, then evolve into an object of mathematical modeling and causal investigation and finally morph into a tool of persuasion, intervention, and governance. Accounts of the wave-like rise and fall of rates of illness and death in populations first appeared in the mid-nineteenth century, with England a key player in developments that saw government officials collect data permitting the graphical tabulation of disease trends over time. During this period the wave image was primarily metaphorical, a heuristic way of talking about patterns in data. Using curving numerical plots, epidemiologists offered analogies between the spread of infection and the travel of waves, sometimes transposing the temporal tracing of epidemic data onto maps of geographical space. Exactly what mix of forces—natural or social—generated these “epidemic waves” remained a source of speculation….(More)”.

Politicians ignore far-out risks: they need to up their game


The Economist: “In 1993 this newspaper told the world to watch the skies. At the time, humanity’s knowledge of asteroids that might hit the Earth was woefully inadequate. Like nuclear wars and large volcanic eruptions, the impacts of large asteroids can knock seven bells out of the climate; if one thereby devastated a few years’ worth of harvests around the globe it would kill an appreciable fraction of the population. Such an eventuality was admittedly highly unlikely. But given the consequences, it made actuarial sense to see if any impact was on the cards, and at the time no one was troubling themselves to look.

Asteroid strikes were an extreme example of the world’s wilful ignorance, perhaps—but not an atypical one. Low-probability, high-impact events are a fact of life. Individual humans look for protection from them to governments and, if they can afford it, insurers. Humanity, at least as represented by the world’s governments, reveals instead a preference to ignore them until forced to react—even when foresight’s price-tag is small. It is an abdication of responsibility and a betrayal of the future.

Covid-19 offers a tragic example. Virologists, epidemiologists and ecologists have warned for decades of the dangers of a flu-like disease spilling over from wild animals. But when SARS-CoV-2 began to spread very few countries had the winning combination of practical plans, the kit those plans required in place and the bureaucratic capacity to enact them. Those that did benefited greatly. Taiwan has, to date, seen just seven Covid-19 deaths; its economy has suffered correspondingly less.

Pandemics are disasters that governments have experience of. What therefore of truly novel threats? The blazing hot corona which envelops the Sun—seen to spectacular effect during solar eclipses—intermittently throws vast sheets of charged particles out into space. These cause the Northern and Southern Lights and can mess up electric grids and communications. But over the century or so in which electricity has become crucial to much of human life, the Earth has never been hit by the largest of these solar eructations. If a coronal mass ejection (CME) were to hit, all sorts of satellite systems needed for navigation, communications and warnings of missile attacks would be at risk. Large parts of the planet could face months or even years without reliable grid electricity (see Briefing). The chances of such a disaster this century are put by some at better than 50:50. Even if they are not that high, they are still higher than the chances of a national leader knowing who in their government is charged with thinking about such things.

The fact that no governments have ever seen a really big CME, or a volcanic eruption large enough to affect harvests around the world—the most recent was Tambora, in 1815—may explain their lack of forethought. It does not excuse it. Keeping an eye on the future is part of what governments are for. Scientists have provided them with the tools for such efforts, but few academics will undertake the work unbidden, unfunded and unsung. Private business may take some steps when it perceives specific risks, but it will not put together plans for society at large….(More)”.

Peer-Reviewed Scientific Journals Don’t Really Do Their Job


Article by Simine Vazire: “The rush for scientific cures and treatments for Covid-19 has opened the floodgates of direct communication between scientists and the public. Instead of waiting for their work to go through the slow process of peer review at scientific journals, scientists are now often going straight to print themselves, posting write-ups of their work to public servers as soon as they’re complete. This disregard for the traditional gatekeepers has led to grave concerns among both scientists and commentators: Might not shoddy science—and dangerous scientific errors—make its way into the media, and spread before an author’s fellow experts can correct it? As two journalism professors suggested in an op-ed last month for The New York Times, it’s possible the recent spread of so-called preprints has only “sown confusion and discord with a general public not accustomed to the high level of uncertainty inherent in science.”

There’s another way to think about this development, however. Instead of showing (once again) that formal peer review is vital for good science, the last few months could just as well suggest the opposite. To me, at least—someone who’s served as an editor at seven different journals, and editor in chief at two—the recent spate of decisions to bypass traditional peer review gives the lie to a pair of myths that researchers have encouraged the public to believe for years: First, that peer-reviewed journals publish only trustworthy science; and second, that trustworthy science is published only in peer-reviewed journals.

Scientists allowed these myths to spread because it was convenient for us. Peer-reviewed journals came into existence largely to keep government regulators off our backs. Scientists believe that we are the best judges of the validity of each other’s work. That’s very likely true, but it’s a huge leap from that to “peer-reviewed journals publish only good science.” The most selective journals still allow flawed studies—even really terribly flawed ones—to be published all the time. Earlier this month, for instance, the journal Proceedings of the National Academy of Sciences put out a paper claiming that mandated face coverings are “the determinant in shaping the trends of the pandemic.” PNAS is a very prestigious journal, and their website claims that they are an “authoritative source” that works “to publish only the highest quality scientific research.” However, this paper was quickly and thoroughly criticized on social media; by last Thursday, 45 researchers had signed a letter formally calling for its retraction.

Now the jig is up. Scientists are writing papers that they want to share as quickly as possible, without waiting the months or sometimes years it takes to go through journal peer review. So they’re ditching the pretense that journals are a sure-fire quality control filter, and sharing their papers as self-published PDFs. This might be just the shakeup that peer review needs….(More)”.

A Council of Citizens Should Regulate Algorithms


Federica Carugati at Wired: “…A new report by OpenAI suggests we should create external auditing bodies to evaluate the societal impact of algorithm-based decisions. But the report does not specify what such bodies should look like.

We don’t know how to regulate algorithms, because their application to societal problems involves a fundamental incongruity. Algorithms follow logical rules in order to optimize for a given outcome. Public policy is all about trade-offs: optimizing for some groups in society necessarily makes others worse off.

Resolving social trade-offs requires that many different voices be heard. This may sound radical, but it is in fact the original lesson of democracy: Citizens should have a say. We don’t know how to regulate algorithms, because we have become shockingly bad at citizen governance.

Is citizen governance feasible today? Sure, it is. We know from social scientists that a diverse group of people can make very good decisions. We also know from a number of recent experiments that citizens can be called upon to make decisions on very tough policy issues, including climate change, and even to shape constitutions. Finally, we can draw from the past for inspiration on how to actually build citizen-run institutions.

The ancient Athenians—the citizens of the world’s first large-scale experiment in democracy—built an entire society on the principle of citizen governance. One institution stands out for our purposes: the Council of Five Hundred, a deliberative body in charge of all decisionmaking, from war to state finance to entertainment. Every year, 50 citizens from each of the 10 tribes were selected by lot to serve. Selection occurred among those that had not served the year before and had not already served twice.

These simple organizational rules facilitated broad participation, knowledge aggregation, and citizen learning. First, because the term was limited and could not be iterated more than twice, over time a broad section of the population—rich and poor, educated and not—participated in decisionmaking. Second, because the council represented the whole population (each tribe integrated three different geographic constituencies), it could draw upon the diverse knowledge of its members. Third, at the end of their mandate, councillors returned home with a body of knowledge about the affairs of their city that they could share with their families, friends, and coworkers, some of whom had already served and some of whom soon would. Certainly, the Athenians did not follow through on their commitment to inclusion. As a result, many people’s voices went unheard, including those of women, foreigners, and slaves. But we don’t need to follow the Athenian example on this front.
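The selection rules described above are simple enough to state as an algorithm: draw by lot, per tribe, excluding anyone who served the previous year or has already served twice. The sketch below is a minimal illustration (the field names and the small quota are hypothetical, not a reconstruction of Athenian procedure).

```python
import random

def select_council(citizens, per_tribe=50, max_terms=2, seed=None):
    """Sortition with the two constraints described above: exclude
    anyone who served last year, and anyone who has already served
    max_terms times. Each citizen is a dict with keys
    'name', 'tribe', 'terms_served', 'served_last_year' (hypothetical)."""
    rng = random.Random(seed)
    council = []
    for tribe in sorted({c["tribe"] for c in citizens}):
        eligible = [c for c in citizens
                    if c["tribe"] == tribe
                    and not c["served_last_year"]
                    and c["terms_served"] < max_terms]
        if len(eligible) < per_tribe:
            raise ValueError(f"tribe {tribe}: only {len(eligible)} eligible")
        council.extend(rng.sample(eligible, per_tribe))  # draw by lot
    return council
```

Over repeated annual draws, the term limits force the lot to reach ever deeper into the eligible population, which is the participation-broadening effect the passage describes.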

A citizen council for algorithms modeled on the Athenian example would represent the entire American citizen population. We already do this with juries (although it is possible that, when decisions affect a specific constituency, a better fit with the actual polity might be required). Citizens’ deliberations would be informed by agency self-assessments and algorithmic impact statements for decision systems used by government agencies, and internal auditing reports for industry, as well as reports from investigative journalists and civil society activists, whenever available. Ideally, the council would act as an authoritative body or as an advisory board to an existing regulatory agency….(More)”.