The necessity of judgment


Essay by Jeff Malpas in AI and Society: “In 2016, the Australian Government launched an automated debt recovery system through Centrelink—its Department of Human Services. The system, which came to be known as ‘Robodebt’, matched the tax records of welfare recipients with their declared incomes as held by the Department and then sent out debt notices to recipients demanding payment. The entire system was computerized, and many of those receiving debt notices complained that the demands for repayment they received were false or inaccurate as well as unreasonable—all the more so given that those being targeted were, almost by definition, those in already vulnerable circumstances. The system provoked enormous public outrage, was subjected to successful legal challenge, and after being declared unlawful, the Government paid back all of the payments that had been received, and eventually, after much prompting, issued an apology.

The Robodebt affair is characteristic of a more general tendency to shift to systems of automated decision-making across both the public and the private sector and to do so even when those systems are flawed and known to be so. On the face of it, this shift is driven by the belief that automated systems have the capacity to deliver greater efficiencies and economies—in the Robodebt case, to reduce costs by recouping and reducing social welfare payments. In fact, the shift is characteristic of a particular alliance between digital technology and a certain form of contemporary bureaucratised capitalism. In the case of the automated systems we see in governmental and corporate contexts—and in many large organisations—automation is a result both of the desire on the part of software, IT, and consultancy firms to increase their customer base as well as expand the scope of their products and sales, and of the desire on the part of governments and organisations to increase control at the same time as they reduce their reliance on human judgment and capacity. The fact is, such systems seldom deliver the efficiencies or economies they are assumed to bring, and they also give rise to significant additional costs in terms of their broader impact and consequences, but the imperatives of sales and seemingly increased control (as well as an irrational belief in the benefits of technological solutions) override any other consideration. The turn towards automated systems like Robodebt is, as is now widely recognised, a common feature of contemporary society. To look to a completely different domain, new military technologies are being developed to provide drone weapon systems with the capacity to identify potential threats and defend themselves against them. The development is spawning a whole new field of military ethics based entirely around the putative ‘right to self-defence’ of automated weapon systems.

In both cases, the drone weapon system and Robodebt, we have instances of the development of automated systems that seem to allow for a form of ‘judgment’ that appears to operate independently of human judgment—hence the emphasis on these systems as autonomous. One might argue—and typically it is so argued—that any flaws that such systems currently present can be overcome either through the provision of more accurate information or through the development of more complex forms of artificial intelligence….(More)”.
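
The automated matching Malpas describes is, at its core, a few lines of arithmetic. Below is a deliberately simplified sketch, not the actual Centrelink logic (the field names are invented; the widely reported flaw was smoothing annual tax-office income into 26 equal fortnights, which manufactured "discrepancies" for anyone whose earnings were uneven):

```python
# Toy sketch of a Robodebt-style income-matching rule. Field names are
# hypothetical, not the real Centrelink implementation. The widely
# reported flaw: annual tax-office income was averaged into 26 equal
# fortnights, so seasonal or casual workers looked like under-reporters.

FORTNIGHTS = 26

def flagged_underreporting(annual_tax_income: float,
                           declared_by_fortnight: list[float]) -> float:
    """Total income the rule claims was under-declared; in the real
    system a figure like this seeded an automated debt notice."""
    smoothed = annual_tax_income / FORTNIGHTS  # the flawed averaging step
    return sum(max(smoothed - declared, 0.0)
               for declared in declared_by_fortnight)

# A seasonal worker earns $26,000 in 10 fortnights and declares it honestly.
declared = [2600.0] * 10 + [0.0] * 16
print(flagged_underreporting(26000.0, declared))  # 16000.0, a phantom gap
```

No human checks whether the flagged "discrepancy" is an artifact of the averaging step; that absence of judgment is precisely the gap the essay is about.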

The Potential Role Of Open Data In Mitigating The COVID-19 Pandemic: Challenges And Opportunities


Essay by Sunyoung Pyo, Luigi Reggi and Erika G. Martin: “…There is one tool for the COVID-19 response that was not as robust in past pandemics: open data. For about 15 years, a “quiet open data revolution” has led to the widespread availability of governmental data that are publicly accessible, available in multiple formats, free of charge, and with unlimited use and distribution rights. The underlying logic of open data’s value is that diverse users including researchers, practitioners, journalists, application developers, entrepreneurs, and other stakeholders will synthesize the data in novel ways to develop new insights and applications. Specific products have included providing the public with information about their providers and health care facilities, spotlighting issues such as high variation in the cost of medical procedures between facilities, and integrating food safety inspection reports into Yelp to help the public make informed decisions about where to dine. It is believed that these activities will in turn empower health care consumers and improve population health.

Here, we describe several use cases whereby open data have already been used globally in the COVID-19 response. We highlight major challenges to using these data and provide recommendations on how to foster a robust open data ecosystem to ensure that open data can be leveraged in both this pandemic and future public health emergencies…(More)”. See also Repository of Open Data for Covid19 (OECD/TheGovLab).

Learning like a State: Statecraft in the Digital Age


Essay by Marion Fourcade and Jeff Gordon: “…Recent books have argued that we live in an age of “informational” or “surveillance” capitalism, a new form of market governance marked by the accumulation and assetization of information, and by the dominance of platforms as sites of value extraction. Over the last decade-plus, both actual and idealized governance have been transformed by a combination of neoliberal ideology, new technologies for tracking and ranking populations, and the normative model of the platform behemoths, which carry the banner of technological modernity. In concluding a review of Julie Cohen’s and Shoshana Zuboff’s books, Amy Kapczynski asks how we might build public power sufficient to govern the new private power. Answering that question, we believe, requires an honest reckoning with how public power has been warped by the same ideological, technological, and legal forces that brought about informational capitalism.

In our contribution to the inaugural JLPE issue, we argue that governments and their agents are starting to conceive of their role differently than in previous techno-social moments. Our jumping-off point is the observation that what may first appear as mere shifts in the state’s use of technology—from the “open data” movement to the NSA’s massive surveillance operation—actually herald a deeper transformation in the nature of statecraft itself. By “statecraft,” we mean the state’s mode of learning about society and intervening in it. We contrast what we call the “dataist” state with its high modernist predecessor, as portrayed memorably by the anthropologist James C. Scott, and with neoliberal governmentality, described by, among others, Michel Foucault and Wendy Brown.

The high modernist state expanded the scope of sovereignty by imposing borders, taking censuses, and coercing those on the outskirts of society into legibility through broad categorical lenses. It deployed its power to support large public projects, such as the reorganization of urban infrastructure. As the ideological zeitgeist evolved toward neoliberalism in the 1970s, however, the priority shifted to shoring up markets, and the imperative of legibility trickled down to the individual level. The poor and working class were left to fend for their rights and benefits in the name of market fitness and responsibility, while large corporations and the wealthy benefited handsomely.

As a political rationality, dataism builds on both of these threads by pursuing a project of total measurement in a neoliberal fashion—that is, by allocating rights and benefits to citizens and organizations according to (questionable) estimates of moral desert, and by re-assembling a legible society from the bottom up. Weakened by decades of anti-government ideology and concomitantly eroded capacity, privatization, and symbolic degradation, Western states have determined to manage social problems as they bubble up into crises rather than affirmatively seeking to intervene in their causes. The dataist state sets its sights on an expanse of emergent opportunities and threats. Its focus is not on control or competition, but on “readiness.” Its object is neither the population nor a putative homo economicus, but (as Gilles Deleuze put it) “dividuals,” that is, discrete slices of people and things (e.g. hospital visits, police stops, commuting trips). Under dataism, a well-governed society is one where events (not persons) are aligned to the state’s models and predictions, no matter how disorderly in high modernist terms or how irrational in neoliberal terms….(More)”.

Covid-19 is reshaping collective intelligence


Chris Zollinger at Diplomatic Courier: “What a difference a year makes. A survey in April showed that almost 40% of people in the EU had switched to remote work, while estimates in the U.S. range from 30-50%. The video conference has become a staple of our daily working lives in a way that would have been inconceivable 12 months ago, while virtual collaboration tools have become ubiquitous.  

Given the straitened economic climate, it is unsurprising that many businesses see the situation as an opportunity to permanently reduce their cost base. Facebook, for example, has announced that it expects half of its global workforce to work remotely within the next five to ten years, with Twitter, Barclays and Mondelez International making similar moves. On a purely financial level, this seems like a win-win for everyone concerned: employers can save on the capital and operational costs of providing office space, while employees can save the time and money that it would have cost to commute.

However, if we want to move beyond mere economic survival towards recovery and growth, we need to be more ambitious in our thinking. Rather than merely cutting costs, we now have the chance to drive greater innovation and productivity by building more flexible, remote teams. In addition to the cost and time savings associated with remote work, companies now have an opportunity to shift the focus of their recruitment to new geographic areas and hire talented new employees without the need for them to physically relocate. In this way, they can form purpose-built teams to solve specific tasks over a defined time period….(More)”.

Taming Complexity


Martin Reeves, Simon Levin, Thomas Fink and Ania Levina at Harvard Business Review: “….“Complexity” is one of the most frequently used terms in business but also one of the most ambiguous. Even in the sciences it has numerous definitions. For our purposes, we’ll define it as a large number of different elements (such as specific technologies, raw materials, products, people, and organizational units) that have many different connections to one another. Both qualities can be a source of advantage or disadvantage, depending on how they’re managed.

Let’s look at their strengths. To begin with, having many different elements increases the resilience of a system. A company that relies on just a few technologies, products, and processes—or that is staffed with people who have very similar backgrounds and perspectives—doesn’t have many ways to react to unforeseen opportunities and threats. What’s more, the redundancy and duplication that also characterize complex systems typically give them more buffering capacity and fallback options.

Ecosystems with a diversity of elements benefit from adaptability. In biology, genetic diversity is the grist for natural selection, nature’s learning mechanism. In business, as environments shift, sustained performance requires new offerings and capabilities—which can be created by recombining existing elements in fresh ways. For example, the fashion retailer Zara introduces styles (combinations of components) in excess of immediate needs, allowing it to identify the most popular products, create a tailored selection from them, and adapt to fast-changing fashion as a result.

Another advantage that complexity can confer on natural ecosystems is better coordination. That’s because the elements are often highly interconnected. Flocks of birds or herds of animals, for instance, share behavioral protocols that connect the members to one another and enable them to move and act as a group rather than as an uncoordinated collection of individuals. Thus they realize benefits such as collective security and more-effective foraging.

Finally, complexity can confer inimitability. Whereas individual elements may be easily copied, the interrelationships among multiple elements are hard to replicate. A case in point is Apple’s attempt in 2012 to compete with Google Maps. Apple underestimated the complexity of Google’s offering, leading to embarrassing glitches in the initial versions of its map app, which consequently struggled to gain acceptance with consumers. The same is true of a company’s strategy: If its complexity makes it hard to understand, rivals will struggle to imitate it, and the company will benefit….(More)”.

Using Data and Respecting Users


“Three technical and legal approaches that create value from data and foster user trust” by Marshall Van Alstyne and Alisa Dagan Lenart: “Transaction data is like a friendship tie: both parties must respect the relationship and if one party exploits it the relationship sours. As data becomes increasingly valuable, firms must take care not to exploit their users or they will sour their ties. Ethical uses of data cover a spectrum: at one end, using patient data in healthcare to cure patients is little cause for concern. At the other end, selling data to third parties who exploit users is a serious cause for concern. Between these two extremes lies a vast gray area where firms need better ways to frame data risks and rewards in order to make better legal and ethical choices. This column provides a simple framework and three ways to respectfully improve data use….(More)”

Statistical illiteracy isn’t a niche problem. During a pandemic, it can be fatal


Article by Carlo Rovelli: “In the institute where I used to work a few years ago, a rare non-infectious illness hit five colleagues in quick succession. There was a sense of alarm, and a hunt for the cause of the problem. In the past the building had been used as a biology lab, so we thought that there might be some sort of chemical contamination, but nothing was found. The level of apprehension grew. Some looked for work elsewhere.

One evening, at a dinner party, I mentioned these events to a friend who is a mathematician, and he burst out laughing. “There are 400 tiles on the floor of this room; if I throw 100 grains of rice into the air, will I find,” he asked us, “five grains on any one tile?” We replied in the negative: there was only one grain for every four tiles: not enough to have five on a single tile.

We were wrong. We tried numerous times, actually throwing the rice, and there was always a tile with two, three, four, even five or more grains on it. Why? Why would grains “flung randomly” not arrange themselves into good order, equidistant from each other?

Because they land, precisely, by chance, and there are always disorderly grains that fall on tiles where others have already gathered. Suddenly the strange case of the five ill colleagues seemed very different. Five grains of rice falling on the same tile does not mean that the tile possesses some kind of “rice-attracting” force. Five people falling ill in a workplace did not mean that it must be contaminated. The institute where I worked was part of a university. We, know-all professors, had fallen into a gross statistical error. We had become convinced that the “above average” number of sick people required an explanation. Some had even gone elsewhere, changing jobs for no good reason.

Life is full of stories such as this. Insufficient understanding of statistics is widespread. The current pandemic has forced us all to engage in probabilistic reasoning, from governments having to recommend behaviour on the basis of statistical predictions, to people estimating the probability of catching the virus while taking part in common activities. Our extensive statistical illiteracy is today particularly dangerous.

We use probabilistic reasoning every day, and most of us have a vague understanding of averages, variability and correlations. But we use them in an approximate fashion, often making errors. Statistics sharpen and refine these notions, giving them a precise definition, allowing us to reliably evaluate, for instance, whether a medicine or a building is dangerous or not.

Society would gain significant advantages if children were taught the fundamental ideas of probability theory and statistics: in simple form in primary school, and in greater depth in secondary school….(More)”.
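
Rovelli's dinner-party experiment is easy to check numerically. Here is a minimal Monte Carlo sketch (not from the essay) that scatters 100 grains over 400 tiles and records the fullest tile:

```python
import random
from collections import Counter

def max_on_one_tile(grains: int = 100, tiles: int = 400) -> int:
    """Scatter `grains` grains over `tiles` tiles uniformly at random
    and return the largest count landing on any single tile."""
    counts = Counter(random.randrange(tiles) for _ in range(grains))
    return max(counts.values())

trials = 10_000
maxima = [max_on_one_tile() for _ in range(trials)]

# The average is only 0.25 grains per tile, yet clusters are routine.
for k in range(2, 6):
    share = sum(m >= k for m in maxima) / trials
    print(f"at least one tile with >= {k} grains: {share:.1%}")
```

With a mean of just a quarter-grain per tile, some tile holds two or more grains in nearly every throw, and three or more in roughly half of them. Clusters need no cause; that is the statistical point the five ill colleagues obscured.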

Airbnb’s Data ‘Portal’ Promises a Better Relationship With Cities


Article by Patrick Sisson: “When startups go public, a big part of the process is opening up their books and being more transparent about their business model. With global short-term rental giant Airbnb moving towards its own IPO, the company has introduced a new product that seeks to address recent safety concerns and answer the data-sharing requests that critics have long claimed make the company a less-than-perfect partner for local leaders. 

The Airbnb City Portal, which launched on Wednesday as a pilot program with 15 global cities and tourism agencies, aims to provide municipal staff with more efficient access to data about listings, including whether or not they’re complying with local laws. Each city, including Buffalo, San Francisco and Seattle, will have access to a new data dashboard as well as a dedicated staffer at Airbnb. Like so many of its sharing economy and Silicon Valley peers, Airbnb has had a contentious, and evolving, relationship with municipalities and local government ever since launching (an especially fraught situation in Europe, as an EU court just ruled in favor of city regulations of the site). 

At a time when so many tech platforms are wrestling, often unsuccessfully, with the need to moderate the behavior of bad actors who use the site, Airbnb’s City Portal is an attempt to “productize” how the home-sharing site works with local government, says Chris Lehane, Airbnb’s senior vice president for global policy and communications. It’s a more useful framework to access information and report violations, he says. And it delivers on the platform’s long-term goals around sharing data, paying taxes and working with cities on regulation. He frames the move as part of a balancing act around the security and safety responsibilities of local governments and a private global company.

The dashboard will also be useful for local tourism officials: It will provide visitor information, including city of origin and demographic information, that helps bureaus better target their advertising and marketing campaigns….(More)”

The State of Digital Democracy Isn’t As Dire As It Seems


Richard Gibson at the Hedgehog Review: “American society is prone, political theorist Langdon Winner wrote in 2005, to “technological euphoria,” each bout of which is inevitably followed by a period of letdown and reassessment. Perhaps in part for this reason, reviewing the history of digital democracy feels like watching the same movie over and over again. Even Winner’s point has that quality: He first made it in the mid-eighties and has repeated it in every decade since. In the same vein, Warren Yoder, longtime director of the Public Policy Center of Mississippi, responded to the Pew survey by arguing that we have reached the inevitable “low point” with digital technology—as “has happened many times in the past with pamphleteers, muckraking newspapers, radio, deregulated television.” (“Things will get better,” Yoder cheekily adds, “just in time for a new generational crisis beginning soon after 2030.”)

So one threat the present techlash poses is to obscure the ways that digital technology in fact serves many of the functions the visionaries imagined. We now take for granted the vast array of “Gov Tech”—meaning internal government digital upgrades—that makes our democracy go. We have become accustomed to the numerous government services that citizens can avail themselves of with a few clicks, a process spearheaded by the Clinton-Gore administration. We forget how revolutionary the “Internet campaign” of Howard Dean was at the 2004 Democratic primaries, establishing the Internet-based model of campaigning that all presidential candidates use to coordinate volunteer efforts and conduct fundraising, in both cases pulling new participants into the democratic process.

An honest assessment of the current state of digital democracy would acknowledge that the good jostles with the bad and the ugly. Social media has become the new hotspot for Rheingold’s “disinformocracy.” The president’s toxic tweeting continues, though Twitter has attempted recently to provide more oversight. At the same time, digital media have played a conspicuous role in the protests following George Floyd’s death, from the phone used to record his murder to the apps and Google docs used by the organizers of protests. The protests, too, have sparked fresh debate about facial recognition software (rightly one of the major concerns in the Pew report), leading Amazon to announce in June that it was “pausing” police use of its facial recognition software for one year. The city of Boston has made a similar move. Senator Sherrod Brown’s Data Accountability and Transparency Act of 2020, now circulating in draft form, would also limit the federal government’s use of “facial surveillance technology.”

We thus need to avoid summary judgments at this still-early date in the ongoing history of digital democracy. In a superb research paper on “The Internet and Engaged Citizenship” commissioned by the American Academy of Arts and Sciences last year, the political scientist David Karpf wisely concludes that the incredible velocity of “Internet Time” befuddles our attempts to state flatly what has or hasn’t happened to democratic practices and participation in our times. The 2016 election has rightly put many observers on guard. Yet there is a danger in living headline-by-headline. We must not forget how volatile the tech scene remains. That fact leads to Karpf’s hopeful conclusion: “The Internet of 2019 is not a finished product. The choices made by technologists, investors, policy-makers, lawyers, and engaged citizens will all shape what the medium becomes next.” The same can be said about digital technology in 2020: The landscape is still evolving….(More)“.

The ambitious effort to piece together America’s fragmented health data


Nicole Wetsman at The Verge: “From the early days of the COVID-19 pandemic, epidemiologist Melissa Haendel knew that the United States was going to have a data problem. There didn’t seem to be a national strategy to control the virus, and cases were springing up in sporadic hotspots around the country. With such a patchwork response, nationwide information about the people who got sick would probably be hard to come by.

Other researchers around the country were pinpointing similar problems. In Seattle, Adam Wilcox, the chief analytics officer at UW Medicine, was reaching out to colleagues. The city was the first US COVID-19 hotspot. “We had 10 times the data, in terms of just raw testing, than other areas,” he says. He wanted to share that data with other hospitals, so they would have that information on hand before COVID-19 cases started to climb in their area. Everyone wanted to get as much data as possible in the hands of as many people as possible, so they could start to understand the virus.

Haendel was in a good position to help make that happen. She’s the chair of the National Center for Data to Health (CD2H), a National Institutes of Health program that works to improve collaboration and data sharing within the medical research community. So one week in March, just after she’d started working from home and pulled her 10th grader out of school, she started trying to figure out how to use existing data-sharing projects to help fight this new disease.

The solution Haendel and CD2H landed on sounds simple: a centralized, anonymous database of health records from people who tested positive for COVID-19. Researchers could use the data to figure out why some people get very sick and others don’t, how conditions like cancer and asthma interact with the disease, and which treatments end up being effective.

But in the United States, building that type of resource isn’t easy. “The US healthcare system is very fragmented,” Haendel says. “And because we have no centralized healthcare, that makes it also the case that we have no centralized healthcare data.” Hospitals, citing privacy concerns, don’t like to give out their patients’ health data. Even if hospitals agree to share, they all use different ways of storing information. At one institution, the classification “female” could go into a record as one, and “male” could go in as two — and at the next, they’d be reversed….(More)”.
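
The inconsistent codings Haendel describes are a textbook harmonization problem: each site's local codes have to be translated into one shared vocabulary before records can be pooled. A toy sketch of that mapping layer (the site names and code tables here are invented for illustration; real projects map local records into a common data model before pooling):

```python
# Hypothetical site-specific code tables: same integers, opposite meanings.
SITE_SEX_CODES = {
    "hospital_a": {1: "female", 2: "male"},
    "hospital_b": {1: "male", 2: "female"},
}

def harmonize(record: dict, site: str) -> dict:
    """Translate one site's record into the shared vocabulary."""
    mapping = SITE_SEX_CODES[site]
    out = dict(record)
    out["sex"] = mapping[record["sex"]]
    return out

# The same raw code means different things depending on where it came from.
print(harmonize({"patient_id": "A-001", "sex": 1}, "hospital_a"))
# {'patient_id': 'A-001', 'sex': 'female'}
print(harmonize({"patient_id": "B-042", "sex": 1}, "hospital_b"))
# {'patient_id': 'B-042', 'sex': 'male'}
```

Multiply that one field by thousands of lab codes, drug names, and diagnosis systems across hundreds of hospitals, and the scale of the effort becomes clear.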