The Automated Administrative State


Paper by Danielle Citron and Ryan Calo: “The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid to cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process”— understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems like the “no-fly” list were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice” in the form of efficiency with a “human face” feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusive promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts…(More)”.

This tech tells cities when floods are coming–and what they will destroy


Ben Paynter at FastCompany: “Several years ago, one of the eventual founders of One Concern nearly died in a tragic flood. Today, the company specializes in using artificial intelligence to predict how natural disasters are unfolding in real time on a city-block-level basis, in order to help disaster responders save as many lives as possible….

To fix that, One Concern debuted Flood Concern in late 2018. It creates map-based visualizations of where water surges may hit hardest, up to five days ahead of an impending storm. For cities, that includes not just time-lapse breakdowns of how the water will rise, how fast it could move, and what direction it will be flowing, but also what structures will get swamped or washed away, and how differing mitigation efforts–from levee building to dam releases–will impact each scenario. It’s the winner of Fast Company’s 2019 World Changing Ideas Awards in the AI and Data category.

So far, Flood Concern has been retroactively tested against events like Hurricane Harvey to show that it could have predicted what areas would be most impacted well ahead of the storm. The company, which was founded in Silicon Valley in 2015, started with one of that region’s pressing threats: earthquakes. It’s since earned contracts with cities like San Francisco, Los Angeles, and Cupertino, as well as private insurance companies….

One Concern’s first offering, dubbed Seismic Concern, takes existing information from satellite images and building permits to figure out what kind of ground structures are built on, and what might happen if they started shaking. If a big one hits, the program can extrapolate from the epicenter to suggest the likeliest places for destruction, and then adjust as more data from things like 911 calls and social media gets factored in….(More)”.


The Smart Enough City


Open Access Book by Ben Green: “Smart cities, where technology is used to solve every problem, are hailed as futuristic urban utopias. We are promised that apps, algorithms, and artificial intelligence will relieve congestion, restore democracy, prevent crime, and improve public services. In The Smart Enough City, Ben Green warns against seeing the city only through the lens of technology; taking an exclusively technical view of urban life will lead to cities that appear smart but under the surface are rife with injustice and inequality. He proposes instead that cities strive to be “smart enough”: to embrace technology as a powerful tool when used in conjunction with other forms of social change—but not to value technology as an end in itself….(More)”.

Artists as ‘Creative Problem-Solvers’ at City Agencies


Sophie Haigney at The New York Times: “Taja Lindley, a Brooklyn-based interdisciplinary artist and activist, will spend the next year doing an unconventional residency — she’ll be collaborating with the New York City Department of Health and Mental Hygiene, working on a project that deals with unequal birth outcomes and maternal mortality for pregnant and parenting black people in the Bronx.

Ms. Lindley is one of four artists who were selected this year for the City’s Public Artists in Residence program, or PAIR, which is managed by New York City’s Department of Cultural Affairs. The program, which began in 2015, matches artists and public agencies, and the artists are tasked with developing creative projects around social issues.

Ms. Lindley will be working with the Tremont Neighborhood Health Action Center, part of the department of health, in the Bronx. “People who are black are met with skepticism, minimized and dismissed when they seek health care,” Ms. Lindley said, “and the voices of black people can really shift medical practices and city practices, so I’ll really be centering those voices.” She said that performance, film and storytelling are likely to be incorporated in her project.

The other three artists selected this year are the artist Laura Nova, who will be in residence with the Department for the Aging; the artist Julia Weist, who will be in residence with the Department of Records and Information Services; and the artist Janet Zweig, who will be in residence with the Mayor’s Office of Sustainability. Each will receive $40,000. There is a three-month-long research phase and then the artists will spend a minimum of nine months creating and producing their work….(More)”.

Crowdsourced reports could save lives when the next earthquake hits


Charlotte Jee at MIT Technology Review: “When it comes to earthquakes, every minute counts. Knowing that one has hit—and where—can make the difference between staying inside a building and getting crushed, and running out and staying alive. This kind of timely information can also be vital to first responders.

However, the speed of early warning systems varies from country to country. In Japan and California, huge networks of sensors and seismic stations can alert citizens to an earthquake. But these networks are expensive to install and maintain. Earthquake-prone countries such as Mexico and Indonesia don’t have such an advanced or widespread system.

A cheap, effective way to help close this gap between countries might be to crowdsource earthquake reports and combine them with traditional detection data from seismic monitoring stations. The approach was described in a paper in Science Advances today.

The crowdsourced reports come from three sources: people submitting information using LastQuake, an app created by the Euro-Mediterranean Seismological Centre; tweets that refer to earthquake-related keywords; and the time and IP address data associated with visits to the EMSC website.

When this method was applied retrospectively to earthquakes that occurred in 2016 and 2017, the crowdsourced detections on their own were 85% accurate. Combining the technique with traditional seismic data raised accuracy to 97%. The crowdsourced system was faster, too. Around 50% of the earthquake locations were found in less than two minutes, a whole minute faster than with data provided only by a traditional seismic network.

When EMSC has identified a suspected earthquake, it sends out alerts via its LastQuake app asking users nearby for more information: images, videos, descriptions of the level of tremors, and so on. This can help first responders assess the level of damage….(More)”.
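
The fusion idea described in this excerpt lends itself to a brief illustration. The sketch below pools timestamped detections from apps, tweets, web traffic, and seismic stations, and flags a candidate event once enough weighted evidence accumulates in a short window; the data structure, two-minute window, and confidence weights are assumptions made for illustration, not details of EMSC’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # "app", "twitter", "web_traffic", or "seismic"
    lat: float
    lon: float
    timestamp: float   # seconds since some reference time
    confidence: float  # per-source weight; seismic picks count for more

def fuse_detections(detections, window_s=120, min_weight=1.0):
    """Group detections that arrive within a short time window and sum their
    confidence weights; flag a candidate earthquake when the combined weight
    crosses a threshold. A real system would also cluster spatially and model
    per-source reliability explicitly."""
    detections = sorted(detections, key=lambda d: d.timestamp)
    events, cluster = [], []
    for d in detections:
        if cluster and d.timestamp - cluster[0].timestamp > window_s:
            if sum(c.confidence for c in cluster) >= min_weight:
                events.append(cluster)
            cluster = []
        cluster.append(d)
    if cluster and sum(c.confidence for c in cluster) >= min_weight:
        events.append(cluster)
    return events

# A burst of app and Twitter reports plus one seismic pick within two minutes
reports = [
    Detection("app", 19.43, -99.13, 0.0, 0.2),
    Detection("twitter", 19.40, -99.20, 15.0, 0.1),
    Detection("seismic", 19.50, -99.10, 40.0, 0.8),
]
print(len(fuse_detections(reports)))  # -> 1 candidate event
```

In the paper’s results, the crowdsourced channels alone reached 85% accuracy and bought roughly a minute of speed, while seismic confirmation lifted accuracy to 97%.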

Data Can Help Students Maximize Return on Their College Investment


Blog by Jennifer Latson for Arnold Ventures: “When you buy a car, you want to know it will get you where you’re going. Before you invest in a certain model, you check its record. How does it do in crash tests? Does it have a history of breaking down? Are other owners glad they bought it?

Students choosing between college programs can’t do the same kind of homework. Much of the detailed data we demand when we buy a car isn’t available for postsecondary education — data such as how many students find jobs in the fields they studied, what they earn, how much debt they accumulate, and how quickly they repay it — yet choosing a college is a much more important financial decision.

The most promising solution to filling in the gaps, according to data advocates, is the College Transparency Act, which would create a secure, comprehensive national data network with information on college costs, graduation rates, and student career paths — and make this data publicly available. The bill, which will be discussed in Congress this year, has broad support from both Republicans and Democrats in the House and the Senate in part because it includes precautions to protect privacy and secure student data….

The data needed to answer questions about student success already exists but is scattered among various agencies and institutions: the Department of Education for data on student loan repayment; the Treasury Department for earnings information; and schools themselves for graduation rates.

“We can’t connect the dots to find out how these programs are serving certain students, and that’s because the Department of Education isn’t allowed to connect all the information these places have already collected,” says Amy Laitinen, director for higher education at New America, a think tank collaborating with IHEP to promote educational transparency.

And until recently, publicly available federal postsecondary data only included full-time students who’d never enrolled in a college program before, ignoring the more than half of the higher ed population made up of students who attend school part time or who transfer from one institution to another….(More)”.

Trustworthy Privacy Indicators: Grades, Labels, Certifications and Dashboards


Paper by Joel R. Reidenberg et al.: “Despite numerous groups’ efforts to score, grade, label, and rate the privacy of websites, apps, and network-connected devices, these attempts at privacy indicators have, thus far, not been widely adopted. Privacy policies, however, remain long, complex, and impractical for consumers. Communicating synthesized privacy content in some short-hand form is now crucial to empower internet users and provide them more meaningful notice, as well as to nudge consumers and data processors toward more meaningful privacy. Indeed, on the basis of these needs, the National Institute of Standards and Technology and the Federal Trade Commission in the United States, as well as lawmakers and policymakers in the European Union, have advocated for the development of privacy indicator systems.

Efforts to develop privacy grades, scores, labels, icons, certifications, seals, and dashboards have wrestled with various deficiencies and obstacles to wide-scale deployment as meaningful and trustworthy privacy indicators. This paper seeks to identify and explain the deficiencies and obstacles that have hampered past and current attempts. With these lessons, the article then offers criteria that will need to be established in law and policy for trustworthy indicators to be successfully deployed and adopted through technological tools. The lack of standardization prevents user recognizability and dependability in the online marketplace, diminishes the ability to create automated tools for privacy, and reduces incentives for consumers and industry to invest in privacy indicators. Flawed methods in the selection and weighting of privacy evaluation criteria, along with difficulties interpreting language that is often ambiguous and vague, jeopardize success and reliability when baked into an indicator of privacy protectiveness or invasiveness. Likewise, indicators fall short when the organizations rating or certifying privacy practices are not objective, trustworthy, and sustainable.

Nonetheless, trustworthy privacy rating systems that are meaningful, accurate, and adoptable can be developed to assure effective and enduring empowerment of consumers. This paper proposes a framework using examples from prior and current attempts to create privacy indicator systems in order to provide a valuable resource for present-day, real world policymaking….(More)”.
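
To make the paper’s point about criterion selection and weighting concrete, here is a toy scoring sketch. The criteria, scores, weights, and grade cutoffs are invented for illustration and are not drawn from the paper or any real rating scheme.

```python
# Toy illustration of how criterion selection and weighting drive a privacy
# "grade". The criteria, scores (0 = invasive, 1 = protective), weights, and
# cutoffs are invented for this sketch, not drawn from the paper or any real
# rating scheme.
CRITERIA = {
    "data_collection":     {"score": 0.4, "weight": 0.5},
    "third_party_sharing": {"score": 0.2, "weight": 0.3},
    "user_controls":       {"score": 0.9, "weight": 0.2},
}

def privacy_grade(criteria):
    """Weighted average of criterion scores, mapped to a letter grade."""
    total = sum(c["score"] * c["weight"] for c in criteria.values())
    weight_sum = sum(c["weight"] for c in criteria.values())
    score = total / weight_sum
    for cutoff, letter in [(0.8, "A"), (0.6, "B"), (0.4, "C"), (0.2, "D")]:
        if score >= cutoff:
            return letter
    return "F"

print(privacy_grade(CRITERIA))  # -> "C" under these weights
```

With these particular weights the hypothetical service earns a “C”; weighting user controls more heavily pushes it to a “B”, which is precisely the kind of methodological choice the authors argue must be standardized and made transparent.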

Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age


Report by Christina Nemr and William Gangware: “The proliferation of social media platforms has democratized the dissemination and consumption of information, thereby eroding traditional media hierarchies and undercutting claims of authority. In this environment, states and individuals can easily spread disinformation at lightning speed and with serious impact.

Today’s information ecosystem presents significant vulnerabilities that foreign states can exploit, and they revolve around three primary, interconnected elements:

  1. The medium – the platforms on which disinformation flourishes;
  2. The message – what is being conveyed through disinformation; and
  3. The audience – the consumers of such content.

The problem of disinformation is therefore not one that can be solved through any single solution, whether psychological or technological. An effective response to this challenge requires understanding the converging factors of technology, media, and human behavior.

This interdisciplinary review, commissioned by the United States Department of State’s Global Engagement Center, presents a holistic overview of the disinformation landscape by examining 1) psychological vulnerabilities to disinformation, 2) current foreign state-sponsored disinformation and propaganda efforts both abroad and in the United States, 3) social media companies’ efforts to counter disinformation, and 4) knowledge and technology gaps that remain….(More)”.

Nudging the dead: How behavioural psychology inspired Nova Scotia’s organ donation scheme


Joseph Brean at National Post: “Nova Scotia’s decision to presume people’s consent to donating their organs after death is not just a North American first. It is also the latest example of how deeply behavioural psychology has changed policy debates.

That is a rare achievement for science. Governments used to appeal to people’s sense of reason, religion, civic duty, or fear of consequences. Today, when they want to change how their citizens behave, they use psychological tricks to hack their minds.

Nudge politics, as it came to be known, has been an intellectual hit among wonks and technocrats ever since Daniel Kahneman won the Nobel Prize in 2002 for destroying the belief people make decisions based on good information and reasonable expectations. Not so, he showed. Not even close. Human decision-making is an organic process, all but immune to reason, but strangely susceptible to simple environmental cues, just waiting to be exploited by a clever policymaker….

Organ donation is a natural fit. Nova Scotia’s experiment aims to solve a policy problem by getting people to do what they always tend to do about government requests — nothing.

The cleverness is evident in the N.S. government’s own words, which play on the meaning of “opportunity”: “Every Nova Scotian will have the opportunity to be an organ and tissue donor unless they opt out.” The policy applies to kidneys, pancreas, heart, liver, lungs, small bowel, cornea, sclera, skin, bones, tendons and heart valves.

It is so clever it aims to make progress as people ignore it. The default position is a positive for the policy. It assumes poor pickup. You can opt out of organ donation if you want. Nova Scotia is simply taking the informed gamble that you probably won’t. That is the goal, and it will make for a revealing case study.

Organ donation is an important question, and chronically low donation rates can reasonably be called a crisis. But most people make their personal choice “thoughtlessly,” as Kahneman wrote in the 2011 book Thinking, Fast and Slow.

He referred to European statistics which showed vast differences in organ donation rates between neighbouring and culturally similar countries, such as Sweden and Denmark, or Germany and Austria. The key difference, he noted, was what he called “framing effects,” or how the question was asked….(More)”.

What if You Could Vote for President Like You Rate Uber Drivers?


Essay by Guru Madhavan and Charles Phelps: “…Some experimental studies have begun to offer insights into the benefits of making voting methods—and the very goals of voting—more expressive. In the 2007 French presidential election, for instance, people were offered the chance to participate in an experimental ballot that allowed them to use letter grades to evaluate the candidates just as professors evaluate students. This approach, called the “majority judgment,” provides a clear method to combine those grades into rankings or a final winner. But instead of merely selecting a winner, majority judgment conveys—with a greater degree of expressivity—the voters’ evaluations of their choices. In this experiment, people completed their ballots in about a minute, thus allaying potential concerns that a letter grading system was too complicated to use. What’s more, they seemed more enthusiastic about this method. Scholars Michel Balinski and Rida Laraki, who led this study, point out: “Indeed, one of the most effective arguments for persuading reluctant voters to participate was that the majority judgment allows fuller expression of opinion.”

Additional experiments with more expressive ballots have now been repeated across different countries and elections. According to a 2018 summary of these experiments by social choice theorist Annick Laruelle, “While ranking all candidates appears to be difficult … participants enjoy the possibility of choosing a grade for each candidate … [and] ballots with three grades are preferred to those … with two grades.” Some participant comments are revealing, stating, “With this ballot we can at last vote with the heart,” or, “Voting with this ballot is a relief.” Voters, according to Laruelle, “Enjoyed the option of voting in favor of several candidates and were especially satisfied of being offered the opportunity to vote against candidates.”…

These opportunities for expression might increase public interest in (and engagement with) democratic decision making, encouraging more thoughtful candidate debates, more substantive election campaigns and advertisements, and richer use of opinion polling to help candidates shape their position statements (once they are aware that the public’s selection process has changed). One could even envision that the basis for funding election campaigns might evolve if funders focused on policy ideas rather than political allegiances and specific candidates. Changes such as these would ideally put the power back in the hands of the people, where it actually belongs in a democracy. These conjectures need to be tested and retested across contexts, ideally through field experiments that leverage research and expertise in engineering, social choice, and political and behavioral sciences.

Standard left-to-right political scales and the way we currently vote do not capture the true complexity of our evolving political identities and preferences. If voting is indeed the true instrument of democracy and much more than a repeated political ritual, it must allow for richer expression. Current methods seem to discourage public participation, the very nucleus of civic life. The essence of civility and democracy is not merely about providing issues and options to vote on but in enabling people to fully express their preferences. For a country founded on choice as its tenet, is it too much to ask for a little bit more choice in how we select our leaders? …(More)”.
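
As a coda to the majority-judgment discussion above, here is a minimal sketch of how graded ballots can be tallied: each candidate’s majority grade is the median of the grades received, and ties are broken by successively removing median grades, a simplified rendering of Balinski and Laraki’s procedure. The candidates and ballots are hypothetical.

```python
# Minimal sketch of majority-judgment tallying: every voter grades every
# candidate, a candidate's "majority grade" is the median grade received, and
# ties are broken by successively removing median grades (a simplified form
# of Balinski and Laraki's procedure). Candidates and ballots are hypothetical.
GRADE_ORDER = ["F", "D", "C", "B", "A"]  # worst to best

def majority_grade_sequence(grades):
    """Return the successive median grades as indices into GRADE_ORDER;
    comparing these sequences lexicographically yields the majority ranking."""
    ranked = sorted(grades, key=GRADE_ORDER.index)  # worst to best
    sequence = []
    while ranked:
        median = ranked[(len(ranked) - 1) // 2]  # lower median
        sequence.append(GRADE_ORDER.index(median))
        ranked.remove(median)
    return sequence

def majority_judgment_winner(ballots):
    """ballots maps candidate name -> list of letter grades from all voters."""
    return max(ballots, key=lambda c: majority_grade_sequence(ballots[c]))

ballots = {
    "Candidate X": ["A", "B", "B", "C", "F"],  # majority grade: B
    "Candidate Y": ["A", "A", "C", "D", "D"],  # majority grade: C
}
print(majority_judgment_winner(ballots))  # -> Candidate X
```

Because voters grade every candidate independently, the tally preserves far more information than a single mark for one name, which is the fuller expression the essay is arguing for.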