Discrimination by Design


Lena Groeger at ProPublica: “A few weeks ago, Snapchat released a new photo filter. It appeared alongside many of the other such face-altering filters that have become a signature of the service. But instead of surrounding your face with flower petals or giving you the nose and ears of a Dalmatian, the filter added slanted eyes, puffed cheeks and large front teeth. A number of Snapchat users decried the filter as racist, saying it mimicked a “yellowface” caricature of Asians. The company countered that they meant to represent anime characters and deleted the filter within a few hours.

“Snapchat is the prime example of what happens when you don’t have enough people of color building a product,” wrote Bay Area software engineer Katie Zhu in an essay about deleting the app and leaving the service. In a tech world that hires mostly white men, the absence of diverse voices means that companies can be blind to design decisions that are hurtful to their customers or discriminatory.

A Snapchat spokesperson told ProPublica that the company has recently hired someone to lead their diversity recruiting efforts.

But this isn’t just Snapchat’s problem. Discriminatory design and decision-making affects all aspects of our lives: from the quality of our health care and education to where we live to what scientific questions we choose to ask. It would be impossible to cover them all, so we’ll focus on the more tangible and visual design that humans interact with every day.

You can’t talk about discriminatory design without mentioning city planner Robert Moses, whose public works projects shaped huge swaths of New York City from the 1930s through the 1960s. The physical design of the environment is a powerful tool when it’s used to exclude and isolate specific groups of people. And Moses’ design choices have had lasting discriminatory effects that are still felt in modern New York.

A notorious example: Moses designed a number of Long Island parkway overpasses to be so low that buses could not drive under them. This effectively kept the poor and people of color, who tend to rely more heavily on public transportation, out of Long Island. And the low bridges continue to wreak havoc in other ways: 64 collisions were recorded in 2014 alone (here’s a bad one).

The design of bus systems, railways, and other forms of public transportation has a history riddled with racial tensions and prejudiced policies. In the 1990s, the Los Angeles Bus Riders Union went to court over the racial inequity it saw in the city’s public transportation system. The Union alleged that L.A.’s Metropolitan Transportation Authority spent “a disproportionately high share of its resources on commuter rail services, whose primary users were wealthy non-minorities, and a disproportionately low share on bus services, whose main patrons were low income and minority residents.” The landmark case was settled through a court-ordered consent decree that placed strict limits on transit funding and forced the MTA to invest over $2 billion in the bus system.

Of course, the design of a neighborhood is more than just infrastructure. Zoning laws and regulations that determine how land is used or what schools children go to have long been used as a tool to segregate communities. All too often, the end result of zoning is that low-income, often predominantly black and Latino communities are isolated from most of the resources and advantages of wealthy white communities….(More)”

Crowdsourced map of safe drinking water


Springwise: “Just over two years ago, in April 2014, city officials in Flint, Michigan decided to save costs by switching the city’s water supply from Lake Huron to the Flint River. Because of the switch, residents of the town and their children were exposed to dangerous levels of lead. Much of the population suffered the effects of lead poisoning, including skin lesions, hair loss, depression and anxiety and, in severe cases, permanent brain damage. Media attention, although focused at first, inevitably died down. To avoid future similar disasters, Sean Montgomery, a neuroscientist and the CEO of technology company Connected Future Labs, set up CitizenSpring.

CitizenSpring is an app that enables individuals to test their water supply using readily available water testing kits. Users hold a test strip underneath running water, hold the strip to a smartphone camera and press the button. The app then reveals the results of the test, cataloguing them and storing them in the cloud in the form of a digital map. Using what Montgomery describes as “computer vision,” the app is able to detect lead levels in a given water source and confirm whether they exceed the Environmental Protection Agency’s “safe” threshold. The idea is that communities can inform themselves about their own and nearby water supplies so that they can act as guardians of their own health. “It’s an impoverished data problem,” says Montgomery. “We don’t have enough data. By sharing the results of test[s], people can, say, find out if they’re testing a faucet that hasn’t been tested before.”
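That workflow (read a concentration from the strip, compare it against the EPA threshold, log the result to a shared map) can be sketched roughly as follows. This is an illustration only, not CitizenSpring’s actual code: the 15 parts-per-billion figure is the EPA’s action level for lead in drinking water, and every function and field name here is hypothetical.

```python
# Rough sketch of a CitizenSpring-style check: flag a strip reading against
# the EPA lead action level and package it for upload to a shared map.
# All names are hypothetical; this is not the app's actual code.
from dataclasses import dataclass
from datetime import datetime, timezone

EPA_LEAD_ACTION_LEVEL_PPB = 15.0  # EPA action level for lead in drinking water


@dataclass
class WaterTestResult:
    latitude: float
    longitude: float
    lead_ppb: float           # concentration estimated from the test strip
    exceeds_epa_level: bool
    tested_at: str


def record_test(latitude: float, longitude: float, lead_ppb: float) -> WaterTestResult:
    """Compare a reading against the EPA threshold and timestamp it for the map."""
    return WaterTestResult(
        latitude=latitude,
        longitude=longitude,
        lead_ppb=lead_ppb,
        exceeds_epa_level=lead_ppb > EPA_LEAD_ACTION_LEVEL_PPB,
        tested_at=datetime.now(timezone.utc).isoformat(),
    )


# A faucet reading of 22 ppb would be flagged as exceeding the threshold.
print(record_test(latitude=43.01, longitude=-83.69, lead_ppb=22.0).exceeds_epa_level)
```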

CitizenSpring narrowly missed its funding target on Kickstarter. However, collective monitoring can work. We have already seen the power of communities harnessed to crowdsource pollution data in the EU and map conflict zones through user-submitted camera footage….(More)”

Against transparency


 at Vox: “…Digital storage is pretty cheap and easy, so maybe the next step in open government is ubiquitous surveillance of public servants paired with open access to the recordings.

As a journalist and an all-around curious person, I can’t deny there’s something appealing about this.

Historians, too, would surely love to know everything that President Obama and his top aides said to one another regarding budget negotiations with John Boehner rather than needing to rely on secondhand news accounts influenced by the inevitable demands of spin. By the same token, historians surely would wish that there were a complete and accurate record of what was said at the Constitutional Convention of 1787, which instead famously operated under a policy of anonymous discussions.

But we should be cautioned by James Madison’s opinion that “no Constitution would ever have been adopted by the convention if the debates had been public.”

His view, which seems sensible, is that public or recorded debates would have been simply exercises in position-taking rather than deliberation, with each delegate playing to his base back home rather than working toward a deal.

“Had the members committed themselves publicly at first, they would have afterwards supposed consistency required them to maintain their ground,” Madison wrote, “whereas by secret discussion no man felt himself obliged to retain his opinions any longer than he was satisfied of their propriety and truth, and was open to the force of argument.”

The example comes to me by way of Cass Sunstein, who formerly held a position as a top regulatory czar in Obama’s White House, and who delivered a fascinating talk on the subject of government transparency at a June 2016 Columbia symposium on the occasion of the anniversary of the Freedom of Information Act.

Sunstein asks us to distinguish between disclosure of the government’s outputs and disclosure of the government’s inputs. Output disclosure is something like the text of the Constitution, or the Obama administration’s decision to have Medicare break with decades of practice and begin publishing information about what it pays to hospitals and other health providers.

Input disclosure would be something like the transcript of the debates at the Constitutional Convention or a detailed record of the arguments inside the Obama administration over whether to release the Medicare data. Sunstein’s argument is that it is a mistake to simply conflate the two ideas of disclosure under one broad heading of “transparency” when considerations around the two are very different.

Public officials need to have frank discussions

The fundamental problem with input disclosure is that in addition to serving as a deterrent to misconduct, it serves as a deterrent to frankness and honesty.

There are a lot of things that colleagues might have good reason to say to one another in private that would nonetheless be very damaging if they went viral on Facebook:

  • Healthy brainstorming processes often involve tossing out bad or half-baked ideas in order to stimulate thought and elevate better ones.
  • A realistic survey of options may require a blunt assessment of the strengths and weaknesses of different members of the team or of outside groups that would be insulting if publicized.
  • Policy decisions need to be made with political sustainability in mind, but part of making a politically sustainable policy decision is that you don’t come out and say you made the decision with politics in mind.
  • Someone may want to describe an actual or potential problem in vivid terms to spur action, without wanting to provoke public panic or hysteria through public discussion.
  • If a previously embarked-upon course of action isn’t working, you may want to quietly change course rather than publicly admit failure.

Journalists are, of course, interested in learning about all such matters. But it’s precisely because such things are genuinely interesting that making disclosure inevitable is risky.

Ex post facto disclosure of discussions whose participants didn’t realize they would be disclosed would be fascinating and useful. But after a round or two of disclosure, the atmosphere would change. Instead of peeking in on a real decision-making process, you would have every meeting dominated by the question “what will this look like on the home page of Politico?”…(More)”

Situation vacant: technology triathletes wanted


Anne-Marie Slaughter in the Financial Times: “It is time to celebrate a new breed of triathletes, who work in technology. When I was dean of the public affairs school at Princeton, I would tell students to aim to work in the public, private and civic sectors over the course of their careers.

Solving public problems requires collaboration among government, business and civil society. Aspiring problem solvers need the culture and language of all three sectors and to develop a network of contacts in each.

The public problems we face, in the US and globally, require lawyers, economists and issue experts but also technologists. A lack of technologists capable of setting up HealthCare.gov, a website designed to implement the Affordable Care Act, led President Barack Obama to create the US Digital Service, which deploys Swat tech teams to address specific problems in government agencies.

But functioning websites that deliver government services effectively are only the most obvious technological need for the public sector.

Government can reinvent how it engages with citizens entirely, for example by personalising public education with digital feedback or training jobseekers. But where to find the talent? The market for engineers, designers and project managers sees big tech companies competing for graduates from the world’s best universities.

Governments can offer only a fraction of those salaries, combined with a rigid work environment, ingrained resistance to innovation and none of the amenities and perks so dear to Silicon Valley.

Government’s comparative advantage, however, is mission and impact, which is precisely what Todd Park sells… Still, demand outstrips supply…. The goal is to create an ecosystem for public interest technology comparable to that in public interest law. In the latter, a number of American philanthropists created role models, educational opportunities and career paths for aspiring lawyers who want to change the world.

That process began in the 1960s, and today every great law school has a public interest programme with scholarships for the most promising students. Many branches of government take on top law school graduates. Public interest lawyers coming out of government find jobs with think-tanks and advocacy organisations and take up research fellowships, often at the law schools that educated them. When they need to pay the mortgage or send their kids to college, they can work at large law firms with pro bono programmes…. We need much more. Every public policy school at a university with a computer science, data science or technology design programme should follow suit. Every think-tank should also become a tech tank. Every non-governmental organisation should have at least one technologist on staff. Every tech company should have a pro bono scheme rewarding public interest work….(More)”

“Data-Driven Policy”: San Francisco just showed us how it should work.


abhi nemani at Medium: “…Auto collisions with bikes (and also pedestrians) pose a real threat to the safety and wellbeing of residents. But more than temporary injuries, auto collisions with bikes and pedestrians can kill people. And they do, at an alarming rate. According to the city, “Every year in San Francisco, about 30 people lose their lives and over 200 more are seriously injured while traveling on city streets.”…

Problem -> Data Analysis

The city government, in good fashion, made a commitment to do something about it. But in better fashion, they decided to do so in a data-driven way. And they tasked the Department of Public Health, in collaboration with the Department of Transportation, to develop policy. What’s impressive is that instead of some blanket policy or mandate, they opted to study the problem, take a nuanced approach, and put data first.

SF High Injury Network

The SF team ran a series of data-driven analytics to determine the causes of these collisions. They developed TransBase to continuously map and visualize traffic incidents throughout the city. Using this platform, they then developed the “high injury network” — the key places where most problems happen; or as they put it, “to identify where the most investments in engineering, education and enforcement should be focused to have the biggest impact in reducing fatalities and severe injuries.” It turns out that just 12 percent of intersections account for 70 percent of major injuries. This is using data to make what might seem like an intractable problem tractable….
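That concentration finding (a small share of intersections accounting for most severe injuries) is the kind of result a simple Pareto-style pass over collision records produces. A minimal sketch follows, assuming a table of collisions with an intersection identifier and a severe-injury count; the column names are hypothetical and this is not the city’s actual TransBase pipeline.

```python
# Hypothetical sketch of a "high injury network" style analysis: find the
# smallest set of intersections covering a target share of severe injuries.
import pandas as pd


def high_injury_network(collisions: pd.DataFrame, target_share: float = 0.70) -> pd.DataFrame:
    """Smallest set of intersections whose severe injuries reach `target_share` of the total."""
    by_intersection = (
        collisions.groupby("intersection_id")["severe_injuries"]
        .sum()
        .sort_values(ascending=False)
    )
    cumulative_share = by_intersection.cumsum() / by_intersection.sum()
    n_needed = int((cumulative_share >= target_share).to_numpy().argmax()) + 1
    network = by_intersection.iloc[:n_needed]
    print(f"{n_needed / len(by_intersection):.0%} of intersections account for "
          f"{cumulative_share.iloc[n_needed - 1]:.0%} of severe injuries")
    return network.reset_index()


# Usage with a tiny, invented extract of collision records:
collisions = pd.DataFrame({
    "intersection_id": ["6th & Market", "6th & Market", "Octavia & Market", "Quiet & Ave"],
    "severe_injuries": [3, 2, 4, 1],
})
print(high_injury_network(collisions))
```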

Data Analysis -> Policy

So now what? Well, this month, Mayor Ed Lee signed an executive directive to challenge the city to implement these findings under the banner of “Vision Zero”: a goal of reducing auto/pedestrian/bike collision deaths to zero by 2024….

Fortunately, San Francisco took the next step: they put their data to work.

Policy -> Implementation

This week, the city of San Francisco announced plans to build its first “Protected Intersection”:

“Protected intersections use a simple design concept to make everyone safer. Under this configuration, features like concrete islands placed at the corners slow turning cars and physically separate people biking and driving. They also position turning drivers at an angle that makes it easier for them to see and yield to people walking and biking crossing their path.”

That’s apparently just the start: plans are underway for other intersections, protected bike lanes, and more. Biking and walking in San Francisco is about to become much safer. (Though maybe not easier: the hills — they’re the worst.)

***

There is ample talk of “Data-Driven Policy” — indeed, I’ve written about it myself — but too often we get lost in the abstract or theoretical….(More)”

How the Federal Government is thinking about Artificial Intelligence


Mohana Ravindranath at NextGov: “Since May, the White House has been exploring the use of artificial intelligence and machine learning for the public: that is, how the federal government should be investing in the technology to improve its own operations. The technologies, often modeled after the way humans take in, store and use new information, could help researchers find patterns in genetic data or help judges decide sentences for criminals based on their likelihood of ending up in prison again, among other applications. …

Here’s a look at how some federal groups are thinking about the technology:

  • Police data: At a recent White House workshop, Office of Science and Technology Policy Senior Adviser Lynn Overmann said artificial intelligence could help police departments comb through hundreds of thousands of hours of body-worn camera footage, potentially identifying the police officers who are good at de-escalating situations. It also could help cities determine which individuals are likely to end up in jail or prison, so that officials could rethink programs accordingly. For example, if there’s a large overlap between substance abuse and jail time, public health organizations might decide to focus their efforts on helping people reduce their substance abuse to keep them out of jail.
  • Explainable artificial intelligence: The Pentagon’s research and development agency is looking for technology that can explain to analysts how it makes decisions. If people can’t understand how a system works, they’re not likely to use it, according to a broad agency announcement from the Defense Advanced Research Projects Agency. Intelligence analysts who might rely on a computer for recommendations on investigative leads must “understand why the algorithm has recommended certain activity,” as do employees overseeing autonomous drone missions.
  • Weather detection: The Coast Guard recently posted its intent to sole-source a contract for technology that could autonomously gather information about traffic, crosswind, and aircraft emergencies. That technology contains built-in artificial intelligence technology so it can “provide only operational relevant information.”
  • Cybersecurity: The Air Force wants to make cyber defense operations as autonomous as possible, and is looking at artificial intelligence that could potentially identify or block attempts to compromise a system, among others.

While there are endless applications in government, computers won’t completely replace federal employees anytime soon….(More)”

How Tech Giants Are Devising Real Ethics for Artificial Intelligence


For years, science-fiction moviemakers have been making us fear the bad things that artificially intelligent machines might do to their human creators. But for the next decade or two, our biggest concern is more likely to be that robots will take away our jobs or bump into us on the highway.

Now five of the world’s largest tech companies are trying to create a standard of ethics around the creation of artificial intelligence. While science fiction has focused on the existential threat of A.I. to humans, researchers at Google’s parent company, Alphabet, and those from Amazon, Facebook, IBM and Microsoft have been meeting to discuss more tangible issues, such as the impact of A.I. on jobs, transportation and even warfare.

Tech companies have long overpromised what artificially intelligent machines can do. In recent years, however, the A.I. field has made rapid advances in a range of areas, from self-driving cars and machines that understand speech, like Amazon’s Echo device, to a new generation of weapons systems that threaten to automate combat.

The specifics of what the industry group will do or say — even its name — have yet to be hashed out. But the basic intention is clear: to ensure that A.I. research is focused on benefiting people, not hurting them, according to four people involved in the creation of the industry partnership who are not authorized to speak about it publicly.

The importance of the industry effort is underscored in a report issued on Thursday by a Stanford University group funded by Eric Horvitz, a Microsoft researcher who is one of the executives in the industry discussions. The Stanford project, called the One Hundred Year Study on Artificial Intelligence, lays out a plan to produce a detailed report on the impact of A.I. on society every five years for the next century….The Stanford report attempts to define the issues that citizens of a typical North American city will face in computers and robotic systems that mimic human capabilities. The authors explore eight aspects of modern life, including health care, education, entertainment and employment, but specifically do not look at the issue of warfare….(More)”

White House, Transportation Dept. want help using open data to prevent traffic crashes


Samantha Ehlinger in FedScoop: “The Transportation Department is looking for public input on how to better interpret and use data on fatal crashes after 2015 data revealed a startling spike of 7.2 percent more deaths in traffic accidents that year.

Looking for new solutions that could prevent more deaths on the roads, the department released three months earlier than usual the 2015 open dataset about each fatal crash. With it, the department and the White House announced a call to action for people to use the data set as a jumping off point for a dialogue on how to prevent crashes, as well as understand what might be causing the spike.

“What we’re ultimately looking for is getting more people engaged in the data … matching this with other publicly available data, or data that the private sector might be willing to make available, to dive in and to tell these stories,” said Bryan Thomas, communications director for the National Highway Traffic Safety Administration, to FedScoop.

One striking statistic was that “pedestrian and pedalcyclist fatalities increased to a level not seen in 20 years,” according to a DOT press release. …

“We want folks to be engaged directly with our own data scientists, so we can help people through the dataset and help answer their questions as they work their way through, bounce ideas off of us, etc.,” Thomas said. “We really want to be accessible in that way.”

He added that as ideas “come to fruition,” there will be opportunities to present what people have learned.

“It’s a very, very rich data set, there’s a lot of information there,” Thomas said. “Our own ability is, frankly, limited to investigate all of the questions that you might have of it. And so we want to get the public really diving in as well.”…

Here are the questions “worth exploring,” according to the call to action:

  • How might improving economic conditions around the country change how Americans are getting around? What models can we develop to identify communities that might be at a higher risk for fatal crashes?
  • How might climate change increase the risk of fatal crashes in a community?
  • How might we use studies of attitudes toward speeding, distracted driving, and seat belt use to better target marketing and behavioral change campaigns?
  • How might we monitor public health indicators and behavior risk indicators to target communities that might have a high prevalence of behaviors linked with fatal crashes (drinking, drug use/addiction, etc.)? What countermeasures should we create to address these issues?”…(More)”
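As a starting point for the kind of exploration NHTSA is inviting, a short sketch of loading the fatality data and breaking out pedestrian and pedalcyclist deaths is below. The file path and column names are placeholders rather than the real FARS field codes, which the published data documentation defines.

```python
# Hypothetical starting point for exploring the 2015 fatal-crash open data:
# count pedestrian and pedalcyclist deaths by month. The CSV path and the
# column names are placeholders; the real FARS files define their own codes.
import pandas as pd

persons = pd.read_csv("fars_2015_person.csv")  # placeholder path

# Assume a person-level table with a person_type label and an injury flag.
vulnerable = persons[persons["person_type"].isin(["pedestrian", "pedalcyclist"])]
fatalities = vulnerable[vulnerable["injury_severity"] == "fatal"]

print(fatalities.groupby("crash_month").size().sort_index())

# The same frame can be joined against other public data (weather, census,
# behavioral-risk surveys) keyed on state or county codes to probe the
# questions listed above.
```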

Make Data Sharing Routine to Prepare for Public Health Emergencies


Jean-Paul Chretien, Caitlin M. Rivers, and Michael A. Johansson in PLOS Medicine: “In February 2016, Wellcome Trust organized a pledge among leading scientific organizations and health agencies encouraging researchers to release data relevant to the Zika outbreak as rapidly and widely as possible [1]. This initiative echoed a September 2015 World Health Organization (WHO) consultation that assessed data sharing during the recent West Africa Ebola outbreak and called on researchers to make data publicly available during public health emergencies [2]. These statements were necessary because the traditional way of communicating research results—publication in peer-reviewed journals, often months or years after data collection—is too slow during an emergency.

The acute health threat of outbreaks provides a strong argument for more complete, quick, and broad sharing of research data during emergencies. But the Ebola and Zika outbreaks suggest that data sharing cannot be limited to emergencies without compromising emergency preparedness. To prepare for future outbreaks, the scientific community should expand data sharing for all health research….

Open data deserves recognition and support as a key component of emergency preparedness. Initiatives to facilitate discovery of datasets and track their use [40–42]; provide measures of academic contribution, including data sharing that enables secondary analysis [43]; establish common platforms for sharing and integrating research data [44]; and improve data-sharing capacity in resource-limited areas [45] are critical to improving preparedness and response.

Research sponsors, scholarly journals, and collaborative research networks can leverage these new opportunities with enhanced data-sharing requirements for both nonemergency and emergency settings. A proposal to amend the International Health Regulations with clear codes of practice for data sharing warrants serious consideration [46]. Any new requirements should allow scientists to conduct and communicate the results of secondary analyses, broadening the scope of inquiry and catalyzing discovery. Publication embargo periods, such as one under consideration for genetic sequences of pandemic-potential influenza viruses [47], may lower barriers to data sharing but may also slow the timely use of data for public health.

Integrating open science approaches into routine research should make data sharing more effective during emergencies, but this evolution is more than just practice for emergencies. The cause and context of the next outbreak are unknowable; research that seems routine now may be critical tomorrow. Establishing openness as the standard will help build the scientific foundation needed to contain the next outbreak.

Recent epidemics were surprises—Zika and chikungunya sweeping through the Americas; an Ebola epidemic with more than 10,000 deaths; the emergence of severe acute respiratory syndrome and Middle East respiratory syndrome; and an influenza pandemic (influenza A[H1N1]pdm09) originating in Mexico—and we can be sure there are more surprises to come. Opening all research provides the best chance to accelerate discovery and development that will help during the next surprise….(More)”

The ‘who’ and ‘what’ of #diabetes on Twitter


Mariano Beguerisse-Díaz, Amy K. McLennan, Guillermo Garduño-Hernández, Mauricio Barahona, and Stanley J. Ulijaszek at arXiv: “Social media are being increasingly used for health promotion. Yet the landscape of users and messages in such public fora is not well understood. So far, studies have typically focused either on people suffering from a disease, or on agencies that address it, but have not looked more broadly at all the participants in the debate and discussions. We study the conversation about diabetes on Twitter through the systematic analysis of a large collection of tweets containing the term ‘diabetes’, as well as the interactions between their authors. We address three questions: (1) what themes arise in these messages?; (2) who talks about diabetes and in what capacity?; and (3) which type of users contribute to which themes? To answer these questions, we employ a mixed-methods approach, using techniques from anthropology, network science and information retrieval. We find that diabetes-related tweets fall within broad thematic groups: health information, news, social interaction, and commercial. Humorous messages and messages with references to popular culture appear constantly over time, more than any other type of tweet in this corpus. Top ‘authorities’ are found consistently across time and comprise bloggers, advocacy groups and NGOs related to diabetes, as well as stockmarket-listed companies with no specific diabetes expertise. These authorities fall into seven interest communities in their Twitter follower network. In contrast, the landscape of ‘hubs’ is diffuse and fluid over time. We discuss the implications of our findings for public health professionals and policy makers. Our methods are generally applicable to investigations where similar data are available….(More)”
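The “authorities” and “hubs” referred to in the abstract are the standard roles in link analysis of a directed network, in the spirit of Kleinberg’s HITS algorithm. A minimal sketch of computing such scores over a follower graph with networkx follows; the toy edge list is invented for illustration and is not the study’s Twitter data.

```python
# Minimal illustration of hub/authority scoring on a directed follower graph,
# in the spirit of the 'hubs' and 'authorities' discussed above (HITS).
# The edge list is invented; it is not the study's data.
import networkx as nx

# An edge (a, b) means account `a` follows account `b`.
follower_edges = [
    ("user1", "diabetes_ngo"), ("user2", "diabetes_ngo"), ("user3", "diabetes_ngo"),
    ("user1", "health_blogger"), ("user2", "health_blogger"),
    ("user3", "pharma_co"), ("diabetes_ngo", "health_blogger"),
]
G = nx.DiGraph(follower_edges)

hubs, authorities = nx.hits(G, max_iter=1000, normalized=True)

top_authorities = sorted(authorities.items(), key=lambda kv: kv[1], reverse=True)[:3]
for account, score in top_authorities:
    print(f"{account}: authority score {score:.3f}")
```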