Ethics & Algorithms Toolkit


Toolkit: “Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk.

GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit for cities to use to help them understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them….We developed this because:

  • We saw a gap. There are many calls to arms and lots of policy papers, one of which was a DataSF research paper, but nothing practitioner-facing with a repeatable, manageable process.
  • We wanted an approach governments are already familiar with: risk management. By identifying and quantifying levels of risk, we can recommend specific mitigations….(More)”.

Urban Science: Putting the “Smart” in Smart Cities


Introduction to Special Issue on Urban Modeling and Simulation by Shade T. Shutters: “Increased use of sensors and social data collection methods have provided cities with unprecedented amounts of data. Yet, data alone is no guarantee that cities will make smarter decisions, and many of what we call smart cities would be more accurately described as data-driven cities.

Parallel advances in theory are needed to make sense of those novel data streams, and computationally intensive decision-support models are needed to guide decision makers through the avalanche of new data. Fortunately, extraordinary increases in computational ability and data availability in the last two decades have led to revolutionary advances in the simulation and modeling of complex systems.

Techniques such as agent-based modeling and system dynamics modeling have taken advantage of these advances to make major contributions to disciplines as diverse as personalized medicine, computational chemistry, social dynamics, and behavioral economics. Urban systems, with dynamic webs of interacting human, institutional, environmental, and physical systems, are particularly suited to the application of these advanced modeling and simulation techniques. Contributions to this special issue highlight the use of such techniques and are particularly timely as an emerging science of cities begins to crystallize….(More)”.
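To make the agent-based modeling idea concrete, here is a minimal, self-contained toy in Python (not drawn from the special issue itself): a one-dimensional Schelling-style model in which agents relocate when too few of their neighbors share their type. All parameters and the grid layout are invented for illustration.

```python
import random

def happy(grid, i, threshold=0.5):
    # An agent is happy if at least `threshold` of its occupied
    # neighbors (left/right) share its type.
    neighbors = [grid[j] for j in (i - 1, i + 1)
                 if 0 <= j < len(grid) and grid[j] is not None]
    if not neighbors:
        return True
    same = sum(1 for n in neighbors if n == grid[i])
    return same / len(neighbors) >= threshold

def step(grid, rng):
    # Each unhappy agent moves to a randomly chosen empty cell.
    for i in range(len(grid)):
        if grid[i] is not None and not happy(grid, i):
            empties = [j for j, c in enumerate(grid) if c is None]
            if empties:
                j = rng.choice(empties)
                grid[j], grid[i] = grid[i], None
    return grid

rng = random.Random(0)
grid = ['A', 'B', 'A', 'B', None, 'A', None, 'B', 'A', None]
for _ in range(10):
    step(grid, rng)
print(grid)
```

Even this ten-cell sketch shows the hallmark of agent-based models: macro-level patterns (clustering) emerge from simple local rules rather than from any top-down specification.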

Legitimacy in Global Governance: Sources, Processes, and Consequences


Book edited by Jonas Tallberg, Karin Backstrand, and Jan Aart Scholte: “Legitimacy is central for the capacity of global governance institutions to address problems such as climate change, trade protectionism, and human rights abuses. However, despite legitimacy’s importance for global governance, its workings remain poorly understood. That is the core concern of this volume: to develop an agenda for systematic and comparative research on legitimacy in global governance. In complementary fashion, the chapters address different aspects of the overarching question: whether, why, how, and with what consequences global governance institutions gain, sustain, and lose legitimacy?

The volume makes four specific contributions. First, it argues for a sociological approach to legitimacy, centered on perceptions of legitimate global governance among affected audiences. Second, it moves beyond the traditional focus on states as the principal audience for legitimacy in global governance and considers a full spectrum of actors from governments to citizens. Third, it advocates a comparative approach to the study of legitimacy in global governance, and suggests strategies for comparison across institutions, issue areas, countries, societal groups, and time. Fourth, the volume offers the most comprehensive treatment so far of the sociological legitimacy of global governance, covering three broad analytical themes: (1) sources of legitimacy, (2) processes of legitimation and delegitimation, and (3) consequences of legitimacy…(More)”.

Creative Placemaking and Community Safety: Synthesizing Cross-Cutting Themes


Mark Treskon, Sino Esthappan, Cameron Okeke and Carla Vasquez-Noriega at the Urban Institute: “This report synthesizes findings from four cases where stakeholders are using creative placemaking to improve community safety. It presents cross-cutting themes from these case studies to show how creative placemaking techniques can be used from the conception and design stage through construction and programming, and how they can build community safety by promoting empathy and understanding, influencing law and policy, providing career opportunities, supporting well-being, and advancing the quality of place. It also discusses implementation challenges, and presents evaluative techniques of particular relevance for stakeholders working to understand the effects of these programs….(More)”.

Making Wage Data Work: Creating a Federal Resource for Evidence and Transparency


Christina Pena at the National Skills Coalition: “Administrative data on employment and earnings, commonly referred to as wage data or wage records, can be used to assess the labor market outcomes of workforce, education, and other programs, providing policymakers, administrators, researchers, and the public with valuable information. However, there is no single readily accessible federal source of wage data which covers all workers. Noting the importance of employment and earnings data to decision makers, the Commission on Evidence-Based Policymaking called for the creation of a single federal source of wage data for statistical purposes and evaluation. They recommended three options for further exploration: expanding access to systems that already exist at the U.S. Census Bureau or the U.S. Department of Health and Human Services (HHS), or creating a new database at the U.S. Department of Labor (DOL).

This paper reviews current coverage and allowable uses, as well as federal and state actions required to make each option viable as a single federal source of wage data that can be accessed by government agencies and authorized researchers. Congress and the President, in conjunction with relevant federal and state agencies, should develop one or more of those options to improve wage information for multiple purposes. Although not assessed in the following review, financial as well as privacy and security considerations would influence the viability of each scenario. Moreover, if a system like the Commission-recommended National Secure Data Service for sharing data between agencies comes to fruition, then a wage system might require additional changes to work with the new service….(More)”

Uninformed Consent


Leslie K. John at Harvard Business Review: “…People are bad at making decisions about their private data. They misunderstand both costs and benefits. Moreover, natural human biases interfere with their judgment. And whether by design or accident, major platform companies and data aggregators have structured their products and services to exploit those biases, often in subtle ways.

Impatience. People tend to overvalue immediate costs and benefits and underweight those that will occur in the future. They want $9 today rather than $10 tomorrow. On the internet, this tendency manifests itself in a willingness to reveal personal information for trivial rewards. Free quizzes and surveys are prime examples. …
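The $9-today-versus-$10-tomorrow pattern is often modeled with quasi-hyperbolic ("beta-delta") discounting. The sketch below is illustrative only; the parameter values are invented, not taken from the article.

```python
def utility(amount, delay_days, beta=0.8, delta=0.99):
    """Quasi-hyperbolic discounted utility: immediate rewards are
    undiscounted, while any delayed reward takes an extra `beta` penalty
    on top of exponential discounting by `delta` per day."""
    if delay_days == 0:
        return amount
    return beta * (delta ** delay_days) * amount

# Today vs. tomorrow: the smaller, sooner reward wins.
print(utility(9, 0), utility(10, 1))    # 9.0 vs. ~7.92

# The same one-day trade-off pushed 30 days out: preference reverses,
# and the larger, later reward wins.
print(utility(9, 30), utility(10, 31))
```

The reversal is the key point: the same person who grabs $9 today over $10 tomorrow will happily wait an extra day when both options are a month away, which is exactly the impatience that trivial-reward disclosures exploit.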

The endowment effect. In theory, people should be willing to pay the same amount to buy a good as they’d demand when selling it. In reality, people typically value a good less when they have to buy it. A similar dynamic can be seen when people make decisions about privacy….

Illusion of control. People operate under the misapprehension that they can control chance processes. This explains why, for example, study subjects valued lottery tickets that they had personally selected more than tickets that had been randomly handed to them. People also confuse the superficial trappings of control with real control….

Desire for disclosure. This is not a decision-making bias. Rather, humans have what appears to be an innate desire, or even need, to share with others. After all, that’s how we forge relationships — and we’re inherently social creatures…

False sense of boundaries. In off-line contexts, people naturally understand and comply with social norms about discretion and interpersonal communication. Though we may be tempted to gossip about someone, the norm “don’t talk behind people’s backs” usually checks that urge. Most of us would never tell a trusted confidant our secrets when others are within earshot. And people’s reactions in the moment can make us quickly scale back if we disclose something inappropriate….(More)”.

United Nations accidentally exposed passwords and sensitive information to the whole internet


Micah Lee at The Intercept: “The United Nations accidentally published passwords, internal documents, and technical details about websites when it misconfigured popular project management service Trello, issue tracking app Jira, and office suite Google Docs.

The mistakes made sensitive material available online to anyone with the proper link, rather than only to specific users who should have access. Affected data included credentials for a U.N. file server, the video conferencing system at the U.N.’s language school, and a web development environment for the U.N.’s Office for the Coordination of Humanitarian Affairs. Security researcher Kushagra Pathak discovered the accidental leak and notified the U.N. about what he found a little over a month ago. As of today, much of the material appears to have been taken down.

In an online chat, Pathak said he found the sensitive information by running searches on Google. The searches, in turn, produced public Trello pages, some of which contained links to the public Google Docs and Jira pages.

Trello projects are organized into “boards” that contain lists of tasks called “cards.” Boards can be public or private. After finding one public Trello board run by the U.N., Pathak found additional public U.N. boards by using “tricks like by checking if the users of one Trello board are also active on some other boards and so on.” One U.N. Trello board contained links to an issue tracker hosted on Jira, which itself contained even more sensitive information. Pathak also discovered links to documents hosted on Google Docs and Google Drive that were configured to be accessible to anyone who knew their web addresses. Some of these documents contained passwords….Here is just some of the sensitive information that the U.N. accidentally made accessible to anyone who Googled for it:

  • A social media team promoting the U.N.’s “peace and security” efforts published credentials to access a U.N. remote file access, or FTP, server in a Trello card coordinating promotion of the International Day of United Nations Peacekeepers. It is not clear what information was on the server; Pathak said he did not connect to it.
  • The U.N.’s Language and Communication Programme, which offers language courses at U.N. Headquarters in New York City, published credentials for a Google account and a Vimeo account. The program also exposed, on a publicly visible Trello board, credentials for a test environment for a human resources web app. It also made public a Google Docs spreadsheet, linked from a public Trello board, that included a detailed meeting schedule for 2018, along with passwords to remotely access the program’s video conference system to join these meetings.
  • One public Trello board used by the developers of Humanitarian Response and ReliefWeb, both websites run by the U.N.’s Office for the Coordination of Humanitarian Affairs, included sensitive information like internal task lists and meeting notes. One public card from the board had a PDF, marked “for internal use only,” that contained a map of all U.N. buildings in New York City. …(More)”.

Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet


We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.

Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms — including:

  • Real-time and archived information about targeted political advertising;
  • Clear accountability for the social impact of automated decision-making;
  • Explicit indicators for the presence of non-human accounts in digital media.

Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized — especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:

  • Consumer control over data through stronger rights to access and removal;
  • Transparency for users about the full extent of data usage, and meaningful consent;
  • Stronger enforcement with resources and authority for agency rule-making.

Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:

  • Stronger oversight of mergers and acquisitions;
  • Antitrust reform including new enforcement regimes, levies, and essential services regulation;
  • Robust data portability and interoperability between services.

There are no single-solution approaches to the problem of digital disinformation that are likely to change outcomes. … Awareness and education are the first steps toward organizing and action to build a new social contract for digital democracy….(More)”

How AI Addresses Unconscious Bias in the Talent Economy


Announcement by Bob Schultz at IBM: “The talent economy is one of the great outcomes of the digital era — and the ability to attract and develop the right talent has become a competitive advantage in most industries. According to a recent IBM study, which surveyed over 2,100 Chief Human Resource Officers, 33 percent of CHROs believe AI will revolutionize the way they do business over the next few years. In that same study, 65 percent of CEOs expect that people skills will have a strong impact on their businesses over the next several years. At IBM, we see AI as a tremendous untapped opportunity to transform the way companies attract, develop, and build the workforce for the decades ahead.

Consider this: The average hiring manager has hundreds of applicants a day for key positions and spends approximately six seconds on each resume. The ability to make the right decision without analytics and AI’s predictive abilities is limited and has the potential to create unconscious bias in hiring.

That is why today, I am pleased to announce the rollout of IBM Watson Recruitment’s Adverse Impact Analysis capability, which identifies instances of bias related to age, gender, race, education, or previous employer by assessing an organization’s historical hiring data and highlighting potential unconscious biases. This capability empowers HR professionals to take action against potentially biased hiring trends — and in the future, choose the most promising candidate based on the merit of their skills and experience alone. This announcement is part of IBM’s largest ever AI toolset release, tailor made for nine industries and professions where AI will play a transformational role….(More)”.

Computers Can Solve Your Problem. You May Not Like The Answer


David Scharfenberg at the Boston Globe: “Years of research have shown that teenagers need their sleep. Yet high schools often start very early in the morning. Starting them later in Boston would require tinkering with elementary and middle school schedules, too — a Gordian knot of logistics, pulled tight by the weight of inertia, that proved impossible to untangle.

Until the computers came along.

Last year, the Boston Public Schools asked MIT graduate students Sébastien Martin and Arthur Delarue to build an algorithm that could do the enormously complicated work of changing start times at dozens of schools — and rerouting the hundreds of buses that serve them….
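The real Boston system solved a large routing-and-scheduling optimization, but the flavor of the problem can be sketched with a toy greedy assignment: give each school its most-preferred start time while buses (slot capacity) last. The school names, slots, and capacities below are invented, and this greedy heuristic is only a stand-in for the MIT students' actual method.

```python
def assign_start_times(schools, slot_capacity):
    """Toy greedy sketch: each school, in order, gets its highest-ranked
    start slot that still has bus capacity. Mutates slot_capacity."""
    assignment = {}
    for name, prefs in schools:
        for slot in prefs:
            if slot_capacity.get(slot, 0) > 0:
                assignment[name] = slot
                slot_capacity[slot] -= 1
                break
    return assignment

schools = [
    ("Adams",   ["8:30", "7:30", "9:30"]),   # preference order
    ("Baldwin", ["8:30", "9:30", "7:30"]),
    ("Curley",  ["8:30", "7:30", "9:30"]),
]
capacity = {"7:30": 1, "8:30": 2, "9:30": 1}  # buses available per slot

plan = assign_start_times(schools, capacity)
print(plan)  # {'Adams': '8:30', 'Baldwin': '8:30', 'Curley': '7:30'}
```

Note how the toy already exposes the politics: someone has to be Curley and start at 7:30, and which school that is depends on ordering choices buried inside the algorithm, which is precisely the kind of distributional question the article turns to next.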

The algorithm was poised to put Boston on the leading edge of a digital transformation of government. In New York, officials were using a regression analysis tool to focus fire inspections on the most vulnerable buildings. And in Allegheny County, Pa., computers were churning through thousands of health, welfare, and criminal justice records to help identify children at risk of abuse….

While elected officials tend to legislate by anecdote and oversimplify the choices that voters face, algorithms can chew through huge amounts of complicated information. The hope is that they’ll offer solutions we’ve never imagined — much as Google Maps, when you’re stuck in traffic, puts you on an alternate route, down streets you’ve never traveled.

Dataphiles say algorithms may even allow us to filter out the human biases that run through our criminal justice, social service, and education systems. And the MIT algorithm offered a small window into that possibility. The data showed that schools in whiter, better-off sections of Boston were more likely to have the school start times that parents prize most — between 8 and 9 a.m. The mere act of redistributing start times, if aimed at solving the sleep deprivation problem and saving money, could bring some racial equity to the system, too.

Or, the whole thing could turn into a political disaster.

District officials expected some pushback when they released the new school schedule on a Thursday night in December, with plans to implement in the fall of 2018. After all, they’d be messing with the schedules of families all over the city.

But no one anticipated the crush of opposition that followed. Angry parents signed an online petition and filled the school committee chamber, turning the plan into one of the biggest crises of Mayor Marty Walsh’s tenure. The city summarily dropped it. The failure would eventually play a role in the superintendent’s resignation.

It was a sobering moment for a public sector increasingly turning to computer scientists for help in solving nagging policy problems. What had gone wrong? Was it a problem with the machine? Or was it a problem with the people — both the bureaucrats charged with introducing the algorithm to the public, and the public itself?…(More)”