Privacy’s Blueprint: The Battle to Control the Design of New Technologies


Book by Woodrow Hartzog: “Every day, Internet users interact with technologies designed to undermine their privacy. Social media apps, surveillance technologies, and the Internet of Things are all built in ways that make it hard to guard personal information. And the law says this is okay because it is up to users to protect themselves—even when the odds are deliberately stacked against them.

In Privacy’s Blueprint, Woodrow Hartzog pushes back against this state of affairs, arguing that the law should require software and hardware makers to respect privacy in the design of their products. Current legal doctrine treats technology as though it were value-neutral: only the user decides whether it functions for good or ill. But this is not so. As Hartzog explains, popular digital tools are designed to expose people and manipulate users into disclosing personal information.

Against the often self-serving optimism of Silicon Valley and the inertia of tech evangelism, Hartzog contends that privacy gains will come from better rules for products, not users. The current model of regulating use fosters exploitation. Privacy’s Blueprint aims to correct this by developing the theoretical underpinnings of a new kind of privacy law responsive to the way people actually perceive and use digital technologies. The law can demand encryption. It can prohibit malicious interfaces that deceive users and leave them vulnerable. It can require safeguards against abuses of biometric surveillance. It can, in short, make the technology itself worthy of our trust….(More)”.

5 Tips for Launching (and Sustaining) a City Behavioral Design Team


Playbook by ideas42: “…To pave the way for other municipalities to start a Behavioral Design Team, we distilled years of rigorously tested results and real-world best practices into an open-source playbook for public servants at all levels of government. The playbook introduces readers to core concepts of behavioral design, indicates why and where a BDT can be effective, lays out the fundamental competencies and structures governments will need to set up a BDT, and provides guidance on how to successfully run one. It also includes several applicable examples from our New York and Chicago teams to illustrate the tangible impact behavioral science can have on citizens and outcomes.

Thinking about starting a BDT? Here are five tips for launching (and sustaining) a city behavioral design team. For more insights, read the full playbook.

Compose your team with care

While there is no exact formula, a well-staffed BDT needs expertise in three key areas: behavioral science, research and evaluation, and public policies and programs. You’ll rarely find all three in one person—hence the need to gather a team of people with complementary skills. Some key things to look for as you assemble your team: background in behavioral economics or social psychology, formal training in impact evaluation and statistics, and experience working in government positions or nonprofits that implement government programs.

Choose an anchor agency

To build momentum more quickly, consider identifying an “anchor” agency. A high-profile partner can help you establish credibility and can facilitate interactions with different departments across your government. Having an anchor agency legitimizes the BDT and helps reduce any apprehension among other agencies. The initial projects with the anchor agency will help others understand both what it means to work with the BDT and what kinds of outcomes to expect.

Establish your criteria for selecting projects

Once people are bought in and excited about innovating with behavioral science, the possible problems to tackle can seem limitless. Before selecting projects, set up clear criteria for prioritizing which problems most need attention and which are best suited to behavioral solutions. While the exact criteria will naturally vary from place to place, in the playbook we share the criteria the New York and Chicago BDTs use to prioritize potential undertakings and determine their viability, which other teams can use as a starting point.

Build buy-in with a mix of project types

If you run only randomized controlled trials (RCTs), which require lengthy implementation and data collection, it may be challenging to generate the buy-in and enthusiasm a BDT needs to thrive in its early days. That’s why incorporating some shorter engagements, such as design-only projects or pre-post evaluations, can help sustain momentum by quickly generating evidence—and demonstrate that your BDT gets results.
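To make “quickly generating evidence” concrete, here is a minimal sketch of the arithmetic behind a simple pre-post comparison. The completion rates, sample sizes, and significance threshold are all hypothetical; real BDT evaluations involve far more careful design.

```python
# Toy pre-post comparison: did completion rates improve after a redesign?
# All numbers are hypothetical and for illustration only.
from math import sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Z-test for the difference between two proportions."""
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (success_b / n_b - success_a / n_a) / se

# Before the redesign: 412 of 1,000 recipients completed the form.
# After the redesign: 498 of 1,000 did.
z = two_proportion_ztest(412, 1000, 498, 1000)
print(f"z = {z:.2f}")  # z ≈ 3.86; |z| > 1.96 suggests a real improvement
```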

Keep learning and growing

Applying behavioral design within government programs is still relatively novel. This open-source playbook provides guidance for starting a BDT, but constant learning and iterating should be expected! As BDTs mature and evolve, they must also become more ambitious in scope, particularly once the low-hanging fruit and other more obvious problems that help build buy-in and establish proof of concept have been addressed. The long-term goal of any successful BDT is to tackle the most challenging and impactful problems in government programs and policies head-on and use the solutions to help the people who need them most…(More)”

What Is Human-Centric Design?


Zack Quaintance at GovTech: “…Government services, like all services, have historically used some form of design to deploy user-facing components. The design portion of this equation is nothing new. What Olesund says is new, however, is the human-centric component.

“In the past, government services were often designed from the perspective and need of the government institution, not necessarily with the needs or desires of residents or constituents in mind,” said Olesund. “This might lead, for example, to an accumulation of steps and requirements for residents, or utilization of outdated technology because the government institution is locked into a contract.”

Basically, government has never set out to design its services to be clunky or hard to use. These qualities have, however, grown out of the legally complex frameworks that governments must adhere to, which can result in prioritizing the needs of the institution over those of the people actually using the services.

Change, however, is underway. Human-centric design is one of the main priorities of the U.S. Digital Service (USDS) and 18F, a pair of organizations created under the Obama administration with missions that largely involve making government services more accessible to the citizenry through efficient use of tech.

Although the needs of state and municipal governments are more localized, the gov tech work done at the federal level by the USDS and 18F has at times served as a benchmark or guidepost for smaller government agencies.

“They both redesign services to make them digital and user-friendly,” Olesund said. “But they also do a lot of work creating frameworks and best practices for other government agencies to adopt in order to achieve some of the broader systemic change.”

One of the most tangible examples of human-centered design at the state or local level can be found at Michigan’s Department of Health and Human Services, which recently worked with the Detroit-based design studio Civilla to reduce its paper services application from 40 pages, some 18,000 words and 1,000 questions down to 18 pages, 3,904 words and 213 questions. Currently, Civilla is working with the nonprofit civic tech group Code for America to help bring the same level of human-centered design progress to the state’s digital services.

Other work is underway in San Francisco’s City Hall and within the state of California. A number of cities also have iTeams funded through Bloomberg Philanthropies, with missions to innovate in ways that solve ongoing municipal problems, work that often requires human-centric design….(More)”.

How artificial intelligence is transforming the world


Report by Darrell West and John Allen at Brookings: “Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity….(More)

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Lessons from DataRescue: The Limits of Grassroots Climate Change Data Preservation and the Need for Federal Records Law Reform


Essay by Sarah Lamdan at the University of Pennsylvania Law Review: “Shortly after Donald Trump’s victory in the 2016 Presidential election, but before his inauguration, a group of concerned scholars organized in cities and college campuses across the United States, starting with the University of Pennsylvania, to prevent climate change data from disappearing from government websites. The move was led by Michelle Murphy, a scholar who had previously observed the destruction of climate change data and muzzling of government employees in Canadian Prime Minister Stephen Harper’s administration. The “guerrilla archiving” project soon swept the nation, drawing media attention as its volunteers scraped and preserved terabytes of climate change and other environmental data and materials from .gov websites. The archiving project felt urgent and necessary, as the federal government is the largest collector and archive of U.S. environmental data and information.

As it progressed, the guerrilla archiving movement became more defined: two organizations developed, the DataRefuge at the University of Pennsylvania and the Environmental Data & Governance Initiative (EDGI), a national network of academics and non-profits. These groups co-hosted data-gathering sessions called DataRescue events. I joined EDGI to help members work through administrative law concepts and file Freedom of Information Act (FOIA) requests. The day-long archiving events were immensely popular and widely covered by media outlets. Each weekend, hundreds of volunteers would gather to participate in DataRescue events in U.S. cities. I helped organize the New York DataRescue event, which was held less than a month after the initial event in Pennsylvania. We had to turn people away as hundreds of local volunteers lined up to help and dozens more arrived in buses and cars, exceeding the space constraints of NYU’s cavernous MakerSpace engineering facility. Despite the popularity of the project, however, DataRescue’s goals seemed far-fetched: how could thousands of private citizens learn the contours of multitudes of federal environmental information warehouses, gather the data from all of them, and then re-post the materials in a publicly accessible format?…(More)”.
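For readers wondering what “scraping and preserving” looked like in practice, the sketch below shows the basic harvest-and-checksum step such a volunteer workflow might include. The seed URL is hypothetical, and actual DataRescue events used far more robust harvesting and provenance tooling.

```python
# Minimal sketch of a scrape-and-preserve pass over a (hypothetical) .gov
# index page. Real DataRescue workflows used much more robust tooling.
import hashlib
import pathlib
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

SEED = "https://www.example.gov/climate-data/"  # hypothetical index page
OUT = pathlib.Path("rescue")
OUT.mkdir(exist_ok=True)

page = requests.get(SEED, timeout=30)
soup = BeautifulSoup(page.text, "html.parser")

# Download every linked CSV and record a checksum so the preserved copy
# can later be verified against the original.
for link in soup.find_all("a", href=True):
    if link["href"].lower().endswith(".csv"):
        url = urljoin(SEED, link["href"])
        data = requests.get(url, timeout=60).content
        name = url.rsplit("/", 1)[-1]
        (OUT / name).write_bytes(data)
        print(f"{name}\t{hashlib.sha256(data).hexdigest()}")
```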

Online gamers control trash-collecting water robot


Springwise: “Urban Rivers is a Chicago-based charity focused on cleaning up the city’s rivers and re-wilding bankside habitats. One of its most visible pieces of work is a floating habitat installed in the middle of the river that runs through the city. An immediate problem that arose after installation was the accumulation of trash. At first, the charity sent someone out on a kayak every other day to clean the habitat. Yet in less than a day, huge amounts of garbage would again be choking the space. The charity’s solution was to create a Trash Task Force. The outcome of the Task Force’s work is the TrashBot, a remote-controlled garbage-collecting robot. The TrashBot allows gamers all over the world to do their bit in cleaning up Chicago’s river.

Anyone interested in playing the cleaning game can sign up via the Urban Rivers website. Future development of the bot will likely focus on wildlife monitoring. The end goal of the game, meanwhile, is for no one to want to play it because there is no more garbage left to collect.

From crowdsourced ocean data gathered by the fins of surfers’ boards to a solar-powered autonomous drone that gathers waste from harbor waters, the health of the world’s waterways is being improved in a number of ways. The surfboard fins use sensors to monitor sea salinity, acidity levels and wave motion. Those are all important coastal ecosystem factors that could be affected by climate change. The water drones are intelligent and use on-board cameras and sensors to learn about their environment and avoid other craft as they collect garbage from rivers, canals and harbors….(More)”.

The use of Facebook by local authorities: a comparative analysis of the USA, UK and Spain


F. Javier Miranda, Antonio Chamorro and Sergio Rubio in Electronic Government: “Social networks have increased the ways in which public administrations can actively interact with the public. However, these new means of communication are not always used efficiently to create an open and two-way relationship. The purpose of this study is to analyse the presence on and use of the social network Facebook by large councils in the USA, UK and Spain. This research adapts the Facebook Assessment Index (FAI) to the field of local authorities. The index assesses three dimensions: popularity, content and interactivity. The results show that there is no relationship between the population of the municipality and the degree of use of Facebook by the council, but there are notable differences depending on the country. By creating this ranking, we are helping those responsible for this management to carry out benchmarking activities in order to improve their communication strategy on the social networks….(More)”.
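The excerpt does not give the FAI’s actual weighting, so the following is a purely illustrative sketch of how scores on the three dimensions might be combined into a composite ranking; the council names, scores, and equal weights are all assumptions.

```python
# Illustrative only: the excerpt does not specify the FAI's real formula,
# so this sketch simply averages normalized scores on its three dimensions.
def composite_index(popularity: float, content: float, interactivity: float) -> float:
    """Combine three scores in [0, 1] into one figure, equal weights assumed."""
    return (popularity + content + interactivity) / 3

councils = {  # hypothetical councils and scores
    "Council A": composite_index(0.82, 0.40, 0.55),
    "Council B": composite_index(0.35, 0.70, 0.62),
    "Council C": composite_index(0.61, 0.58, 0.47),
}
for name, score in sorted(councils.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```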

Obfuscating with transparency


Editorial by Jeremy Berg at Science: “Under the new policy, studies that do not fully meet transparency criteria would be excluded from use in EPA policy development. This proposal follows unsuccessful attempts to enact the Honest and Open New EPA Science Treatment (HONEST) Act and its predecessor, the Secret Science Reform Act. These approaches undervalue many scientific publications and limit the impact of valuable information in developing policies in the areas that the EPA regulates….In developing effective policies, earnest evaluations of facts and fair-minded assessments of the associated uncertainties are foundational. Policy discussions require an assessment of the likelihood that a particular observation is true and examinations of the short- and long-term consequences of potential actions or inactions, including a wide range of different sorts of costs. Those with training in making these judgments, with access to as much relevant information as possible, are crucial for this process. Of course, policy development requires considerations other than those related to science. Such discussions should follow clear assessment of, and access to, all of the available evidence. The scientific enterprise should stand up against efforts that distort initiatives aimed at improving scientific practice, just to pursue other agendas…(More)”.

Open Smart Cities in Canada: Environmental Scan and Case Studies


Report by Tracey Lauriault, Rachel Bloom, Carly Livingstone and Jean-Noé Landry: “This executive summary consolidates findings from a smart city environmental scan (E-Scan) and five case studies of smart city initiatives in Canada. The E-Scan entailed compiling and reviewing documents and definitions produced by smart city vendors, think tanks, associations, consulting firms, standards organizations, conferences and civil society organizations, as well as critical academic literature, government reports, marketing material, and specifications and requirements documents. This research was motivated by a desire to identify international shapers of smart cities and to better understand what differentiates a smart city from an Open Smart City….(More)”.

What if a nuke goes off in Washington, D.C.? Simulations of artificial societies help planners cope with the unthinkable


Mitchell Waldrop at Science: “…The point of such models is to avoid describing human affairs from the top down with fixed equations, as is traditionally done in such fields as economics and epidemiology. Instead, outcomes such as a financial crash or the spread of a disease emerge from the bottom up, through the interactions of many individuals, leading to a real-world richness and spontaneity that is otherwise hard to simulate.

That kind of detail is exactly what emergency managers need, says Christopher Barrett, a computer scientist who directs the Biocomplexity Institute at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg, which developed the NPS1 model for the government. The NPS1 model can warn managers, for example, that a power failure at point X might well lead to a surprise traffic jam at point Y. If they decide to deploy mobile cell towers in the early hours of the crisis to restore communications, NPS1 can tell them whether more civilians will take to the roads, or fewer. “Agent-based models are how you get all these pieces sorted out and look at the interactions,” Barrett says.
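As a toy illustration of that bottom-up approach (emphatically not the NPS1 model itself), the sketch below lets a population-level epidemic curve emerge purely from random individual contacts; every parameter is made up.

```python
# Toy agent-based model: an outbreak curve emerges from individual contacts
# rather than from a top-down equation. Parameters are illustrative only.
import random

random.seed(42)

N, STEPS, CONTACTS, P_INFECT, P_RECOVER = 1000, 60, 5, 0.04, 0.10
# Each agent is just a state: 'S'usceptible, 'I'nfected, or 'R'ecovered.
agents = ["I" if i < 5 else "S" for i in range(N)]

for step in range(STEPS):
    infected = [i for i, s in enumerate(agents) if s == "I"]
    for i in infected:
        # Each infected agent meets a few random others this step.
        for j in random.sample(range(N), CONTACTS):
            if agents[j] == "S" and random.random() < P_INFECT:
                agents[j] = "I"
        if random.random() < P_RECOVER:
            agents[i] = "R"
    if step % 10 == 0:
        print(f"step {step:2d}: infected = {agents.count('I')}")
```

Swap the uniform random mixing for a synthetic population with homes, roads, and workplaces, scale the agent count into the millions, and the same loop structure becomes the kind of simulation that keeps a computing cluster busy.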

The downside is that models like NPS1 tend to be big—each of the model’s initial runs kept a 500-microprocessor computing cluster busy for a day and a half—forcing the agents to be relatively simple-minded. “There’s a fundamental trade-off between the complexity of individual agents and the size of the simulation,” says Jonathan Pfautz, who funds agent-based modeling of social behavior as a program manager at the Defense Advanced Research Projects Agency in Arlington, Virginia.

But computers keep getting bigger and more powerful, as do the data sets used to populate and calibrate the models. In fields as diverse as economics, transportation, public health, and urban planning, more and more decision-makers are taking agent-based models seriously. “They’re the most flexible and detailed models out there,” says Ira Longini, who models epidemics at the University of Florida in Gainesville, “which makes them by far the most effective in understanding and directing policy.”

The roots of agent-based modeling go back at least to the 1940s, when computer pioneers such as Alan Turing experimented with locally interacting bits of software to model complex behavior in physics and biology. But the current wave of development didn’t get underway until the mid-1990s….(More)”.