Nudging Citizens through Technology in Smart Cities


Sofia Ranchordas in the International Review of Law, Computers & Technology: “In the last decade, several smart cities throughout the world have started employing Internet of Things, big data, and algorithms to nudge citizens to save more water and energy, live healthily, use public transportation, and participate more actively in local affairs. Thus far, the potential and implications of data-driven nudges and behavioral insights in smart cities have remained an overlooked subject in the legal literature. Nevertheless, combining technology with behavioral insights may allow smart cities to nudge citizens more systematically and help these urban centers achieve their sustainability goals and promote civic engagement. For example, in Boston, real-time feedback on driving has increased road safety and in Eindhoven, light sensors have been used to successfully reduce nightlife crime and disturbance. While nudging tends to be well-intended, data-driven nudges raise a number of legal and ethical issues. This article offers a novel and interdisciplinary perspective on nudging which delves into the legal, ethical, and trust implications of collecting and processing large amounts of personal and impersonal data to influence citizens’ behavior in smart cities….(More)”.

Twentieth Century Town Halls: Architecture of Democracy


Book by Jon Stewart: “This is the first book to examine the development of the town hall during the twentieth century and the way in which these civic buildings have responded to the dramatic political, social and architectural changes which took place during the period. Following an overview of the history of the town hall as a building type, it examines the key themes, variations and lessons which emerged during the twentieth century. This is followed by 20 case studies from around the world which include plans, sections and full-colour illustrations. Each of the case studies examines the town hall’s procurement, the selection of its architect and the building design, and critically analyses its success and contribution to the type’s development. The case studies include:

Copenhagen Town Hall, Denmark, Martin Nyrop

Stockholm City Hall, Sweden, Ragnar Ostberg

Hilversum Town Hall, the Netherlands, Willem M. Dudok

Walthamstow Town Hall, Britain, Philip Dalton Hepworth

Oslo Town Hall, Norway, Arnstein Arneberg and Magnus Poulsson

Casa del Fascio, Como, Italy, Giuseppe Terragni

Aarhus Town Hall, Denmark, Arne Jacobsen with Eric Moller

Saynatsalo Town Hall, Finland, Alvar Aalto

Kurashiki City Hall, Japan, Kenzo Tange

Toronto City Hall, Canada, Viljo Revell

Boston City Hall, USA, Kallmann, McKinnell and Knowles

Dallas City Hall, USA, IM Pei

Mississauga City Hall, Canada, Ed Jones and Michael Kirkland

Borgoricco Town Hall, Italy, Aldo Rossi

Reykjavik City Hall, Iceland, Studio Granda

Valdelaguna Town Hall, Spain, Victor Lopez Cotelo and Carlos Puente Fernandez

The Hague City Hall, the Netherlands, Richard Meier

Iragna Town Hall, Switzerland, Raffaele Cavadini

Murcia City Hall, Spain, Jose Rafael Moneo

London City Hall, UK, Norman Foster…(More)”.

Weather Service prepares to launch prediction model many forecasters don’t trust


Jason Samenow in the Washington Post: “In a month, the National Weather Service plans to launch its “next generation” weather prediction model with the aim of “better, more timely forecasts.” But many meteorologists familiar with the model fear it is unreliable.

The introduction of a model in which forecasters lack confidence matters, considering the enormous impact that weather has on the economy, valued at around $485 billion annually.

The Weather Service announced Wednesday that the model, known as the GFS-FV3 (FV3 stands for Finite-Volume Cubed-Sphere dynamical core), is “tentatively” set to become the United States’ primary forecast model on March 20, pending tests. It is an update to the current version of the GFS (Global Forecast System), popularly known as the American model, which has existed in various forms for more than 30 years….

A concern is that if forecasters cannot rely on the FV3, they will be left to rely only on the European model for their predictions without a credible alternative for comparisons. And they’ll also have to pay large fees for the European model data. Whereas model data from the Weather Service is free, the European Center for Medium-Range Weather Forecasts, which produces the European model, charges for access.

But there is an alternative perspective, which is that forecasters will just need to adjust to the new model and learn to account for its biases. That is, a little short-term pain is worth the long-term potential benefits as the model improves….

The Weather Service’s parent agency, the National Oceanic and Atmospheric Administration, recently entered an agreement with the National Center for Atmospheric Research to increase collaboration between forecasters and researchers in improving forecast modeling.

In addition, President Trump recently signed into law the Weather Research and Forecast Innovation Act Reauthorization, which establishes the NOAA Earth Prediction Innovation Center, aimed at further enhancing prediction capabilities. But even while NOAA develops relationships and infrastructure to improve the Weather Service’s modeling, the question remains whether the FV3 can meet the forecasting needs of the moment. Until the problems identified are addressed, its introduction could represent a step back in U.S. weather prediction despite a well-intended effort to leap forward….(More).

Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice


Paper by Rashida Richardson, Jason Schultz, and Kate Crawford: “Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used and even occasionally access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.

In our research, we examine the implications of using dirty data with predictive policing, and look at jurisdictions that (1) have utilized predictive policing systems and (2) have done so while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices. In particular, we examine the link between unlawful and biased police practices and the data used to train or implement these systems across thirteen case studies. We highlight three of these: (1) Chicago, an example where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices suggests an extremely high risk that dirty data was or will be used in any predictive policing application; and (3) Maricopa County, where, despite extensive evidence of dirty policing practices, a lack of transparency and public accountability surrounding predictive policing inhibits the public from assessing the risks of dirty data within such systems. The implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed, biased, and unlawful predictions which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system. Thus, for any jurisdiction where police have been found to engage in such practices, the use of predictive policing in any context must be treated with skepticism and mechanisms for the public to examine and reject such systems are imperative….(More)”.

Should Libraries Be the Keepers of Their Cities’ Public Data?


Linda Poon at CityLab: “In recent years, dozens of U.S. cities have released pools of public data. It’s an effort to improve transparency and drive innovation, and done well, it can succeed at both: Governments, nonprofits, and app developers alike have eagerly gobbled up that data, hoping to improve everything from road conditions to air quality to food delivery.

But what often gets lost in the conversation is the idea of how public data should be collected, managed, and disseminated so that it serves everyone—rather than just a few residents—and so that people’s privacy and data rights are protected. That’s where librarians come in.

“As far as how private and public data should be handled, there isn’t really a strong model out there,” says Curtis Rogers, communications director for the Urban Libraries Council (ULC), an association of leading libraries across North America. “So to have the library as the local institution that is the most trusted, and to give them that responsibility, is a whole new paradigm for how data could be handled in a local government.”

In fact, librarians have long been advocates of digital inclusion and literacy. That’s why, last month, ULC launched a new initiative to give public libraries a leading role in a future with artificial intelligence. They kicked it off with a working group meeting in Washington, D.C., where representatives from libraries in cities like Baltimore, Toronto, Toledo, and Milwaukee met to exchange ideas on how to achieve that through education and by taking on a larger role in data governance.

It’s a broad initiative, and Rogers says they are still in the beginning stages of determining what that role will ultimately look like. But the group will discuss how data should be organized and managed, hash out the potential risks of artificial intelligence, and eventually develop a field-wide framework for how libraries can help drive equitable public data policies in cities.

Already, individual libraries are involved with their city’s data. Chattanooga Public Library (which wasn’t part of the working group, but is a member of ULC) began hosting the city’s open data portal in 2014, turning a traditionally print-centered institution into a community data hub. Since then, the portal has added more than 280 data sets and garnered hundreds of thousands of page views, according to a report for the 2018 fiscal year….

The Toronto Public Library is also in a unique position because it may soon sit inside one of North America’s “smartest” cities. Last month, the city’s board of trade published a 17-page report titled “BiblioTech,” calling for the library to oversee data governance for all smart city projects.

It’s a grand example of just how big the potential is for public libraries. Ryan says the proposal remains just that at the moment, and there are no details yet on what such a model would even look like. She adds that they were not involved in drafting the proposal, and were only asked to provide feedback. But the library is willing to entertain the idea.

Such ambitions would be a large undertaking in the U.S., however, especially for smaller libraries that are already understaffed and under-resourced. According to ULC’s survey of its members, only 23 percent of respondents said they have a staff person designated as the AI lead. A little over a quarter said they even have AI-related educational programming, and just 15 percent report being part of any local or national initiative.

Debbie Rabina, a professor of library science at Pratt Institute in New York, also cautions that putting libraries in charge of data governance has to be carefully thought out. It’s one thing for libraries to teach data literacy and privacy, and to help cities disseminate data. But to go further than that—to have libraries collecting and owning data and to have them assessing who can and can’t use the data—can lead to ethical conflicts and unintended consequences that could erode the public’s trust….(More)”.

Democracy Beyond Voting and Protests


Sasha Fisher at Project Syndicate: “For over a decade now, we have witnessed more elections and, simultaneously, less democracy. According to Bloomberg, elections have been occurring more frequently around the world. Yet Freedom House finds that some 110 countries have experienced declines in political and civil rights over the past 13 years.

As democracy declines, so does our sense of community. In the United States, this is evidenced by a looming loneliness epidemic and the rapid disappearance of civic institutions such as churches, eight of which close every day. And though these trends are global in nature, the US exemplifies them in the extreme.

This is no coincidence. As Alexis de Tocqueville pointed out in the 1830s, America’s founders envisioned a country governed not by shared values, but by self-interest. That vision has since defined America’s institutions, and fostered a hyper-individualistic society.

Growing distrust in governing institutions has fueled a rise in authoritarian populist movements around the world. Citizens are demanding individual economic security and retreating into an isolationist mentality. ...

And yet we know that “user engagement” works, as shown by countless studies and human experiences. For example, an evaluation conducted in Uganda found that the more citizens participated in the design of health programs, the more the perception of the health-care system improved. And in Indonesia, direct citizen involvement in government decision-making has led to higher satisfaction with government services....

While the Western world suffers from over-individualization, the most notable governance and economic innovations are taking place in the Global South. In Rwanda, for example, the government has introduced policies to encourage grassroots solutions that strengthen citizens’ sense of community and shared accountability. Through monthly community-service meetings, families and individuals work together to build homes for the needy, fix roads, and pool funds to invest in better farming practices and equipment.

Imagine if over 300 million Americans convened every month for a similar purpose. There would suddenly be billions more citizen hours invested in neighbor-to-neighbor interaction and citizen action.

This was one of the main effects of the Village Savings and Loan Associations that originated in the Democratic Republic of Congo. Within communities, members have access to loans to start small businesses and save for a rainy day. The model works because it leverages neighbor-to-neighbor accountability. Likewise, from Haiti to Liberia to Burundi and beyond, community-based health systems have proven effective precisely because health workers know their neighbors and their needs. Community health workers go from home to home, checking in on pregnant mothers and making sure they are cared for. Each of these solutions uses and strengthens communal accountability through shared engagement – not traditional vertical accountability lines.

If we believe in the democratic principle that governments must be accountable to citizens, we should build systems that hold us accountable to each other – and we must engage beyond elections and protests. We must usher in a new era of community-driven democracy – power must be decentralized and placed in the hands of families and communities.

When we achieve community-driven democracy, we will engage with one another and with our governments – not just on special occasions, but continuously, because our democracy and freedom depend on us….(More)” (See also Index on Trust in Institutions)

7 things we’ve learned about computer algorithms


Aaron Smith at Pew Research Center: “Algorithms are all around us, using massive stores of data and complex analytics to make decisions with often significant impacts on humans – from choosing the content people see on social media to judging whether a person is a good credit risk or job candidate. Pew Research Center released several reports in 2018 that explored the role and meaning of algorithms in people’s lives today. Here are some of the key themes that emerged from that research.

  1. Algorithmically generated content platforms play a prominent role in Americans’ information diets. Sizable shares of U.S. adults now get news on sites like Facebook or YouTube that use algorithms to curate the content they show to their users. A study by the Center found that 81% of YouTube users say they at least occasionally watch the videos suggested by the platform’s recommendation algorithm, and that these recommendations encourage users to watch progressively longer content as they click through the videos suggested by the site.
  2. The inner workings of even the most common algorithms can be confusing to users. Facebook is among the most popular social media platforms, but roughly half of Facebook users – including six-in-ten users ages 50 and older – say they do not understand how the site’s algorithmically generated news feed selects which posts to show them. And around three-quarters of Facebook users are not aware that the site automatically estimates their interests and preferences based on their online behaviors in order to deliver them targeted advertisements and other content.
  3. The public is wary of computer algorithms being used to make decisions with real-world consequences. The public expresses widespread concern about companies and other institutions using computer algorithms in situations with potential impacts on people’s lives. More than half (56%) of U.S. adults think it is unacceptable to use automated criminal risk scores when evaluating people who are up for parole. And 68% think it is unacceptable for companies to collect large quantities of data about individuals for the purposes of offering them deals or other financial incentives. When asked to elaborate about their worries, many feel that these programs violate people’s privacy, are unfair, or simply will not work as well as decisions made by humans….(More)”.

Technology and National Security


Book from the Aspen Strategy Group: “This edition is a collection of papers commissioned for the 2018 Aspen Strategy Group Summer Workshop, a bipartisan meeting of national security experts, academics, private sector leaders, and technologists. The chapters in this volume evaluate the disruptive nature of technological change on the US military, economic power, and democratic governance. They highlight possible avenues for US defense modernization, the impact of disinformation tactics and hybrid warfare on democratic institutions, and the need for a reinvigorated innovation triangle comprised of the US government, academia, and private corporations. The executive summary offers practical recommendations to meet the daunting challenges this technological era imposes….(More)”.

Congress needs your input (but don’t call it crowdsourcing)


Lorelei Kelly at TechCrunch: “As it stands, Congress does not have the technical infrastructure to ingest all this new input in any systematic way. Individual members lack a method to sort and filter signal from noise or trusted credible knowledge from malicious falsehood and hype.

What Congress needs is curation, not just more information

Curation means discovering, gathering and presenting content. This word is commonly thought of as the job of librarians and museums, places we go to find authentic and authoritative knowledge. Similarly, Congress needs methods to sort and filter information as required within the workflow of lawmaking. From personal offices to committees, members and their staff need context and informed judgement based on broadly defined expertise. The input can come from individuals or institutions. It can come from the wisdom of colleagues in Congress or across the federal government. Most importantly, it needs to be rooted in local constituents and it needs to be trusted.

This is not to say that crowdsourcing is unimportant for our governing system. But input methods that include digital must demonstrate informed and accountable deliberative methods over time. Governing is the curation part of democracy. Governing requires public review, understanding of context, explanation and measurements of value for the nation as a whole. We are already thinking about how to create an ethical blockchain. Why not the same attention for our most important democratic institution?

Governing requires trade-offs that elicit emotion and sometimes anger. But as in life, emotions require self-regulation. In Congress, this means compromise and negotiation. In fact, one of the reasons Congress is so stuck is that its own deliberative process has declined at every level. Besides the official committee process stalling out, members have few opportunities to be together as colleagues, and public space is increasingly antagonistic and dangerous.

With so few options, members are left with blunt communications objects like clunky mail management systems and partisan talking points. This means that lawmakers don’t use public input for policy formation as much as to surveil public opinion.

Any path forward to the 21st century must include new methods to (1) curate and hear from the public in a way that informs policy AND (2) incorporate real data into a results-driven process.

While our democracy is facing unprecedented stress, there are bright spots. Congress is again dedicating resources to an in-house technology assessment capacity. Earlier this month, the new 116th Congress created a Select Committee on the Modernization of Congress. It will be chaired by Rep. Derek Kilmer (D-WA). Then the Open Government Data Act became law. This law will potentially scale the level of access to government data to unprecedented levels. It will require that all public-facing federal data must be machine-readable and reusable. This is a move in the right direction, and now comes the hard part.

Marci Harris, the CEO of civic startup Popvox, put it well, “The Foundations for Evidence-Based Policymaking (FEBP) Act, which includes the OPEN Government Data Act, lays groundwork for a more effective, accountable government. To realize the potential of these new resources, Congress will need to hire tech literate staff and incorporate real data and evidence into its oversight and legislative functions.”

In forsaking its own capacity for complex problem solving, Congress has become non-competitive in the creative process that moves society forward. During this same time period, all eyes turned toward Silicon Valley to fill the vacuum. With mass connection platforms and unlimited personal freedom, it seemed direct democracy had arrived. But that’s proved a bust. If we go by current trends, entrusting democracy to Silicon Valley will give us perfect laundry and fewer voting rights. Fixing democracy is a whole-of-nation challenge that Congress must lead.

Finally, we “the crowd” want a more effective governing body that incorporates our experience and perspective into the lawmaking process, not just feel-good form letters thanking us for our input. We also want a political discourse grounded in facts. A “modern” Congress will provide both, and now we have the institutional foundation in place to make it happen….(More)”.

Leveraging and Sharing Data for Urban Flourishing


Testimony by Stefaan Verhulst before New York City Council Committee on Technology and the Commission on Public Information and Communication (COPIC): “We live in challenging times. From climate change to economic inequality, the difficulties confronting New York City, its citizens, and decision-makers are unprecedented in their variety, and also in their complexity and urgency. Our standard policy toolkit increasingly seems stale and ineffective. Existing governance institutions and mechanisms seem outdated and distrusted by large sections of the population.

To tackle today’s problems we need not only new solutions but also new methods for arriving at solutions. Data can play a central role in this task. Access to and the use of data in a trusted and responsible manner is central to meeting the challenges we face and enabling public innovation.

This hearing, called by the Technology Committee and the Commission on Public Information and Communication, is therefore timely and very important. It is my firm belief that rapid progress on developing an effective data-sharing framework is among the most important steps our New York City leaders can take to tackle the myriad of 21st-century challenges....

I am joined today by some of my distinguished NYU colleagues, Prof. Julia Lane and Prof. Julia Stoyanovich, who have worked extensively on the technical and privacy challenges associated with data sharing. I will, therefore, avoid duplicating our testimonies and won’t focus on issues of privacy, trust and how to establish a responsible data sharing infrastructure, while these are central considerations for the type of data-driven approaches I will discuss. I am, of course, happy to elaborate on these topics during the question and answer session.

Instead, I want to focus on four core issues associated with data collaboration. I phrase these issues as answers to four questions. For each of these questions, I also provide a set of recommended actions that this Committee could consider undertaking or studying.

The four core questions are:

  • First, why should NYC care about data and data sharing?
  • Second, if you build a data-sharing framework, will they come?
  • Third, how can we best engage the private sector when it comes to sharing and using their data?
  • And fourth, is technology the main (or best) answer?…(More)”.