Cities Take the Lead in Setting Rules Around How AI Is Used

Jackie Snow at the Wall Street Journal: “As cities and states roll out algorithms to help them provide services like policing and traffic management, they are also racing to come up with policies for using this new technology.

AI, at its worst, can disadvantage already marginalized groups, adding to human-driven bias in hiring, policing and other areas. And its decisions can often be opaque—making it difficult to tell how to fix that bias, as well as other problems. (The Wall Street Journal discussed calls for regulation of AI, or at least greater transparency about how the systems work, with three experts.)

Cities are looking at a number of solutions to these problems. Some require disclosure when an AI model is used in decisions, while others mandate audits of algorithms, track where AI causes harm or seek public input before putting new AI systems in place.

Here are some ways cities are redefining how AI will work within their borders and beyond.

Explaining the algorithms: Amsterdam and Helsinki

One of the biggest criticisms of AI is that it makes decisions that can’t be explained, which can lead to accusations of arbitrary or even biased results.

To let their citizens know more about the technology already in use in their cities, Amsterdam and Helsinki collaborated on websites that document how each city government uses algorithms to deliver services. Each registry includes information on the data sets used to train an algorithm, a description of how the algorithm is used, how public servants use the results, the human oversight involved, and how the city checks the technology for problems like bias.
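
The documentation fields described above can be pictured as a simple record per algorithm. The sketch below is purely illustrative — the field names, values, and completeness check are our assumptions, not the cities’ actual registry schema:

```python
# Hypothetical registry entry modeled on the fields the article describes.
# All names and values here are illustrative, not taken from the real
# Amsterdam or Helsinki registries.
registry_entry = {
    "service": "Automated parking control",           # what the algorithm does
    "training_datasets": ["scanned licence plates"],  # data used to train it
    "description": "Matches scanned plates against permit records.",
    "use_of_results": "Flagged cars are reviewed before any fine is issued.",
    "human_oversight": "Inspectors can override any automated match.",
    "bias_checks": "Periodic audit of error rates across districts.",
}

# Fields a registry entry would need before it counts as "fully explained"
REQUIRED_FIELDS = {
    "training_datasets", "description", "use_of_results",
    "human_oversight", "bias_checks",
}

def is_fully_documented(entry: dict) -> bool:
    """True only if every required field is present and non-empty."""
    return all(entry.get(field) for field in REQUIRED_FIELDS)

print(is_fully_documented(registry_entry))  # True
print(is_fully_documented({"service": "Chatbot"}))  # False: fields missing
```

A check like this hints at why the rollout is gradual: each of Amsterdam’s 50 to 100 target algorithms needs every field filled in before it can be published as fully explained.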

Amsterdam has six algorithms fully explained—with a goal of 50 to 100—on the registry website, including how the city’s automated parking-control and trash-complaint reporting systems work. Helsinki, which is focusing only on the city’s most advanced algorithms, also has six listed on its site, with another 10 to 20 left to put up.

“We needed to assess the risk ourselves,” says Linda van de Fliert, an adviser at Amsterdam’s Chief Technology Office. “And we wanted to show the world that it is possible to be transparent.”…(More)” See also AI Localism: The Responsible Use and Design of Artificial Intelligence at the Local Level