New Orleans Gamifies the City Budget


Kelsey E. Thomas at Next City: “New Orleanians can try their hand at being ‘mayor for a day’ with a new interactive website released Wednesday by the Committee for a Better New Orleans (CBNO).

The Big Easy Budget Game uses open data from the city to allow players to create their own version of an operating budget. Players are given a digital $602 million and have to balance the budget — keeping in mind the government’s responsibilities, the previous year’s spending and their personal priorities.

Each department in the game has a minimum funding level (players can’t just quit funding public schools if they feel like it), and restricted funding, such as state or federal dollars, is off limits.

CBNO hopes to attract 600 players this year, and plans to compile the data from each player into a crowdsourced meta-budget called “The People’s Budget.” Next fall, the People’s Budget will be released along with the city’s proposed 2017 budget.
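The mechanics described above — a fixed pot, per-department funding floors, and a crowdsourced average — can be sketched in a few lines. The department names, dollar floors, and aggregation rule below are illustrative assumptions, not the game's actual data or method.

```python
TOTAL_FUNDS = 602_000_000  # the game's discretionary pot

# Hypothetical minimum funding levels (players cannot go below these).
MINIMUMS = {"police": 120_000_000, "fire": 80_000_000, "parks": 10_000_000}

def is_valid_budget(allocations: dict) -> bool:
    """A budget is valid if every department meets its minimum
    and the allocations exactly exhaust the available funds."""
    if any(allocations.get(dept, 0) < floor for dept, floor in MINIMUMS.items()):
        return False
    return sum(allocations.values()) == TOTAL_FUNDS

def peoples_budget(player_budgets: list) -> dict:
    """Aggregate many players' budgets into a crowdsourced 'meta-budget'
    by averaging each department's allocation across players."""
    depts = {d for b in player_budgets for d in b}
    n = len(player_budgets)
    return {d: sum(b.get(d, 0) for b in player_budgets) / n for d in depts}
```

Restricted state or federal dollars would simply never appear in the $602 million pot, which is why the sketch only checks floors and the total.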

Along with the budgeting game, CBNO released a more detailed website, also using the city’s open data, that breaks down the city’s budgeted versus actual spending from 2007 to now and is filterable. The goal is to allow users without big data experience to easily research funding relevant to their neighborhoods.

Many cities have been releasing interactive websites to make their data more accessible to residents. Checkbook NYC updates more than $70 billion in city expenses daily and breaks them down by transaction. Fiscal Focus Pittsburgh is an online visualization tool that outlines revenues and expenses in the city’s budget….(More)”

Website Seeks to Make Government Data Easier to Sift Through


Steve Lohr at the New York Times: “For years, the federal government, states and some cities have enthusiastically made vast troves of data open to the public. Acres of paper records on demographics, public health, traffic patterns, energy consumption, family incomes and many other topics have been digitized and posted on the web.

This abundance of data can be a gold mine for discovery and insights, but finding the nuggets can be arduous, requiring special skills.

A project coming out of the M.I.T. Media Lab on Monday seeks to ease that challenge and to make the value of government data available to a wider audience. The project, called Data USA, bills itself as “the most comprehensive visualization of U.S. public data.” It is free, and its software code is open source, meaning that developers can build custom applications by adding other data.

Cesar A. Hidalgo, an assistant professor of media arts and sciences at the M.I.T. Media Lab who led the development of Data USA, said the website was devised to “transform data into stories.” Those stories are typically presented as graphics, charts and written summaries….Type “New York” into the Data USA search box, and a drop-down menu presents choices — the city, the metropolitan area, the state and other options. Select the city, and the page displays an aerial shot of Manhattan with three basic statistics: population (8.49 million), median household income ($52,996) and median age (35.8).

Lower on the page are six icons for related subject categories, including economy, demographics and education. If you click on demographics, one of the so-called data stories appears, based largely on data from the American Community Survey of the United States Census Bureau.

Using colorful graphics and short sentences, it shows the median age of foreign-born residents of New York (44.7) and of residents born in the United States (28.6); the most common countries of origin for immigrants (the Dominican Republic, China and Mexico); and the percentage of residents who are American citizens (82.8 percent, compared with a national average of 93 percent).

Data USA features a selection of data results on its home page. They include the gender wage gap in Connecticut; the racial breakdown of poverty in Flint, Mich.; the wages of physicians and surgeons across the United States; and the institutions that award the most computer science degrees….(More)

Mapping a flood of new data


Rebecca Lipman at Economist Intelligence Unit Perspectives: “One city tweets to stay dry: From drones to old-fashioned phone calls, data come from many unlikely sources. In a disaster, such as a flood or earthquake, responders will take whatever information they can get to visualise the crisis and best direct their resources. Increasingly, cities prone to natural disasters are learning to better aid their citizens by empowering their local agencies and responders with sophisticated tools to cut through the large volume and velocity of disaster-related data and synthesise actionable information.

Consider the plight of the metro area of Jakarta, Indonesia, home to some 28m people, 13 rivers and 1,100 km of canals. With 40% of the city below sea level (and sinking), and regularly subject to extreme weather events including torrential downpours in monsoon season, Jakarta’s residents face far-too-frequent, life-threatening floods. Despite the unpredictability of flooding conditions, citizens have long taken a passive approach that depended on government entities to manage the response. But the information Jakarta’s responders had on the flooding conditions was patchy at best. So in the last few years, the government began to turn to the local population for help. It helped.

Today, Jakarta’s municipal government is relying on the web-based PetaJakarta.org project and a handful of other crowdsourcing mobile apps such as Qlue and CROP to collect data and respond to floods and other disasters. Through these programmes, crowdsourced, time-sensitive data derived from citizens’ social-media inputs have made it possible for city agencies to more precisely map the locations of rising floods and help the residents at risk. In January 2015, for example, the web-based Peta Jakarta received 5,209 reports on floods via tweets with detailed text and photos. Anytime there’s a flood, Peta Jakarta’s data from the tweets are mapped and updated every minute, and often cross-checked by Jakarta Disaster Management Agency (BPBD) officials through calls with community leaders to assess the information and guide responders.

But in any city Twitter is only one piece of a very large puzzle. …

Even with such life-and-death examples, government agencies remain deeply protective of data because of issues of security, data ownership and citizen privacy. They are also concerned about liability issues if incorrect data lead to an activity that has unsuccessful outcomes. These concerns encumber the combination of crowdsourced data with operational systems of record, and impede the fast progress needed in disaster situations….Download the case study.”

Accountable machines: bureaucratic cybernetics?


Alison Powell at LSE Media Policy Project Blog: “Algorithms are everywhere, or so we are told, and the black boxes of algorithmic decision-making make oversight more difficult than in the past for processes that regulators and activists argue ought to be transparent. But when, and where, and which machines do we wish to make accountable, and for what purpose? In this post I discuss how the algorithms scholars most commonly examine are those at work on media platforms whose main products are the social networks and attention of individuals. Algorithms, in this case, construct individual identities through patterns of behaviour, and provide the opportunity for finely targeted products and services. While there are serious concerns about, for instance, price discrimination, algorithmic systems for communicating and consuming are, in my view, less inherently problematic than processes that impact on our collective participation and belonging as citizens. In this second sphere, algorithmic processes – especially machine learning – combine with processes of governance that focus on individual identity performance to profoundly transform how citizenship is understood and undertaken.

Communicating and consuming

In the communications sphere, algorithms are what make it possible to make money from the web, for example through advertising brokerage platforms that help companies bid for ads on major newspaper websites. IP address monitoring, which tracks clicks and web activity, creates detailed consumer profiles and transforms the everyday experience of communication into a constantly updated production of consumer information. This process of personal profiling is at the heart of many of the concerns about algorithmic accountability. The perpetual production of data by individuals, and the increasing capacity to analyse it even when it doesn’t appear to be related, has certainly revolutionised advertising by allowing more precise targeting, but what has it done for areas of public interest?

John Cheney-Lippold identifies how the categories of identity are now developed algorithmically, since a category like gender is not based on self-disclosure, but instead on patterns of behaviour that fit with expectations set by previous alignment to a norm. In assessing ‘algorithmic identities’, he notes that these produce identity profiles which are narrower and more behaviour-based than the identities that we perform. This is a result of the fact that many of the systems that inspired the design of algorithmic systems were based on using behaviour and other markers to optimise consumption. Algorithmic identity construction has spread from the world of marketing to the broader world of citizenship – as evidenced by the Citizen Ex experiment shown at the Web We Want Festival in 2015.

Individual consumer-citizens

What’s really at stake is that the expansion of algorithmic assessment of commercially derived big data has extended the frame of the individual consumer into all kinds of other areas of experience. In a supposed ‘age of austerity’ when governments believe it’s important to cut costs, this connects with the view of citizens as primarily consumers of services, and furthermore, with the idea that a citizen is an individual subject whose relation to a state can be disintermediated given enough technology. So, with sensors on your garbage bins you don’t even need to remember to take them out. With pothole reporting platforms like FixMyStreet, a city government can be responsive to an aggregate of individual reports. But what aspects of our citizenship are collective? When, in the algorithmic state, can we expect to be together?

Put another way, is there any algorithmic process to value the long term education, inclusion, and sustenance of a whole community for example through library services?…

Seeing algorithms – machine learning in particular – as supporting decision-making for broad collective benefit rather than as part of ever more specific individual targeting and segmentation might make them more accountable. But more importantly, this would help algorithms support society – not just individual consumers….(More)”

How do they fare? Govt complaints apps compared


 at GovInsider Asia: “Countries across the region have launched complaints apps. But how do they fare? GovInsider takes a look at how they compare.

#BetterPenang

#BetterPenang was one of the earliest complaints apps in the region. It was released in 2013, built by citizens with their own funding, and was later adopted by local authorities to respond to complaints. Officials answer citizens directly on the app, in some cases responding with photos of how the problem was addressed.

The Mayor of Seberang Perai city in Penang monitors the complaints herself. She has appointed a team to ensure that complaints are responded to.

Downloads: 5,000
Rating: 4.3 out of 5

Interview: Mayor of Seberang Perai

Qlue

#BetterPenang compares with Jakarta’s similar Qlue app, which was built by a startup and is now being used by the city government.

Qlue has gone a step further with features to keep citizens engaged with the government. It ranks local authorities in Jakarta based on how quickly they respond to reports.

It has a gamified element that awards citizens points for posting complaints and inviting others to use the app. The points can be traded for avatars with “special powers”.
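The two engagement mechanics described here — ranking authorities by responsiveness, and awarding citizens points for activity — might look something like the sketch below. The point values and scoring formula are invented for illustration; Qlue's actual rules are not given in this excerpt.

```python
def rank_authorities(response_times: dict) -> list:
    """Rank local authorities fastest-first by their average
    response time (in hours) to citizen reports."""
    avg = {auth: sum(times) / len(times) for auth, times in response_times.items()}
    return sorted(avg, key=avg.get)

def citizen_points(reports: int, invites: int) -> int:
    """Hypothetical scoring: each posted complaint and each
    invited new user earns the citizen points toward avatars."""
    return reports * 10 + invites * 25
```

Publishing the ranking creates competitive pressure on slow districts, while the points give individual users a reason to keep reporting.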

Downloads: 100,000
Rating: 4.2 out of 5

Interview: Inside Jakarta’s Smart City HQ

Cakna

Inspired by #BetterPenang, the Malaysian federal government has launched its own complaints app, which it plans to push to every city in the country.

The features are similar to Penang’s app, but Cakna has broader coverage thanks to federal government backing. Reports are automatically sent to the relevant local authority in the country.

That also makes the app more comprehensive from the government’s perspective. The federal government can now monitor the quality of services across cities on an internal dashboard, while cities can see all of the complaints in their area.

Downloads: 1,000
Rating: 4.2 out of 5

Interview: How we built… Malaysia’s complaints app

OneService

Singapore’s OneService complaints app asks users to submit more information than the other apps. It lets users pick the date and time when the problem was seen. This could help officials better track and resolve cases.

The government has created a separate unit – the Municipal Services Officer – to get these complaints addressed, particularly in areas that require coordination across agencies. The unit is already linked with at least 10 agencies, and is working to expand this network to other agencies and town councils….(More)”

Data Mining Reveals the Four Urban Conditions That Create Vibrant City Life


Emerging Technology from the arXiv: “A lack of evidence in city planning has ruined cities all over the world. But data-mining techniques are finally revealing the rules that make cities successful, vibrant places to live. …Back in 1961, the gradual decline of many city centers in the U.S. began to puzzle urban planners and activists alike. One of them, the urban sociologist Jane Jacobs, began a widespread and detailed investigation of the causes and published her conclusions in The Death and Life of Great American Cities, a controversial book that proposed four conditions that are essential for vibrant city life.

Jacobs’s conclusions have become hugely influential. Her ideas have had a significant impact on the development of many modern cities such as Toronto and New York City’s Greenwich Village. However, her ideas have also attracted criticism because of the lack of empirical evidence to back them up, a problem that is widespread in urban planning.

Today, that looks set to change thanks to the work of Marco De Nadai at the University of Trento and a few pals, who have developed a way to gather urban data that they use to test Jacobs’s conditions and how they relate to the vitality of city life. The new approach heralds a new age of city planning in which planners have an objective way of assessing city life and working out how it can be improved.

In her book, Jacobs argues that vibrant activity can only flourish in cities when the physical environment is diverse. This diversity, she says, requires four conditions. The first is that city districts must serve more than two functions so that they attract people with different purposes at different times of the day and night. Second, city blocks must be small with dense intersections that give pedestrians many opportunities to interact. The third condition is that buildings must be diverse in terms of age and form to support a mix of low-rent and high-rent tenants. By contrast, an area with exclusively new buildings can only attract businesses and tenants wealthy enough to support the cost of new building. Finally, a district must have a sufficient density of people and buildings.

While Jacobs’s arguments are persuasive, her critics say there is little evidence to show that these factors are linked with vibrant city life. That changed last year when urban scientists in Seoul, South Korea, published the result of a 10-year study of pedestrian activity in the city at unprecedented resolution. This work successfully tested Jacobs’s ideas for the first time.

However, the data was gathered largely through pedestrian surveys, a process that is time-consuming, costly, and generally impractical for use in most modern cities.

De Nadai and co have come up with a much cheaper and quicker alternative using a new generation of city databases and the way people use social media and mobile phones. The new databases include OpenStreetMap, the collaborative mapping tool; census data, which records populations and building use; land use data, which uses satellite images to classify land use according to various categories; Foursquare data, which records geographic details about personal activity; and mobile-phone records showing the number and frequency of calls in an area.

De Nadai and co gathered this data for six cities in Italy—Rome, Naples, Florence, Bologna, Milan, and Palermo.

Their analysis is straightforward. The team used mobile-phone activity as a measure of urban vitality and land-use records, census data, and Foursquare activity as a measure of urban diversity. Their goal was to see how vitality and diversity are correlated in the cities they studied. The results make for interesting reading….(More)
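As described, the core of the analysis reduces to measuring how a vitality proxy (mobile-phone activity) co-varies with diversity proxies (land use, census, Foursquare). A minimal sketch with a hand-rolled Pearson correlation and made-up district-level figures — the real study uses far richer data and controls:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-district numbers: a mixed-use diversity score
# against mobile-phone calls per hour (both invented).
diversity = [0.2, 0.5, 0.6, 0.8, 0.9]
vitality = [110, 300, 340, 500, 560]
r = pearson(diversity, vitality)  # strongly positive for this toy data
```

A correlation alone cannot establish that diversity causes vitality, which is why the study's framing is about how the two "are correlated" rather than a causal claim.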

Technology and politics: The signal and the noise


Special Issue of The Economist: “…The way these candidates are fighting their campaigns, each in his own way, is proof that politics as usual is no longer an option. The internet and the availability of huge piles of data on everyone and everything are transforming the democratic process, just as they are upending many industries. They are becoming a force in all kinds of things, from running election campaigns and organising protest movements to improving public policy and the delivery of services. This special report will argue that, as a result, the relationship between citizens and those who govern them is changing fundamentally.

Incongruous though it may seem, the forces that are now powering the campaign of Mr Trump—as well as that of Bernie Sanders, the surprise candidate on the Democratic side (Hillary Clinton is less of a success online)—were first seen in full cry during the Arab spring in 2011. The revolution in Egypt and other Arab countries was not instigated by Twitter, Facebook and other social-media services, but they certainly helped it gain momentum. “The internet is an intensifier,” says Marc Lynch of George Washington University, a noted scholar of the protest movements in the region…..

However, this special report will argue that, in the longer term, online crusading and organising will turn out to matter less to politics in the digital age than harnessing those ever-growing piles of data. The internet and related technologies, such as smart phones and cloud computing, make it cheap and easy not only to communicate but also to collect, store and analyse immense quantities of information. This is becoming ever more important in influencing political outcomes.

America’s elections are a case in point. Mr Cruz with his data savvy is merely following in the footsteps of Barack Obama, who won his first presidential term with the clever application of digital know-how. Campaigners are hoovering up more and more digital information about every voting-age citizen and stashing it away in enormous databases. With the aid of complex algorithms, these data allow campaigners to decide, say, who needs to be reminded to make the trip to the polling station and who may be persuaded to vote for a particular candidate.
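The targeting decision this paragraph describes — remind likely supporters who may stay home, persuade reliable voters who are undecided — can be caricatured as a simple rule over two modelled probabilities. The thresholds and field names below are assumptions for illustration only; real campaigns use far more elaborate models.

```python
def target(voter: dict) -> str:
    """Classify a voter by modelled turnout and support probabilities.

    'remind': supports the candidate but may not show up (turnout push).
    'persuade': will show up but is undecided (persuasion push).
    'skip': safe vote, lost cause, or unlikely voter (save the money).
    """
    turnout, support = voter["p_turnout"], voter["p_support"]
    if support > 0.7 and turnout < 0.5:
        return "remind"
    if turnout > 0.7 and 0.4 <= support <= 0.6:
        return "persuade"
    return "skip"
```

The worry raised later in the report falls straight out of the "skip" branch: voters the model deems safe or unreachable may simply never be contacted at all.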

No hiding place

In the case of protest movements, the waves of collective action leave a big digital footprint. Using ever more sophisticated algorithms, governments can mine these data. That is changing the balance of power. In the event of another Arab spring, autocrats would not be caught off guard again because they are now able to monitor protests and intervene when they consider it necessary. They can also identify and neutralise the most influential activists. Governments that were digitally blind when the internet first took off in the mid-1990s now have both a telescope and a microscope.

But data are not just changing campaigns and political movements; they affect how policy is made and public services are offered. This is most visible at local-government level. Cities have begun to use them for everything from smoothing traffic flows to identifying fire hazards. Having all this information at their fingertips is bound to change the way these bureaucracies work, and how they interact with citizens. This will not only make cities more efficient, but provide them with data and tools that could help them involve their citizens more.

This report will look at electoral campaigns, protest movements and local government in turn. Readers will note that most of the examples quoted are American and that most of the people quoted are academics. That is because the study of the interrelationship between data and politics is relatively new and most developed in America. But it is beginning to spill out from the ivory towers, and is gradually spreading to other countries.

The growing role of technology in politics raises many questions. How much of a difference, for instance, do digitally enabled protest surges really make? Many seem to emerge from nowhere, then crash almost as suddenly, defeated by hard political realities and entrenched institutions. The Arab spring uprising in Egypt is one example. Once the incumbent president, Hosni Mubarak, was toppled, the coalition that brought him down fell apart, leaving the stage to the old powers, first the Muslim Brotherhood and then the armed forces.

In party politics, some worry that the digital targeting of voters might end up reducing the democratic process to a marketing exercise. Ever more data and better algorithms, they fret, could lead politicians to ignore those unlikely to vote for them. And in cities it is not clear that more data will ensure that citizens become more engaged….(More)

See also:

“Streetfight” by Janette Sadik-Khan


Review by Amrita Gupta in Policy Innovations of the book Streetfight: “Janette Sadik-Khan was New York City’s transportation commissioner from 2007-2013. Under her watch, the city’s streets were reimagined and redesigned to include more than 60 plazas and 400 miles of bike lanes—radical changes designed to improve traffic and make spaces safer for everybody. Over seven years, New York City underwent a transportation transformation, played out avenue by avenue, block by block. Times Square went from being the city’s worst traffic nightmare to a two-and-a-half acre outdoor sitting area in 2009. Citi Bike, arguably her biggest success, is the nation’s largest bike share system, and the city’s first new transit system in over half a century. In Streetfight, Sadik-Khan breaks down her achievements into replicable ideas for urban planners and traffic engineers everywhere, and she also reminds us that the fight isn’t over. As part of Bloomberg Associates, she now takes the lessons from New York City’s streets to metropolises around the world.

The old order vs the new order

The crux of the problem, she explains, is that until recently, cities of the future were thought to be cities built for cars, not cities that encouraged human activity on the street.

Understanding what city-building used to mean is key to understanding how our cities are failing us today. Sadik-Khan offers a quick recap of New York City through recent decades. The historical lesson holds: in many ways, cities on every continent grew along a similar trajectory. Streets were designed to keep traffic moving, but not to support the life alongside it. The old order—which Sadik-Khan writes is typified by Robert Moses—took the auto-centric view that pedestrians, public transit, and bike riders were all hindrances in the path of cars.

Sadik-Khan calls for a more equitable and relevant city, one that prioritizes accessibility and convenience for everybody. Her generation of planners aims to transform roads, tunnels, and rail tracks—the legacy hardware of their predecessors—and repurpose them into public spaces to walk, bike, and play.

The strength of the book lies in just how effectively it dispels the misconceptions that most citizens, and indeed, urban planners, have held onto for decades. There are plenty of surprises in Streetfight.

Sadik-Khan shows that people’s ideas about safety can be obsolete. For instance, bike lanes don’t make accidents more likely; they make the streets safer. The statistics show that bike riders actually protect pedestrians by altering the behavior of drivers. Sadik-Khan states that bike ridership in New York City quadrupled from 2000 to 2012; and as the number of riders increases, so too does the safety of the street.

The assumption, for instance, that reducing lanes or closing them entirely creates gridlock, is entirely wrong. Sadik-Khan’s interventions in New York City—providing pedestrian space and creating fewer but more orderly lanes for vehicles—actually improved traffic. And she uses taxi GPS data to prove it….

In fact, Sadik-Khan makes the claim that the economic power of sustainable streets is probably one of the strongest arguments for implementing dramatic change. Cities need data—think retail rents, shop sales, travel speeds, vehicle counts—to defend their interventions, and then to measure their effectiveness. Yet, she writes, unfortunately there are few cities anywhere that have access to reliable numbers of this kind….

Sadik-Khan emphasizes time and again that change can happen quickly and affordably. She didn’t have to bulldoze neighborhoods, or build new bridges and highways to transform the transportation network of New York City. Planners can reorder a street without destroying a single building.

Streetfight is a handbook that prioritizes paint, planters, signs, and signals over mega-infrastructure projects. We are told that small-scale interventions can have transformative large-scale impacts. Sadik-Khan’s pocket parks, plazas, pedestrian-friendly road redesigns, and parking-protected bike lanes are all the proof we need. For planners in developing countries, this should serve as both guide and encouragement.

Innovation doesn’t need big dollars behind it to be effective, and most ideas are scalable for cities big and small. What it does need, however, is street smarts. Sadik-Khan makes it clear that the key to getting projects to move ahead is support on the ground, and enough political capital to pave the way. Planners everywhere need to encourage participation, invite ideas, and be more transparent about proposals, she writes. But they also need to be willing to put up a fight….(More)”

States’ using iwaspoisoned.com for outbreak alerts


Dan Flynn at Food Safety News: “The crowdsourcing site iwaspoisoned.com has collected thousands of reports of foodborne illnesses from individuals across the United States since 2009 and is expanding with a custom alert service for state health departments.

“There are now 26 states signed up, allowing government (health) officials and epidemiologists to receive real time, customized alerts for reported foodborne illness incidents,” said iwaspoisoned.com founder Patrick Quade.

Quade said he wanted to make iwaspoisoned.com data more accessible to health departments and experts in each state.

“This real time information provides a wider range of information data to help local agencies better manage food illness outbreaks,” he said. “It also supplements existing reporting channels and serves to corroborate their own reporting systems.”

The Florida Department of Health, Food and Waterborne Disease Program (FWDP) began receiving iwaspoisoned.com alerts beginning in December 2015.

“The FWDP has had an online complaint form for individuals to report food and waterborne illnesses,” a spokesman said. “However, the program has been looking for ways to expand their reach to ensure they are investigating all incidents. Partnering with iwaspoisoned.com was a logical choice for this expansion.”…

Quade established iwaspoisoned.com in New York City seven years ago to give people a place to report their experiences of being sickened by restaurant food. It gives such people a place to report the restaurants, locations, symptoms and other details and permits others to comment on the report….

The crowdsourcing site has played an increasing role in recent nationally known outbreaks, including those associated with Chipotle Mexican Grill in the last half of 2015. For example, CBS News in Los Angeles first reported on the Simi Valley, Calif., norovirus outbreak after noticing that about a dozen Chipotle customers had logged their illness reports on iwaspoisoned.com.

Eventually, health officials confirmed at least 234 norovirus illnesses associated with a Chipotle location in Simi Valley…(More)”

It’s not big data that discriminates – it’s the people that use it


 in the Conversation: “Data can’t be racist or sexist, but the way it is used can help reinforce discrimination. The internet means more data is collected about us than ever before and it is used to make automatic decisions that can hugely affect our lives, from our credit scores to our employment opportunities.

If that data reflects unfair social biases against sensitive attributes, such as our race or gender, the conclusions drawn from that data might also be based on those biases.

But this era of “big data” doesn’t need to entrench inequality in this way. If we build smarter algorithms to analyse our information and ensure we’re aware of how discrimination and injustice may be at work, we can actually use big data to counter our human prejudices.

This kind of problem can arise when computer models are used to make predictions in areas such as insurance, financial loans and policing. If members of a certain racial group have historically been more likely to default on their loans, or been more likely to be convicted of a crime, then the model can deem these people more risky. That doesn’t necessarily mean that these people actually engage in more criminal behaviour or are worse at managing their money. They may just be disproportionately targeted by police and sub-prime mortgage salesmen.

Excluding sensitive attributes

Data scientist Cathy O’Neil has written about her experience of developing models for homeless services in New York City. The models were used to predict how long homeless clients would be in the system and to match them with appropriate services. She argues that including race in the analysis would have been unethical.

If the data showed white clients were more likely to find a job than black ones, the argument goes, then staff might focus their limited resources on those white clients that would more likely have a positive outcome. While sociological research has unveiled the ways that racial disparities in homelessness and unemployment are the result of unjust discrimination, algorithms can’t tell the difference between just and unjust patterns. And so datasets should exclude characteristics that may be used to reinforce the bias, such as race.

But this simple response isn’t necessarily the answer. For one thing, machine learning algorithms can often infer sensitive attributes from a combination of other, non-sensitive facts. People of a particular race may be more likely to live in a certain area, for example. So excluding those attributes may not be enough to remove the bias….
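The proxy problem this paragraph describes is easy to demonstrate: remove the sensitive attribute from a dataset, and a model can still recover it from a correlated non-sensitive feature such as neighbourhood. The neighbourhood and group data below are invented for illustration.

```python
from collections import Counter

# (neighbourhood, sensitive_group) pairs. In a "blinded" dataset the
# second column is dropped — but its correlation with the first remains.
training = [
    ("north", "A"), ("north", "A"), ("north", "B"),
    ("south", "B"), ("south", "B"), ("south", "A"),
]

def infer_group(neighbourhood: str) -> str:
    """Recover the sensitive attribute from the non-sensitive one by
    taking the majority group observed in that neighbourhood."""
    counts = Counter(g for n, g in training if n == neighbourhood)
    return counts.most_common(1)[0][0]
```

Any model trained on neighbourhood will implicitly learn this mapping, which is exactly why stripping the sensitive column is not, on its own, a fix.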

An enlightened service provider might, upon seeing the results of the analysis, investigate whether and how racism is a barrier to their black clients getting hired. Equipped with this knowledge they could begin to do something about it. For instance, they could ensure that local employers’ hiring practices are fair and provide additional help to those applicants more likely to face discrimination. The moral responsibility lies with those responsible for interpreting and acting on the model, not the model itself.

So the argument that sensitive attributes should be stripped from the datasets we use to train predictive models is too simple. Of course, collecting sensitive data should be carefully regulated because it can easily be misused. But misuse is not inevitable, and in some cases, collecting sensitive attributes could prove absolutely essential in uncovering, predicting, and correcting unjust discrimination. For example, in the case of homeless services discussed above, the city would need to collect data on ethnicity in order to discover potential biases in employment practices….(More)