OpenStreetMap in Israel and Palestine – ‘Game changer’ or reproducer of contested cartographies?


Christian Bittner in Political Geography: “In Israel and Palestine, map-making practices were always entangled with contradictive spatial identities and imbalanced power resources. Although an Israeli narrative has largely dominated the ‘cartographic battlefield’, the latest chapter of this story has not been written yet: collaborative forms of web 2.0 cartographies have restructured power relations in mapping practices and challenged traditional monopolies on map and spatial data production. Thus, we can expect web 2.0 cartographies to be a ‘game changer’ for cartography in Palestine and Israel.

In this paper, I review this assumption with the popular example of OpenStreetMap (OSM). Following a mixed methods approach, I comparatively analyze the genesis of OSM in Israel and Palestine. Although nationalist motives do not play a significant role on either side, it turns out that the project is dominated by Israeli and international mappers, whereas Palestinians have hardly contributed to OSM. As a result, social fragmentations and imbalances between Israel and Palestine are largely reproduced through OSM data. Discussing the low involvement of Palestinians, I argue that OSM’s ground truth paradigm might be a watershed for participation. Presumably, the project’s data are less meaningful in some local contexts than in others. Moreover, the seemingly apolitical approach to map only ‘facts on the ground’ reaffirms present spatio-social order and thus the power relations behind it. Within a Palestinian narrative, however, many aspects of the factual material space might appear not as neutral physical objects but as results of suppression, in which case, any ‘accurate’ spatial representation, such as OSM, becomes objectionable….(More)”

How the Circle Line rogue train was caught with data


Daniel Sim at the Data.gov.sg Blog: “Singapore’s MRT Circle Line was hit by a spate of mysterious disruptions in recent months, causing much confusion and distress to thousands of commuters.

Like most of my colleagues, I take a train on the Circle Line to my office at one-north every morning. So on November 5, when my team was given the chance to investigate the cause, I volunteered without hesitation.

From prior investigations by train operator SMRT and the Land Transport Authority (LTA), we already knew that the incidents were caused by some form of signal interference, which led to loss of signals in some trains. The signal loss would trigger the emergency brake safety feature in those trains and cause them to stop randomly along the tracks.

But the incidents — which first happened in August — seemed to occur at random, making it difficult for the investigation team to pinpoint the exact cause.

We were given a dataset compiled by SMRT that contained the following information:

  • Date and time of each incident
  • Location of incident
  • ID of train involved
  • Direction of train…
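
To make the starting point concrete, here is a minimal sketch of loading and inspecting an incident log like the one described above; the file name and column names are assumptions for illustration, not the investigators' actual code.

```python
# Load the incident log described above (file and column names assumed).
import pandas as pd

incidents = pd.read_csv(
    "circle_line_incidents.csv",
    parse_dates=["datetime"],  # date and time of each incident
)
incidents = incidents.sort_values("datetime")

# Each row: datetime, location (station), train_id, direction.
print(incidents.head())
print(incidents["train_id"].value_counts())  # is any train over-represented?
```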

LTA and SMRT eventually published a joint press release on November 11 to share the findings with the public….

When we first started, my colleagues and I were hoping to find patterns that may be of interest to the cross-agency investigation team, which included many officers at LTA, SMRT and DSTA. The tidy incident logs provided by SMRT and LTA were instrumental in getting us off to a good start, as minimal cleaning up was required before we could import and analyse the data. We were also gratified by the effective follow-up investigations by LTA and DSTA that confirmed the hardware problems on PV46.

From the data science perspective, we were lucky that incidents happened so close to one another. That allowed us to identify both the problem and the culprit in such a short time. If the incidents were more isolated, the zigzag pattern would have been less apparent, and it would have taken us more time — and data — to solve the mystery….(More).”
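
The "zigzag pattern" the team refers to can be surfaced by plotting each incident's time against the affected station's position along the line: if the incidents trace the back-and-forth path of a single moving train, the source of interference is likely on board. The sketch below assumes the same hypothetical incident file as above and an illustrative station ordering; it is not the investigators' code.

```python
# Plot incidents over time and track position to look for a "zigzag".
import pandas as pd
import matplotlib.pyplot as plt

incidents = pd.read_csv("circle_line_incidents.csv", parse_dates=["datetime"])

# Map each station to its position along the line (ordering illustrative).
station_order = {name: i for i, name in enumerate(
    ["Marymount", "Bishan", "Lorong Chuan", "Serangoon", "Bartley"])}
incidents["position"] = incidents["location"].map(station_order)

plt.scatter(incidents["datetime"], incidents["position"], s=15)
plt.xlabel("Time of incident")
plt.ylabel("Station (track order)")
plt.title("A zigzag trace suggests a single moving source of interference")
plt.show()
```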

Technocracy in America: Rise of the Info-State


Book by Parag Khanna: “American democracy just isn’t good enough anymore. A costly election has done more to divide American society than unite it, while trust in government—and democracy itself—is plummeting. But there are better systems out there, and America would be wise to learn from them. In this provocative manifesto, globalization scholar Parag Khanna tours cutting-edge nations from Switzerland to Singapore to reveal the inner workings that allow them to lead the way in managing the volatility of a fast-changing world while delivering superior welfare and prosperity for their citizens.

The ideal form of government for the complex 21st century is what Khanna calls a “direct technocracy,” one led by experts but perpetually consulting the people through a combination of democracy and data. From a seven-member presidency and a restructured cabinet to replacing the Senate with an Assembly of Governors, Technocracy in America is full of sensible proposals that have been proven to work in the world’s most successful societies. Americans have a choice in whom they elect as president, but they should not wait any longer to redesign their political system following Khanna’s pragmatic vision….(More)”

The Crowd is Always There: A Marketplace for Crowdsourcing Crisis Response


Presentation by Patrick Meier at the Emergency Social Data Summit organized by the Red Cross… on “Collaborative Crisis Mapping” (the slides are available here): “What I want to expand on is the notion of a “marketplace for crowdsourcing” that I introduced at the Summit. The idea stems from my experience in the field of conflict early warning, the Ushahidi-Haiti deployment and my observations of the Ushahidi-DC and Ushahidi-Russia initiatives.

The crowd is always there. Paid Search & Rescue (SAR) teams and salaried emergency responders aren’t. Nor can they be on the corners of every street, whether that’s in Port-au-Prince, Haiti, Washington DC or Sukkur, Pakistan. But the real first responders, the disaster affected communities, are always there. Moreover, not all communities are equally affected by a crisis. The challenge is to link those who are most affected with those who are less affected (at least until external help arrives).

This is precisely what PIC Net and the Washington Post did when they partnered to deploy this Ushahidi platform in response to the massive snowstorm that paralyzed Washington DC earlier this year. They provided a way for affected residents to map their needs and for those less affected to map the resources they could share to help others. You don’t need to be a disaster response professional to help your neighbor dig out their car.

More recently, friends at Global Voices launched the most ambitious crowdsourcing initiative in Russia in response to the massive forest fires. But they didn’t use this Ushahidi platform to map the fires. Instead, they customized the public map so that those who needed help could find those who wanted to help. In effect, they created an online marketplace to crowdsource crisis response. You don’t need professional certification in disaster response to drive someone’s grandparents to the next town over.

There’s a lot that disaster-affected populations can do (and already do) to help each other out in times of crisis. What may help is to combine the crowdsourcing of crisis information with what I call crowdfeeding in order to create an efficient marketplace for crowdsourcing response. By crowdfeeding, I mean taking crowdsourced information and feeding it right back to the crowd. Surely they need that information as much if not more than external, paid responders who won’t get to the scene for hours or days….(More)”
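
The matching step at the heart of such a marketplace is simple to sketch: pair each crowdsourced need with the nearest offer in the same category. The records and fields below are hypothetical illustrations of the idea, not Ushahidi's implementation.

```python
# Toy marketplace: match each reported need to the nearest same-category offer.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

needs = [{"id": 1, "category": "dig_out_car", "lat": 38.90, "lon": -77.04}]
offers = [{"id": 7, "category": "dig_out_car", "lat": 38.91, "lon": -77.03},
          {"id": 8, "category": "transport",   "lat": 38.95, "lon": -77.10}]

for need in needs:
    candidates = [o for o in offers if o["category"] == need["category"]]
    if candidates:
        best = min(candidates, key=lambda o: haversine_km(
            need["lat"], need["lon"], o["lat"], o["lon"]))
        print(f"Need {need['id']} matched to offer {best['id']}")
```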

Can we rely on DIY air pollution sensors?


At The Conversation: “Until recently, measuring air pollution was a task that could be performed only by trained scientists using very sophisticated – and very expensive – equipment. That has changed with the rapid growth of small, inexpensive sensors that can be assembled by almost anyone. But an important question remains: Do these instruments measure what users think they are measuring?

A number of venture capital-backed startup or crowd-funded groups are marketing sensors by configuring a few dollars’ worth of electronics and some intellectual property – mainly software – into aesthetically pleasing packages. The Air Quality Egg, the Tzoa and the Speck sensor are examples of gadgets that are growing in popularity for measuring air pollutants.

These devices make it possible for individuals without specialized training to monitor air quality. As an environmental health researcher, I’m happy to see that people are interested in clean air, especially because air pollution is closely linked with serious health effects. But there are important concerns about how well and how accurately these sensors work.

At their core, these devices rely on inexpensive, and often uncertain, measurement technologies. Someday small sensors costing less than US$100 may replace much more expensive research-grade instruments like those used by government regulators. But that day is likely to be far away.

New territory for a known technology

Pollution sensors that measure air contaminants have been on the market for many years. Passenger cars have sophisticated emission controls that rely on data collected by air sensors inside the vehicles. These inexpensive sensors use well-established chemical and physical methods – typically, electrochemistry or metal oxide resistance – to measure air contaminants in highly polluted conditions, such as inside the exhaust pipe of a passenger vehicle. And this information is used by the vehicle to improve performance.
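
As a rough illustration of how such sensors report a number: a metal-oxide sensor's resistance drops in the presence of a target gas, and firmware maps the ratio of the sensed resistance to a clean-air baseline through a calibration curve. The power-law constants below are illustrative placeholders, not values from any specific datasheet.

```python
# Simplified metal-oxide sensor calibration: ppm = a * (Rs / R0) ** b,
# where Rs is the sensed resistance and R0 the clean-air baseline.
def estimate_ppm(r_sensor: float, r_baseline: float,
                 a: float = 116.6, b: float = -2.77) -> float:
    return a * (r_sensor / r_baseline) ** b

# Example: resistance falls to 60% of its clean-air value.
print(round(estimate_ppm(6_000.0, 10_000.0), 1))
```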

It turns out that these sensors can work outside of your car too. But they have some important limits….(More)”

Esri, Waze Partnership: A Growing Trend in Sharing Data for the Benefit of All?


Justine Brown at GovTech: “Esri and Waze announced in mid-October that they’re partnering to help local governments alleviate traffic congestion and analyze congestion patterns. Called the Waze Connected Citizens Program, the initiative — which enables local governments that use the Esri ArcGIS platform to exchange publicly available traffic data with Waze — may represent a growing trend in which citizens and government share data for the benefit of all.

Connecting Esri and Waze data will allow cities to easily share information about the conditions of their roads with drivers, while drivers anonymously report accidents, potholes and other road condition information back to the cities. Local governments can then merge that data into their existing emergency dispatch and street maintenance systems….

Through the Connected Citizens Program, Waze shares two main data sets with its government partners: Jams and Alerts…. If there’s a major traffic jam in an unusual area, a traffic management center operator might be triggered to examine that area further. For example, Boston recently used Waze jam data to identify a couple of traffic-prone intersections in the Seaport district…. Similarly, if a Waze user reports a crash, that information shows up on the city’s existing ArcGIS map. City personnel can assess the crash and combine the Waze data with its existing data sets, if desired. The city can then notify emergency response, for example, to address the accident and send out emergency vehicles if necessary….
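
Merging such crowd-reported alerts into an existing GIS layer can be as simple as converting each record into a GeoJSON point feature that a platform like ArcGIS can display. The alert fields below are assumptions modelled on the “Alerts” feed described above, not Waze's actual schema.

```python
# Convert hypothetical crowd-reported alerts into a GeoJSON layer.
import json

alerts = [
    {"type": "ACCIDENT", "lat": 42.35, "lon": -71.05, "reported": "2016-10-18T08:12:00"},
    {"type": "POTHOLE",  "lat": 42.36, "lon": -71.06, "reported": "2016-10-18T08:20:00"},
]

layer = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature",
         "geometry": {"type": "Point", "coordinates": [a["lon"], a["lat"]]},
         "properties": {"alert_type": a["type"], "reported": a["reported"]}}
        for a in alerts
    ],
}
print(json.dumps(layer, indent=2))
```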

The Connected Citizens Program could also provide local governments an alternative to IoT investments, because a city can utilize real-time reports from the road rather than investing in sensors and IoT infrastructure. The Kentucky Transportation Cabinet, for instance, uses data from the Connected Citizens Program in several ways, including to monitor and detect automobile accidents on its roadways….(More)”

Data Literacy – What is it and how can we make it happen?


Introduction by Mark Frank, Johanna Walker, Judie Attard, Alan Tygel of Special Issue on Data Literacy of The Journal of Community Informatics: “With the advent of the Internet and particularly Open Data, data literacy (the ability of non-specialists to make use of data) is rapidly becoming an essential life skill comparable to other types of literacy. However, it is still poorly defined and there is much to learn about how best to increase data literacy both amongst children and adults. This issue addresses both the definition of data literacy and current efforts on increasing and sustaining it. A feature of the issue is the range of contributors. While there are important contributions from the UK, Canada and other Western countries, these are complemented by several papers from the Global South, where there is an emphasis on grounding data literacy in context and relating it to the issues and concerns of communities. (Full Text: PDF)

See also:

Creating an Understanding of Data Literacy for a Data-driven Society by Annika Wolff, Daniel Gooch, Jose J. Cavero Montaner, Umar Rashid, Gerd Kortuem

Data Literacy defined pro populo: To read this article, please provide a little information by David Crusoe

Data literacy conceptions, community capabilities by Paul Matthews

Urban Data in the primary classroom: bringing data literacy to the UK curriculum by Annika Wolff, Jose J Cavero Montaner, Gerd Kortuem

Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach by Alan Freihof Tygel, Rosana Kirsch

DataBasic: Design Principles, Tools and Activities for Data Literacy Learners by Catherine D’Ignazio, Rahul Bhargava

Perceptions of ICT use in rural Brazil: Factors that impact appropriation among marginalized communities by Paola Prado, J. Alejandro Tirado-Alcaraz, Mauro Araújo Câmara

Graphical Perception of Value Distributions: An Evaluation of Non-Expert Viewers’ Data Literacy by Arkaitz Zubiaga, Brian Mac Namee

A Better Reykjavik and a stronger community: The benefits of crowdsourcing and e-democracy


Dyfrig Williams at Medium: “2008 was a difficult time in Iceland. All three of the country’s major privately owned banks went under, which prompted a financial crisis that enveloped the country and even reached local authorities in Wales.

The Better Reykjavik website was launched before the municipal elections and became a hub for online participation.

  • 70,000 people participated out of a population of 120,000
  • 12,000 registered users submitted over 3,300 ideas and 5,500 points for and against
  • 257 ideas were formally reviewed, and 165 have been accepted since 2011

As an external not-for-profit website, Better Reykjavik was better able to involve people because it wasn’t perceived to be part of pre-existing political structures.

Elected members

In the run-up to the elections, the soon-to-be Mayor Jón Gnarr championed the platform at every opportunity. This backing from a prominent figure was key, as it publicised the site and showed that there was buy-in for the work at the highest level.

How does it work?

The website enables people to have a direct say in the democratic process, giving them space to propose, debate and rate ways that their community can be improved. Every month the council is obliged to discuss the 10–15 highest rated ideas from the website….(More)
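
The monthly shortlisting step is easy to picture: score each idea by its points for minus its points against and take the top entries. The field names below are illustrative, not Better Reykjavik's actual data model.

```python
# Rank ideas by net support and surface the monthly top 15 (fields assumed).
ideas = [
    {"title": "More bike lanes",   "for": 320, "against": 45},
    {"title": "Longer pool hours", "for": 210, "against": 30},
    {"title": "Free school meals", "for": 505, "against": 120},
]

top = sorted(ideas, key=lambda i: i["for"] - i["against"], reverse=True)[:15]
for rank, idea in enumerate(top, start=1):
    print(rank, idea["title"], idea["for"] - idea["against"])
```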

Using open government for climate action


Elizabeth Moses at Eco-Business: “Countries made many national climate commitments as part of the Paris Agreement on climate change, which entered into force earlier this month. Now comes the hard part of implementing those commitments. The public can serve an invaluable watchdog role, holding governments accountable for following through on their targets and making sure climate action happens in a way that’s fair and inclusive.

But first, the climate and open government communities will need to join forces….

Here are four areas where these communities can lean in together to ensure governments follow through on effective climate action:

1) Expand access to climate data and information.

Open government and climate NGOs and local communities can expand the use of traditional transparency tools and processes such as Freedom of Information (FOI) laws, transparent budgeting, open data policies and public procurement to enhance open information on climate mitigation, adaptation and finance.

For example, Transparencia Mexicana used Mexico’s Freedom of Information Law to collect data to map climate finance actors and the flow of finance in the country. This allows them to make specific recommendations on how to safeguard climate funds against corruption and ensure the money translates into real action on the ground….

2) Promote inclusive and participatory climate policy development.

Civil society and community groups already play a crucial role in advocating for climate action and improving climate governance at the national and local levels, especially when it comes to safeguarding poor and vulnerable people, who often lack political voice….

3) Take legal action for stronger accountability.

Accountability at a national level can only be achieved if grievance mechanisms are in place to address a lack of transparency or public participation, or address the impact of projects and policies on individuals and communities.

Civil society groups and individuals can use legal actions like climate litigation, petitions, administrative policy challenges and court cases at the national, regional or international levels to hold governments and businesses accountable for failing to effectively act on climate change….

4) Create new spaces for advocacy.

Bringing the climate and open government movements together allows civil society to tap new forums for securing momentum around climate policy implementation. For example, many civil society NGOs are highlighting the important connections between a strong Governance Goal 16 under the 2030 Agenda for Sustainable Development, and strong water quality and climate change policies….(More)”

How to Hold Algorithms Accountable


Nicholas Diakopoulos and Sorelle Friedler at MIT Technology Review: “Algorithms are now used throughout the public and private sectors, informing decisions on everything from education and employment to criminal justice. But despite the potential for efficiency gains, algorithms fed by big data can also amplify structural discrimination, produce errors that deny services to individuals, or even seduce an electorate into a false sense of security. Indeed, there is growing awareness that the public should be wary of the societal risks posed by over-reliance on these systems and work to hold them accountable.

Various industry efforts, including a consortium of Silicon Valley behemoths, are beginning to grapple with the ethics of deploying algorithms that can have unanticipated effects on society. Algorithm developers and product managers need new ways to think about, design, and implement algorithmic systems in publicly accountable ways. Over the past several months, we and some colleagues have been trying to address these goals by crafting a set of principles for accountable algorithms….

Accountability implies an obligation to report and justify algorithmic decision-making, and to mitigate any negative social impacts or potential harms. We’ll consider accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness.

Responsibility. For any algorithmic system, there needs to be a person with the authority to deal with its adverse individual or societal effects in a timely fashion. This is not a statement about legal responsibility but, rather, a focus on avenues for redress, public dialogue, and internal authority for change. This could be as straightforward as giving someone on your technical team the internal power and resources to change the system, and making sure that person’s contact information is publicly available.

Explainability. Any decisions produced by an algorithmic system should be explainable to the people affected by those decisions. These explanations must be accessible and understandable to the target audience; purely technical descriptions are not appropriate for the general public. Explaining risk assessment scores to defendants and their legal counsel would promote greater understanding and help them challenge apparent mistakes or faulty data. Some machine-learning models are more explainable than others, but just because there’s a fancy neural net involved doesn’t mean that a meaningful explanation can’t be produced.

Accuracy. Algorithms make mistakes, whether because of data errors in their inputs (garbage in, garbage out) or statistical uncertainty in their outputs. The principle of accuracy suggests that sources of error and uncertainty throughout an algorithm and its data sources need to be identified, logged, and benchmarked. Understanding the nature of errors produced by an algorithmic system can inform mitigation procedures.

Auditability. The principle of auditability states that algorithms should be developed to enable third parties to probe and review the behavior of an algorithm. Enabling algorithms to be monitored, checked, and criticized would lead to more conscious design and course correction in the event of failure. While there may be technical challenges in allowing public auditing while protecting proprietary information, private auditing (as in accounting) could provide some public assurance. Where possible, even limited access (e.g., via an API) would allow the public a valuable chance to audit these socially significant algorithms.

Fairness. As algorithms increasingly make decisions based on historical and societal data, existing biases and historically discriminatory human decisions risk being “baked in” to automated decisions. All algorithms making decisions about individuals should be evaluated for discriminatory effects. The results of the evaluation and the criteria used should be publicly released and explained….(More)”
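
One common starting point for the fairness evaluation described above is to compare positive-decision rates across groups, a disparate-impact check. The records below are illustrative; a real audit would need far more data and care.

```python
# Compare approval rates across groups (toy disparate-impact check).
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

ratio = approval_rate("B") / approval_rate("A")
print(f"Disparate-impact ratio: {ratio:.2f}")  # below ~0.8 warrants scrutiny
```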