Google’s Waze announces government data exchange program with 10 initial partners


Josh Ong at The Next Web: “Waze today announced ‘Connected Citizens,’ a new government partnership program that will see Waze and participating governments exchange data in order to improve traffic conditions.

For the program, Waze will provide real-time anonymized crowdsourced traffic data to government departments in exchange for information on public projects like construction, road sensors, and pre-planned road closures.

The first 10 partners include:

  • Rio de Janeiro, Brazil
  • Barcelona, Spain and the Government of Catalonia
  • Jakarta, Indonesia
  • Tel Aviv, Israel
  • San Jose, Costa Rica
  • Boston, USA
  • State of Florida, USA
  • State of Utah, USA
  • Los Angeles County
  • The New York Police Department (NYPD)

Waze has also signed on five other government partners and has received applications from more than 80 municipal groups. The company ran an initial pilot program in Rio de Janeiro where it partnered with the city’s traffic control center to supplement the department’s sensor data with reports from Waze users.

At an event celebrating the launch, Di-Ann Eisnor, head of Growth at Waze, noted that the data exchange will only include public alerts, such as accidents and closures.

“We don’t share anything beyond that, such as where individuals are located and who they are,” she said.

Eisnor also made it clear that Waze isn’t selling the data. GPS maker TomTom came under fire several years ago after customers learned that the company had sold their data to police departments to help find the best places to put speed traps.

“We keep [the data] clean by making sure we don’t have a business model around it,” Eisnor added.

Waze requires that new Connected Citizens partners “prove their dedication to citizen engagement and commit to use Waze data to improve city efficiency.”…”

A Vision for Happier Cities


Post at the Huffington Post: “…Governments such as Bhutan’s and Venezuela’s are creating departments of happiness, and in both the US and UK, ‘nudge’ teams have been set up to focus on behavioral psychology. This gets more interesting when we bring in urban planning and neuroscience research, which shows that community aesthetics are a key contributor to our happiness, while positive emotions can change our thoughts and lead to changes in our behaviors.
It was only after moving to New York City that I realized all my experiences… painting, advising executive boards, creative workshops, statistics and writing books about organizational change…gave me a unique set of tools to create the Dept. of Well Being and start a global social impact initiative, which is powered by public art installations entitled Happy Street Signs™.
New York City got the first Happy Street Signs last November. I used my paintings containing positive phrases like “Honk Less Love More” and “New York Loves You” to manufacture 200 government-specification street signs. They were then installed by a team of fifty volunteers around Manhattan and Brooklyn in 90 minutes. Whilst it was unofficial, the objective was to generate smiles for New Yorkers and then survey reactions. We got clipboards out and asked over 600 New Yorkers if they liked the Happy Street Signs and if they wanted more: 92.5 percent of those people said yes!…”

CityBeat: Visualizing the Social Media Pulse of the City


CityBeat is an academic research project that set out to develop an application that sources, monitors, and analyzes hyper-local information from multiple social media platforms, such as Instagram and Twitter, in real time.
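
To give a flavor of what real-time, hyper-local analysis of geotagged posts can involve, here is a minimal, hypothetical sketch in Python: posts are binned into coarse map cells, and cells whose volume spikes well above their recent baseline are flagged. The grid size, window length, and threshold are illustrative assumptions, not CityBeat’s actual parameters or algorithm.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

# Hypothetical sketch: flag "hyper-local events" when the volume of geotagged
# posts in a small map cell spikes well above its recent baseline. This
# illustrates the general idea only, not CityBeat's actual pipeline.

CELL_SIZE = 0.005          # ~500 m grid cells, in degrees (assumption)
HISTORY_WINDOWS = 24       # number of past time windows kept per cell
SPIKE_THRESHOLD = 3.0      # standard deviations above the cell's mean

history = defaultdict(lambda: deque(maxlen=HISTORY_WINDOWS))

def cell_of(lat, lon):
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def detect_events(window_posts):
    """window_posts: iterable of (lat, lon) pairs observed in one time window.
    Returns the cells whose post count spikes above their baseline."""
    counts = defaultdict(int)
    for lat, lon in window_posts:
        counts[cell_of(lat, lon)] += 1

    events = []
    for cell, count in counts.items():
        past = history[cell]
        if len(past) >= 3:
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and (count - mu) / sigma > SPIKE_THRESHOLD:
                events.append((cell, count))
        past.append(count)  # update the cell's baseline with this window
    return events
```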

This project was led by researchers at the Jacobs Institute at Cornell Tech, in collaboration with The New York World (Columbia Journalism School), Rutgers University, NYU, and Columbia University….

If you are interested in the technical details, we have published several papers detailing the process of building CityBeat. Enjoy your read!

Xia, C., Schwartz, R., Xie, K., Krebs, A., Langdon, A., Ting, J., and Naaman, M. (2014). CityBeat: Real-time Social Media Visualization of Hyper-local City Data. In Proceedings of WWW 2014, Seoul, Korea, April 2014. [PDF]

Xie, K., Xia, C., Grinberg, N., Schwartz, R., and Naaman, M. (2013). Robust detection of hyper-local events from geotagged social media data. In Proceedings of the 13th Workshop on Multimedia Data Mining in KDD, 2013. [PDF]

Schwartz, R., Naaman, M., and Matni, Z. (2013). Making Sense of Cities Using Social Media: Requirements for Hyper-Local Data Aggregation Tools. In Proceedings of WCMCW at ICWSM 2013, Boston, USA, July 2013. [PDF]

#OpenGovNow: Open Government and how it benefits you


#OpenGovNow: “Open Governments are built on two things: information and participation. A government that is open actively discloses information about what it does with its money and resources in a way that all citizens can understand. Equally important, an Open Government is one that actively involves all citizens as participants in government decision-making. This two-way relationship between citizens and governments, in which governments and citizens share information with one another and work together, is the foundation of Open Government….

Why should I care?

The water that you drink, the public schools that children go to, the roads that you use every day: governments make those a reality. Governments and what they do affect each and every one of us. How governments operate and how they spend scarce public resources have a direct impact on our everyday lives and the future of our communities. For instance, an estimated US$9.5 trillion is spent by governments all over the world through contracts, so you have a role to play in making sure that your share of that public money is not lost, stolen, or misused….
The Global Opening Government Survey was conducted as a response to the growing demand to better understand citizens’ views on the current state and the potential impact of openness. Using the innovative “random domain intercept technology,” the survey was based on a brief questionnaire and collected complete responses from over 65,000 web-enabled individuals in the first 61 member countries of the Open Government Partnership (OGP) plus India. The survey methodology, like any other, has its advantages and its limitations. More details about the methodology can be found in the Additional Resources section of this page.

Mapping the Next Frontier of Open Data: Corporate Data Sharing


Stefaan Verhulst at the GovLab (cross-posted at the UN Global Pulse Blog): “When it comes to data, we are living in the Cambrian Age. About ninety percent of the data that exists today has been generated within the last two years. We create 2.5 quintillion bytes of data on a daily basis—equivalent to a “new Google every four days.”
All of this means that we are certain to witness a rapid intensification in the process of “datafication,” which is already well underway. Use of data will grow increasingly critical. Data will confer strategic advantages; it will become essential to addressing many of our most important social, economic and political challenges.
This explains, at least in large part, why the Open Data movement has grown so rapidly in recent years. More and more, it has become evident that questions surrounding data access and use are emerging as one of the transformational opportunities of our time.
Today, it is estimated that over one million datasets have been made open or public. The vast majority of this open data is government data—information collected by agencies and departments in countries as varied as India, Uganda and the United States. But what of the terabyte after terabyte of data that is collected and stored by corporations? This data is also quite valuable, but it has been harder to access.
The topic of private sector data sharing was the focus of a recent conference organized by the Responsible Data Forum, the Data & Society Research Institute, and Global Pulse (see event summary). Participants at the conference, which was hosted by The Rockefeller Foundation in New York City, included representatives from a variety of sectors who converged to discuss ways to improve access to private data, that is, the data held by private entities and corporations. The purpose for that access was rooted in a broad recognition that private data has the potential to foster much public good. At the same time, a variety of constraints (notably privacy and security, but also proprietary interests and data protectionism on the part of some companies) hold back this potential.
The framing for issues surrounding sharing private data has been broadly referred to under the rubric of “corporate data philanthropy.” The term refers to an emerging trend whereby companies have started sharing anonymized and aggregated data with third-party users who can then look for patterns or otherwise analyze the data in ways that lead to policy insights and other public good. The term was coined at the World Economic Forum meeting in Davos, in 2011, and has gained wider currency through Global Pulse, a United Nations data project that has popularized the notion of a global “data commons.”
Although still far from prevalent, some examples of corporate data sharing exist….
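
To make concrete what “anonymized and aggregated” sharing can mean in practice, here is a minimal, hypothetical sketch: individual records are rolled up to coarse groups, and any group small enough to risk identifying individuals is suppressed before release. The schema and threshold below are illustrative assumptions, not any company’s actual pipeline.

```python
from collections import Counter

# Hypothetical sketch of "anonymized and aggregated" release: individual
# records are rolled up to coarse region/week counts, and any group smaller
# than a minimum size is dropped before the data leaves the company.
MIN_GROUP_SIZE = 10  # assumption: suppress small groups that could identify individuals

def aggregate_for_release(records):
    """records: iterable of dicts with 'region' and 'week' keys (hypothetical schema).
    Returns coarse counts that are safer to share with third parties."""
    counts = Counter((r["region"], r["week"]) for r in records)
    return {group: n for group, n in counts.items() if n >= MIN_GROUP_SIZE}

# Example: only groups with at least MIN_GROUP_SIZE records are released.
sample = [{"region": "north", "week": "2014-W32"}] * 12 + \
         [{"region": "south", "week": "2014-W32"}] * 3
print(aggregate_for_release(sample))  # {('north', '2014-W32'): 12}
```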

Help us map the field

A more comprehensive mapping of the field of corporate data sharing would draw on a wide range of case studies and examples to identify opportunities and gaps, and to inspire more corporations to allow access to their data (consider, for instance, the GovLab Open Data 500 mapping for open government data). From a research point of view, the following questions would be important to ask:

  • What types of data sharing have proven most successful, and which ones least?
  • Who are the users of corporate shared data, and for what purposes?
  • What conditions encourage companies to share, and what are the concerns that prevent sharing?
  • What incentives can be created (economic, regulatory, etc.) to encourage corporate data philanthropy?
  • What differences (if any) exist between shared government data and shared private sector data?
  • What steps need to be taken to minimize potential harms (e.g., to privacy and security) when sharing data?
  • What’s the value created from using shared private data?

We (the GovLab, Global Pulse, and Data & Society) welcome your input to add to this list of questions, or to help us answer them by providing case studies and examples of corporate data philanthropy. Please add your examples below, use our Google Form, or email them to us at corporatedata@thegovlab.org”

Bridging the Knowledge Gap: In Search of Expertise


New paper by Beth Simone Noveck, The GovLab, for Democracy: “In the early 2000s, the Air Force struggled with a problem: Pilots and civilians were dying because of unusual soil and dirt conditions in Afghanistan. The soil was getting into the rotors of the Sikorsky UH-60 helicopters and obscuring the view of their pilots, a condition the military calls a “brownout.” According to the Air Force’s senior design scientist, the manager tasked with solving the problem didn’t know where to turn quickly to get help. As it turns out, the man practically sitting across from him had nine years of experience flying these Black Hawk helicopters in the field, but the manager had no way of knowing that. Civil service titles such as director and assistant director reveal little about skills or experience.
In the fall of 2008, the Air Force sought to fill in these kinds of knowledge gaps. The Air Force Research Laboratory unveiled Aristotle, a searchable internal directory that integrated people’s credentials and experience from existing personnel systems, public databases, and users themselves, thus making it easy to discover quickly who knew and had done what. Near-term budgetary constraints killed Aristotle in 2013, but the project underscored a glaring need in the bureaucracy.
Aristotle was an attempt to solve a challenge faced by every agency and organization: quickly locating expertise to solve a problem. Prior to Aristotle, the DOD had no coordinated mechanism for identifying expertise across 200,000 of its employees. Dr. Alok Das, the senior scientist for design innovation tasked with implementing the system, explained, “We don’t know what we know.”
This is a common situation. The government currently has no systematic way of getting help from all those with relevant expertise, experience, and passion. For every success on Challenge.gov—the federal government’s platform where agencies post open calls to solve problems for a prize—there are a dozen open-call projects that never get seen by those who might have the insight or experience to help. This kind of crowdsourcing is still too ad hoc, infrequent, and unpredictable—in short, too unreliable—for the purposes of policy-making.
Which is why technologies like Aristotle are so exciting. Smart, searchable expert networks offer the potential to lower the costs and speed up the process of finding relevant expertise. Aristotle never reached this stage, but an ideal expert network is a directory capable of including not just experts within the government, but also outside citizens with specialized knowledge. This leads to a dual benefit: accelerating the path to innovative and effective solutions to hard problems while at the same time fostering greater citizen engagement.
Could such an expert-network platform revitalize the regulatory-review process? We might find out soon enough, thanks to the Food and Drug Administration…”

Data Mining Reveals How Social Coding Succeeds (And Fails)


Emerging Technology from the arXiv: “Collaborative software development can be hugely successful or fail spectacularly. An analysis of the metadata associated with these projects is teasing apart the difference….
The process of developing software has undergone huge transformation in the last decade or so. One of the key changes has been the evolution of social coding websites, such as GitHub and BitBucket.
These allow anyone to start a collaborative software project that other developers can contribute to on a voluntary basis. Millions of people have used these sites to build software, sometimes with extraordinary success.
Of course, some projects are more successful than others. And that raises an interesting question: what are the differences between successful and unsuccessful projects on these sites?
Today, we get an answer from Yuya Yoshikawa at the Nara Institute of Science and Technology in Japan and a couple of pals at the NTT Laboratories, also in Japan. These guys have analysed the characteristics of over 300,000 collaborative software projects on GitHub to tease apart the factors that contribute to success. Their results provide the first insights into social coding success from this kind of data mining.
A social coding project begins when a group of developers outline a project and begin work on it. These are the “internal developers,” who have the power to update the software in a process known as a “commit.” The number of commits is a measure of the activity on the project.
External developers can follow the progress of the project by “starring” it, a form of bookmarking on GitHub. The number of stars is a measure of the project’s popularity. These external developers can also request changes, such as additional features and so on, in a process known as a pull request.
Yoshikawa and co begin by downloading the data associated with over 300,000 projects from the GitHub website. This includes the number of internal developers, the number of stars a project receives over time and the number of pull requests it gets.
The team then analyse the effectiveness of the project by calculating factors such as the number of commits per internal team member, the popularity of the project over time, the number of pull requests that are fulfilled and so on.
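To illustrate the kinds of per-project measures described here, the following is a hedged sketch of how such metrics might be computed from GitHub-style metadata; the field names are assumptions made for illustration, not the authors’ actual dataset or code.

```python
# Hypothetical sketch of the per-project metrics described above, computed
# from GitHub-style metadata. Field names are illustrative assumptions.

def project_metrics(project):
    """project: dict with 'internal_developers', 'commits', 'stars_over_time'
    (cumulative star counts) and 'pull_requests' (dicts with a 'merged' flag)."""
    devs = max(project["internal_developers"], 1)
    prs = project["pull_requests"]
    return {
        # Activity: average contribution per internal team member.
        "commits_per_internal_dev": project["commits"] / devs,
        # Popularity: growth in stars from first to last observation.
        "star_growth": project["stars_over_time"][-1] - project["stars_over_time"][0],
        # Sociality: share of external pull requests that were actually merged.
        "pr_fulfillment_rate": (
            sum(pr["merged"] for pr in prs) / len(prs) if prs else 0.0
        ),
    }

example = {
    "internal_developers": 4,
    "commits": 260,
    "stars_over_time": [5, 40, 120],
    "pull_requests": [{"merged": True}, {"merged": True}, {"merged": False}],
}
print(project_metrics(example))
```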
The results provide a fascinating insight into the nature of social coding. Yoshikawa and co say the number of internal developers on a project plays a significant role in its success. “Projects with larger numbers of internal members have higher activity, popularity and sociality,” they say….
Ref: arxiv.org/abs/1408.6012: Collaboration on Social Media: Analyzing Successful Projects on Social Coding”

The Age of Intelligent Cities: Smart Environments and Innovation-for-all Strategies


New book by Nicos Komninos: “This book concludes a trilogy that began with Intelligent Cities: Innovation, Knowledge Systems and Digital Spaces (Routledge 2002) and Intelligent Cities and Globalisation of Innovation Networks (Routledge 2008). Together these books examine intelligent cities as environments of innovation and collaborative problem-solving. In this final book, the focus is on planning, strategy and governance of intelligent cities.

Divided into three parts, each section elaborates upon complementary aspects of intelligent city strategy and planning. Part I is about the drivers and architectures of the spatial intelligence of cities, while Part II turns to planning processes and discusses top-down and bottom-up planning for intelligent cities. Amsterdam, Manchester, Stockholm and Helsinki are examples of cities that have used bottom-up planning through the gradual implementation of successive initiatives for regeneration. On the other hand, Living PlanIT, Neapolis in Cyprus, and the intelligent city projects of Saudi Arabia have started with the top-down approach, setting up urban operating systems and common central platforms. Part III focuses on intelligent city strategies: how cities should manage the drivers of spatial intelligence, create smart environments, mobilise communities, and offer new solutions to address city problems.
The book’s main findings relate to a series of models that capture fundamental aspects of how intelligent cities are made and operated. These models consider structure, function, planning, and strategies toward intelligent environments, as well as a model of governance based on mobilisation of communities, knowledge architectures, and innovation cycles.”

Rethinking Democracy


Dani Rodrik at Project Syndicate: “By many measures, the world has never been more democratic. Virtually every government at least pays lip service to democracy and human rights. Though elections may not always be free and fair, massive electoral manipulation is rare and the days when only males, whites, or the rich could vote are long gone. Freedom House’s global surveys show a steady increase from the 1970s in the share of countries that are “free” – a trend that the late Harvard political scientist Samuel Huntington dubbed the “third wave” of democratization….

A true democracy, one that combines majority rule with respect for minority rights, requires two sets of institutions. First, institutions of representation, such as political parties, parliaments, and electoral systems, are needed to elicit popular preferences and turn them into policy action. Second, democracy requires institutions of restraint, such as an independent judiciary and media, to uphold fundamental rights like freedom of speech and prevent governments from abusing their power. Representation without restraint – elections without the rule of law – is a recipe for the tyranny of the majority.

Democracy in this sense – what many call “liberal democracy” – flourished only after the emergence of the nation-state and the popular upheaval and mobilization produced by the Industrial Revolution. So it should come as no surprise that the crisis of liberal democracy that many of its oldest practitioners currently are experiencing is a reflection of the stress under which the nation-state finds itself….

In developing countries, it is more often the institutions of restraint that are failing. Governments that come to power through the ballot box often become corrupt and power-hungry. They replicate the practices of the elitist regimes they replaced, clamping down on the press and civil liberties and emasculating (or capturing) the judiciary. The result has been called “illiberal democracy” or “competitive authoritarianism.” Venezuela, Turkey, Egypt, and Thailand are some of the better-known recent examples.

When democracy fails to deliver economically or politically, perhaps it is to be expected that some people will look for authoritarian solutions. And, for many economists, delegating economic policy to technocratic bodies in order to insulate them from the “folly of the masses” almost always is the preferred approach.

Effective institutions of restraint do not emerge overnight; and it might seem like those in power would never want to create them. But if there is some likelihood that I will be voted out of office and that the opposition will take over, such institutions will protect me from others’ abuses tomorrow as much as they protect others from my abuses today. So strong prospects for sustained political competition are a key prerequisite for illiberal democracies to turn into liberal ones over time.

Optimists believe that new technologies and modes of governance will resolve all problems and send democracies centered on the nation-state the way of the horse-drawn carriage. Pessimists fear that today’s liberal democracies will be no match for the external challenges mounted by illiberal states like China and Russia, which are guided only by hardnosed realpolitik. Either way, if democracy is to have a future, it will need to be rethought.”

In democracy and disaster, emerging world embraces 'open data'


Jeremy Wagstaff at Reuters: “‘Open data’ – the trove of data-sets made publicly available by governments, organizations and businesses – isn’t normally linked to high-wire politics, but just may have saved last month’s Indonesian presidential elections from chaos.
Data is considered open when it’s released for anyone to use and in a format that’s easy for computers to read. The uses are largely commercial, such as the GPS data from U.S.-owned satellites, but data can range from budget numbers and climate and health statistics to bus and rail timetables.
It’s a revolution that’s swept the developed world in recent years as governments and agencies like the World Bank have freed up hundreds of thousands of data-sets for use by anyone who sees a use for them. Data.gov, a U.S. site, lists more than 100,000 data-sets, from food calories to magnetic fields in space.
Consultants McKinsey reckon open data could add up to $3 trillion worth of economic activity a year – from performance ratings that help parents find the best schools to governments saving money by releasing budget data and asking citizens to come up with cost-cutting ideas. All the apps, services and equipment that tap the GPS satellites, for example, generate $96 billion of economic activity each year in the United States alone, according to a 2011 study.
But so far open data has had a limited impact in the developing world, where officials are wary of giving away too much information, and where there’s the issue of just how useful it might be: for most people in emerging countries, property prices and bus schedules aren’t top priorities.
But last month’s election in Indonesia – a contentious face-off between a disgraced general and a furniture-exporter turned reformist – highlighted how powerful open data can be in tandem with a handful of tech-smart programmers, social media savvy and crowdsourcing.
“Open data may well have saved this election,” said Paul Rowland, a Jakarta-based consultant on democracy and governance…”