Open Data For Social Good: The Case For Better Transport Services


From an article at TechWeek Europe: “The growing focus on data protection, driven partly by stronger legislation and partly by consumer pressure, has put the debate on the benefits of open data somewhat on the back burner.

The continuing spate of high-profile data breaches, and the abuse of public trust in the form of a constant bombardment of automated calls, spam emails and clumsily ‘personalised’ advertising, has done little to further the open data agenda. In fact, it has left many consumers feeling lukewarm about the prospect of organisations opening up their data feeds, even with the promise of a better service in return.

That’s a worrying trend. In many industries, effective use of open data can lead to the development of solutions that address some of the major challenges populations face today, allowing for faster innovation and adaptability to change. There are significant ways in which individuals, and society as a whole, could benefit from open data, if organisations and governments get data sharing right.

Open data for transport

A good example is city transportation. Many metropolises face a major challenge – growing populations are placing pressure on current infrastructure systems, leading to congestion and inefficiency.

An open data system – where commuters use a single travel account for all travel transactions and information, whether that’s public transport, walking, cycling, using Uber and so on – would give the city unprecedented insight into how people commute and what’s behind their travel choices.

The key to engaging the public with this is the condition that data is used responsibly and for the greater good. Currently, Transport for London (TfL) operates a meet-in-the-middle model. Consumers can travel anonymously on the TfL network, with only the point of entry and point of exit being recorded, and the company provides that anonymised data to third-party app developers who can then use it to release useful travel applications.
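To make the data model concrete, here is a minimal sketch, assuming a simplified anonymised journey record with only entry and exit points (the field names and station names are hypothetical, not TfL’s actual feed):

```python
from collections import Counter

# Hypothetical anonymised journey records: only entry and exit points,
# with no identity attached (field names are illustrative, not TfL's schema).
journeys = [
    {"entry": "Brixton", "exit": "Oxford Circus"},
    {"entry": "Brixton", "exit": "Oxford Circus"},
    {"entry": "Stockwell", "exit": "King's Cross"},
]

# Aggregate origin-destination flows: the kind of statistic a
# third-party travel app could use without ever seeing who travelled.
flows = Counter((j["entry"], j["exit"]) for j in journeys)

for (origin, destination), count in flows.most_common():
    print(f"{origin} -> {destination}: {count} journeys")
```

Because no identity travels with the record, a developer can compute origin-destination flows without ever handling personal data.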

TfL doesn’t profit from sharing consumer data but it does enjoy the benefits that come with it. Third-party travel applications make it easier for commuters to use TfL’s network and make the service itself appear more efficient – in short, everyone benefits.

Mutual benefit

Let’s now imagine a scenario that takes this mutually beneficial relationship a step further, with consumers willingly giving up some information about themselves to the responsible parties (in this case, the city) and receiving a personalised service in return. In this scenario, the more information commuters provide to the system, the more useful the system can be to them.

Apart from providing personalised travel information and recommendations, such a system would have one more important benefit – it would enable cities to encourage greater social responsibility, extending the benefits from the individual to the community as a whole….(More)”

Big Data Quality: a Roadmap for Open Data


Paper by Paolo Ciancarini, Francesco Poggi and Daniel Russo: “Open Data (OD) is one of the most discussed issues of Big Data, and it has attracted the joint interest of public institutions, citizens and private companies since 2009. In addition to transparency in public administrations, another key objective of these initiatives is to allow the development of innovative services for solving real-world problems, creating value in some positive and constructive way. However, the massive amount of freely available data has not yet brought the expected effects: as of today, there is no application that has exploited the potential provided by large and distributed information sources in a non-trivial way, nor has any service substantially changed the lives of people for the better. The era of a new generation of applications based on open data is yet to come. In this context, we observe that OD quality is one of the major threats to achieving the goals of the OD movement. The starting point of this study is the quality of the OD released by the five Constitutional offices of Italy. W3C standards on OD are widely known and accepted in Italy through the Italian Digital Agency (AgID), and under the most recent Italian laws the Public Administration may release OD according to the AgID standards. Our exploratory study aims to assess the quality of such releases and the real implementations of OD. The outcome suggests the need for a drastic improvement in OD quality. Finally, we highlight some key quality principles for OD, and propose a roadmap for further research….(more)”
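To make the quality question concrete, the following is a minimal sketch of the kind of automated check such a study might run, scoring a dataset’s metadata against a simplified version of the 5-star open data scheme that W3C guidance builds on (the metadata fields are invented for illustration):

```python
# Minimal sketch: score a dataset against a simplified 5-star open data
# ladder (licensed, structured, open format, URIs, linked data).
# The metadata fields are hypothetical, not a real catalogue schema.

MACHINE_READABLE = {"csv", "json", "xml", "rdf"}
OPEN_FORMATS = MACHINE_READABLE  # e.g. xls is structured but not an open format

def five_star_score(meta: dict) -> int:
    stars = 0
    if meta.get("license"):                                   # 1: openly licensed
        stars = 1
        if meta.get("format") in {"xls"} | MACHINE_READABLE:  # 2: structured data
            stars = 2
            if meta.get("format") in OPEN_FORMATS:            # 3: open format
                stars = 3
                if meta.get("uses_uris"):                     # 4: URIs to denote things
                    stars = 4
                    if meta.get("links_other_data"):          # 5: linked to other data
                        stars = 5
    return stars

print(five_star_score({"license": "CC-BY", "format": "csv"}))  # -> 3
```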

Why Didn’t E-Gov Live Up To Its Promise?


Excerpt from the report “Delivering on Digital: The Innovators and Technologies That Are Transforming Government” by William Eggers: “Digital is becoming the new normal. Digital technologies have quietly and quickly pervaded every facet of our daily lives, transforming how we eat, shop, work, play and think.

An aging population, millennials assuming managerial positions, budget shortfalls and ballooning entitlement spending all will significantly impact the way government delivers services in the coming decade, but no single factor will alter citizens’ experience of government more than the pure power of digital technologies.

Ultimately, digital transformation means reimagining virtually every facet of what government does, from headquarters to the field, from health and human services to transportation and defense.

By now, some of you readers with long memories can’t be blamed for feeling a sense of déjà vu.

After all, technology was supposed to transform government 15 years ago; an “era of electronic government” was poised to make government faster, smaller, digitized and increasingly transparent.

Many analysts (including yours truly, in a book called “Government 2.0”) predicted that by 2016, digital government would long since have become a reality. In practice, the “e-gov revolution” has been an exceedingly slow-moving one. Sure, technology has improved some processes, and scores of public services have moved online, but the public sector has hardly been transformed.

What initial e-gov efforts managed was to construct pretty storefronts—in the form of websites—as the entrance to government systems stubbornly built for the industrial age. Few fundamental changes altered the structures, systems and processes of government behind those websites.

With such halfhearted implementation, the promise of cost savings from information technology failed to materialize, instead disappearing into the black hole of individual agency and division budgets. Government websites mirrored departments’ short-term orientation rather than citizens’ long-term needs. In short, government became wired—but not transformed.

So why did the reality of e-gov fail to live up to the promise?

For one thing, we weren’t yet living in a digitized economy—our homes, cars and workplaces were still mostly analog—and the technology wasn’t as far along as we thought; without the innovations of cloud computing and open-source software, for instance, the process of upgrading giant, decades-old legacy systems proved costly, time-consuming and incredibly complex.

And not surprisingly, most governments—and private firms, for that matter—lacked deep expertise in managing digital services. What we now call “agile development”—an iterative development model that allows for constant evolution through recurrent testing and evaluation—was not yet mainstreamed.

Finally, most governments explicitly decided to focus first on the Hollywood storefront and postpone the bigger and tougher issues of reengineering underlying processes and systems. When budgets nosedived—even before the recession—staying solvent and providing basic services took precedence over digital transformation.

The result: Agencies automated some processes but failed to transform them; services were put online, but rarely were they focused logically and intelligently around the citizen.

Given this history, it’s natural to be skeptical after years of hype about government’s amazing digital future. But conditions on the ground (and in the cloud) are finally in place for change, and citizens are not only ready for digital government—many are demanding it.

Digital-native millennials are now consumers of public services, and millions of them work in and around government; they won’t tolerate balky and poorly designed systems, and they’ll let the world know through social media. Gen Xers and baby boomers, too, have become far more savvy consumers of digital products and services….(More)”

While governments talk about smart cities, it’s citizens who create them


Carlo Ratti at The Conversation: “The Australian government recently released an ambitious Smart Cities Plan, which suggests that cities should be first and foremost for people:

If our cities are to continue to meet their residents’ needs, it is essential for people to engage and participate in planning and policy decisions that have an impact on their lives.

Such statements are a good starting point – and should probably become central to Australia’s implementation efforts. A lot of knowledge has been collected over the past decade from successful and failed smart cities experiments all over the world; reflecting on them could provide useful information for the Australian government as it launches its national plan.

What is a smart city?

But before embarking on such a review, it would help to start from a definition of “smart city”.

The term has been used and abused in recent years, so much so that today it has lost meaning. It is often used to encompass disparate applications: we hear people talk and write about “smart city” when they refer to anything from citizen engagement to Zipcar, from open data to Airbnb, from smart biking to broadband.

Where to start with a definition? It is a truism to say the internet has transformed our lives over the past 20 years. Everything in the way we work, meet, mate and so on is very different today than it was just a few decades ago, thanks to a network of connectivity that now encompasses most people on the planet.

In a similar way, we are today at the beginning of a new technological revolution: the internet is entering physical space – the very space of our cities – and is becoming the Internet of Things; it is opening the door to a new world of applications that, as with the first wave of the internet, can incorporate many domains….

What should governments do?

In the above technological context, what should governments do? Over the past few years, the first wave of smart city applications followed technological excitement.

For instance, some of Korea’s early experiments such as Songdo City were engineered by the likes of Cisco, with technology deployment assisted by top-down policy directives.

In a similar way, in 2010, Rio de Janeiro launched the Integrated Centre of Command and Control, engineered by IBM. It’s a large control room for the city, which collects real-time information from cameras and myriad sensors suffused in the urban fabric.

Such approaches revealed many shortcomings, most notably the lack of civic engagement. It is as if they thought of the city simply as a “computer in open air”. These approaches led to several backlashes in the research and academic community.

A more interesting lesson can come from the US, where the focus is more on developing a rich Internet of Things innovation ecosystem. There are many initiatives fostering spaces – digital and physical – for people to come together and collaborate on urban and civic innovations….

That isn’t to say that governments should take a completely hands-off approach to urban development. Governments certainly have an important role to play. This includes supporting academic research and promoting applications in fields that might be less appealing to venture capital – unglamorous but nonetheless crucial domains such as municipal waste or water services.

The public sector can also promote the use of open platforms and standards in such projects, which would speed up adoption in cities worldwide.

Still, the overarching goal should always be to focus on citizens. They are in the best position to determine how to transform their cities and to make decisions that will have – as the Australian Smart Cities Plan puts it – “an impact on their lives”….(more)”

Private Data and the Public Good


Gideon Mann‘s remarks on the occasion of the Robert Khan distinguished lecture at The City College of New York on 5/22/16: “…and opportunities about a specific aspect of this relationship, the broader need for computer science to engage with the real world. Right now, a key aspect of this relationship is being built around the risks and opportunities of the emerging role of data.

Ultimately, I believe that these relationships, between computer science and the real world, between data science and real problems, hold the promise to vastly increase our public welfare. And today, we, the people in this room, have a unique opportunity to debate and define a more moral data economy….

The hybrid research model proposes something different. The hybrid research model embeds, as it were, researchers as practitioners. The thought was always that you would be going about your regular run of business, would face a need to innovate to solve a crucial problem, and would do something novel. At that point, you might choose to work some extra time and publish a paper explaining your innovation. In practice, this model rarely works as expected. Tight deadlines mean the innovation that people do in their normal progress of business is incremental.

This model separates research from scientific publication and shortens the time window of research to what can be realized within a few years. For me, this always felt like a tremendous loss with respect to the older, so-called “ivory tower” research model. It didn’t seem at all clear how this kind of model would produce the sea change of thought engendered by Shannon’s work, nor did it seem that Claude Shannon would ever want to work there. This kind of environment would never support freestanding wonder, like the robot mouse that Shannon worked on. Moreover, I always believed that crucial to research is publication and participation in the scientific community. Without this engagement, it feels like something different — innovation perhaps.

It is clear that the monopolistic environment that enabled AT&T to support this ivory tower research doesn’t exist anymore.

Now, the hybrid research model was one model of research at Google, but there is another model as well: the moonshot model, as exemplified by Google X. Google X brought together focused research teams to drive research and development around a particular project — Google Glass and the self-driving car being two notable examples. Here the focus isn’t research but building a new product, with research as potentially a crucial blocking issue. Since the goal of Google X is directly to develop a new product, by definition they don’t publish papers along the way, but they’re not as tied to short-term deliverables as the rest of Google is. However, they are again decidedly un-Bell-Labs-like — a secretive, tightly focused, non-publishing group. DeepMind is a similarly constituted initiative — working, for example, on a best-in-the-world Go-playing algorithm, with publications happening sparingly.

Unfortunately, both of these approaches, the hybrid research model and the moonshot model, stack the deck towards a particular kind of research: research that leads to relatively short-term products that generate corporate revenue. While this kind of research is good for society, it isn’t the only kind of research that we need. We urgently need research that is long-term, and that is undertaken even without a clear local financial impact. In some sense this is a “tragedy of the commons”, where a shared public good (the commons) is not supported because everyone can benefit from it without giving back. Academic research is thus a non-rival, non-excludable good, and will therefore reasonably be underfunded. In certain cases, this takes on an ethical dimension — particularly in health care, where the choice of what diseases to study and address has a tremendous potential to affect human life. Should we research heart disease or malaria? This decision makes a huge impact on global human health, but is vastly informed by the potential profit from each of these various medicines….

Private Data means research is out of reach

The larger point that I want to make is that in the absence of places where long-term research can be done in industry, academia has a tremendous potential opportunity. Unfortunately, it is actually quite difficult to do the work that needs to be done in academia, since many of the resources needed to push the state of the art are only found in industry: in particular, data.

Of course, academia also lacks machine resources, but this is a simpler problem to fix: it’s a matter of money. Resources from the government could go towards enabling research groups to build their own data centers or to acquire computational resources from the market, e.g. Amazon. This is aided by the compute philanthropy that Google and Microsoft practice, granting compute cycles to academic organizations.

But the data problem is much harder to address. The data being collected and generated at private companies could enable amazing discoveries and research, but it is impossible for academics to access. This lack of access to private data has effects far more significant than inhibiting research. In particular, the consumer-level data collected by social networks and internet companies could do much more than ad targeting.

In public health alone — suicide prevention, addiction counseling, mental health monitoring — there is enormous potential in using our online behavior to aid the most needy, and academia and non-profits are set up to enable this work, while companies are not.

To give one example: anorexia and other eating disorders are vicious killers. 20 million women and 10 million men suffer from a clinically significant eating disorder at some point in their life, and eating disorders have the highest mortality rate of any mental health disorder — with a jaw-dropping estimated mortality rate of 10%, both directly from injuries sustained by the disorder and from suicide resulting from it.

Eating disorders are particular in that sufferers often seek out confirmatory information: blogs, images and pictures that glorify and validate what sufferers see as “lifestyle” choices. Browsing behavior that seeks out images and guidance on how to starve yourself is a key indicator that someone is suffering. Tumblr, Pinterest and Instagram are places where people host and seek out this information. Tumblr has tried to help address this severe mental health issue by banning blogs that advocate for self-harm and by adding PSA announcements to searches for queries related to anorexia. But clearly this is not the be-all and end-all of the work that could be done to detect and assist people at risk of dying from eating disorders. Moreover, this data could also help us understand the nature of those disorders themselves…..
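A query-time intervention of the kind Tumblr deployed can be sketched in a few lines; the term list and message below are illustrative placeholders, and any real system would need a clinically informed lexicon and careful evaluation:

```python
from typing import Optional

# Illustrative sketch of a query-time PSA trigger, loosely modelled on
# Tumblr's approach. The term list and message are placeholders, not a
# clinically validated lexicon.
RISK_TERMS = {"anorexia", "thinspo", "how to starve"}

PSA = ("Everything okay? If you or someone you know is struggling with "
       "an eating disorder, help is available.")

def psa_for(query: str) -> Optional[str]:
    """Return a PSA message if the query matches a risk term, else None."""
    q = query.lower()
    if any(term in q for term in RISK_TERMS):
        return PSA
    return None

print(psa_for("thinspo blogs"))   # -> the PSA text
print(psa_for("pasta recipes"))   # -> None
```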

There is probably a role for a data ombudsman within private organizations — someone to protect the interests of the public’s data inside an organization, much like a ‘public editor’ at a newspaper. This person would be there to protect and articulate the interests of the public, which probably means working both sides: making sure a company’s data is used for public good where appropriate, and making sure the public’s ‘right’ to privacy is appropriately safeguarded (and probably making sure the public is informed when their data is compromised).

Next, we need a platform to enable collaboration around social good between companies, and between companies and academics. This platform would give trusted users access to a wide variety of data and speed up the process of research.

Finally, I wonder if there is a way that government could support research sabbaticals inside of companies. Clearly, the opportunities for this research far outstrip what is currently being done…(more)”

Big data: Issues for an international political sociology of the datafication of worlds


Paper by Madsen, A.K.; Flyverbom, M.; Hilbert, M. and Ruppert, Evelyn: “The claim that big data can revolutionize strategy and governance in the context of international relations is increasingly hard to ignore. Scholars of international political sociology have mainly discussed this development through the themes of security and surveillance. The aim of this paper is to outline a research agenda that can be used to raise a broader set of sociological and practice-oriented questions about the increasing datafication of international relations and politics. First, it proposes a way of conceptualizing big data that is broad enough to open fruitful investigations into the emerging use of big data in these contexts. This conceptualization includes the identification of three moments contained in any big data practice. Secondly, it suggests a research agenda built around a set of sub-themes that each deserve dedicated scrutiny when studying the interplay between big data and international relations along these moments. Through a combination of these moments and sub-themes, the paper suggests a roadmap for an international political sociology of the datafication of worlds….(more)”

#OpenZika project


World Community Grid: “In February 2016, the World Health Organization declared the Zika virus to be a global public health emergency due to its rapid spread and new concerns about its link to a rise in neurological conditions.

The virus is rapidly spreading in new geographic areas such as the Americas, where people have not been previously exposed to the disease and therefore have little immunity to it. In April 2016, the Centers for Disease Control announced that a rise in severe neurological disorders, especially in children, has been linked to the Zika virus. Some pregnant women who have contracted the Zika virus have given birth to infants with a condition called microcephaly, which results in brain development issues typically leading to severe mental deficiencies. In other cases, paralysis and other neurological problems can occur, even in adults.

Problem

Currently, there is no vaccine to provide immunity to the disease and no antiviral drug for curing Zika, although various efforts are underway. Even though the virus was first identified in 1947, there has been little research since then, because the symptoms of the infection are usually mild. However, new data on links between Zika and microcephaly or other neurological issues have revealed that the disease may not be so benign, prompting the need for intensified research efforts.

Proposed Solution

The OpenZika project on World Community Grid aims to identify drug candidates to treat the Zika virus in someone who has been infected. The project will target proteins that the Zika virus likely uses to survive and spread in the body, based on what is known from similar diseases, such as dengue virus and yellow fever. In order to develop an anti-Zika drug, researchers need to identify which of millions of chemical compounds might be effective at interfering with these key proteins. The effectiveness of each compound will be tested in virtual experiments, called “docking calculations,” performed on World Community Grid volunteers’ computers and Android devices. These calculations would help researchers focus on the most likely compounds that may eventually lead to an antiviral medicine….(More)”
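A rough sketch of the screening logic behind such a virtual experiment follows; the scoring function is a runnable placeholder standing in for a real docking engine (the project’s actual software is not specified in this excerpt), and the compound IDs are invented:

```python
import random

def dock_score(compound_id: str, protein: str) -> float:
    """Placeholder for a real docking engine (e.g., AutoDock Vina).

    A real work unit would compute a binding-energy estimate in
    kcal/mol; here we fake one so the sketch is runnable.
    """
    random.seed(hash((compound_id, protein)))
    return random.uniform(-12.0, -2.0)

def screen(compounds, protein, threshold=-8.0):
    """Keep compounds whose predicted binding energy beats the threshold."""
    hits = []
    for c in compounds:
        if dock_score(c, protein) <= threshold:  # lower = stronger predicted binding
            hits.append(c)
    return hits

# Hypothetical compound library and target name, for illustration only.
library = [f"ZINC{n:08d}" for n in range(1000)]
print(len(screen(library, "ZIKV_NS5_polymerase")), "candidate hits")
```

Distributing millions of such independent scoring jobs across volunteers’ devices is what makes the grid approach attractive: each work unit is small, and the results are simply merged and ranked.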

Real-Time Data Can Improve Traffic Management in Major Cities


World Bank: “Traffic management agencies and city planners will soon have access to real-time data to better manage traffic flows on the streets of Cebu City and Metro Manila.

Grab, the World Bank, and the Department of Transportation and Communications (DOTC) today launched the OpenTraffic initiative, which will help address traffic congestion and road safety challenges.

Grab is the leading ride-hailing platform in Southeast Asia and operates in 30 cities across six countries – Singapore, Indonesia, Philippines, Malaysia, Thailand, and Vietnam.

Grab and the World Bank have been developing free, open-source tools that translate Grab’s voluminous driver GPS data into traffic statistics, including speeds, flows, and intersection delays. These statistics power big data open source tools such as OpenTraffic, for analysing traffic speeds and flows, and DRIVER, for identifying road incident blackspots and improving emergency response. Grab and the World Bank plan to make OpenTraffic available to other Southeast Asian city governments in the near future.
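The exact pipeline is not described here, but the core transformation, from map-matched GPS pings to per-segment speed statistics, can be sketched as follows (the field and segment names are invented for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical driver GPS pings, already map-matched to road segments.
# Real pipelines must first snap raw lat/lon traces to the road network.
pings = [
    {"segment": "EDSA_km12", "speed_kph": 14.0, "hour": 8},
    {"segment": "EDSA_km12", "speed_kph": 11.5, "hour": 8},
    {"segment": "Osmena_km3", "speed_kph": 42.0, "hour": 8},
]

# Aggregate into the kind of statistic OpenTraffic exposes:
# mean speed per segment per hour of day.
by_key = defaultdict(list)
for p in pings:
    by_key[(p["segment"], p["hour"])].append(p["speed_kph"])

for (segment, hour), speeds in sorted(by_key.items()):
    print(f"{segment} @ {hour:02d}:00 -> {mean(speeds):.1f} km/h "
          f"({len(speeds)} observations)")
```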

“Using big data is one of the potential solutions to the challenges faced by our transport systems. Through this we can provide accurate, real-time information for initiatives that can help alleviate traffic congestion and improve road safety,” said DOTC Secretary Joseph Emilio A. Abaya.

Last month, the World Bank and DOTC helped train more than 200 government staff from the agency, the Philippine National Police (PNP), the Metro Manila Development Authority (MMDA), the Department of Public Works and Highways (DPWH), and the Cebu City Transportation Office on the use of the OpenTraffic platform….In the near future, traffic statistics derived through OpenTraffic will be fed into another application called “DRIVER” or Data for Road Incident Visualization, Evaluation, and Reporting for road incident recording and analysis. This application, developed by the World Bank, will help engineering units to prioritize crash-prone areas for interventions and improve emergency response….(More)”
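Blackspot identification can be approximated with simple spatial binning; the sketch below counts incidents per grid cell and ranks the worst cells, a crude stand-in for whatever clustering DRIVER actually uses (the coordinates are made up):

```python
from collections import Counter

# Made-up incident coordinates (lat, lon); a real system would ingest
# police and hospital crash reports.
incidents = [(14.5995, 120.9842), (14.5996, 120.9840), (14.6760, 121.0437)]

CELL = 0.001  # roughly 100 m grid cells at this latitude

def cell_of(lat: float, lon: float) -> tuple:
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# Rank candidate blackspots: cells with the most recorded incidents.
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} incidents")
```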

An App to Save Syria’s Lost Generation? What Technology Can and Can’t Do


From an article in Foreign Affairs: “In January this year, when the refugee and migrant crisis in Europe had hit its peak—more than a million people had crossed into Europe over the course of 2015—the U.S. State Department and Google hosted a forum of over 100 technology experts. The goal was to “bridge the education gap for Syrian refugee children.” Speaking to the group assembled at Stanford University, Deputy Secretary of State Antony Blinken announced a $1.7 million prize “to develop a smartphone app that can help Syrian children learn how to read and improve their wellbeing.” The competition, known as EduApp4Syria, is being run by the Norwegian Agency for Development Cooperation (Norad) and is supported by the Australian government and the French mobile company Orange.

Less than a month later, a group called Techfugees brought together over 100 technologists for a daylong brainstorm in New York City focused exclusively on education solutions. “We are facing the largest refugee crisis since World War II,” said U.S. Ambassador to the United Nations Samantha Power to open the conference. “It is a twenty-first-century crisis and we need a twenty-first-century solution.” Among the more promising, according to Power, were apps that enable “refugees to access critical services,” new “web platforms connecting refugees with one another,” and “education programs that teach refugees how to code.”

For example, the nonprofit PeaceGeeks created the Services Advisor app for the UN Refugee Agency, which maps the location of shelters, food distribution centers, and financial services in Jordan….(More)”

Open data + increased disclosure = better public-private partnerships


David Bloomgarden and Georg Neumann at Fomin Blog: “The benefits of open and participatory public procurement are increasingly being recognized by international bodies such as the Group of 20 major economies, the Organisation for Economic Co-operation and Development, and multilateral development banks. Value for money, more competition, and better goods and services for citizens all result from increased disclosure of contract data. Greater openness is also an effective tool to fight fraud and corruption.

However, because public-private partnerships (PPPs) are planned over a long timeframe and involve a large number of groups, implementing greater levels of openness in disclosure is complicated. This complexity can be a challenge to good design. Finding a structured and transparent approach to managing PPP contract data is fundamental for a project to be accepted and used by its local community….

In open contracting, all data is disclosed during the public procurement process — from the planning stage, to the bidding and awarding of the contract, to the monitoring of the implementation. A global open source data standard is used to publish that data, and it is already being implemented in countries as diverse as Canada, Paraguay, and Ukraine. Using open data throughout the contracting process provides opportunities to innovate in managing bids, fixing problems, and integrating feedback as needed. Open contracting contributes to the overall social and environmental sustainability of infrastructure investments.
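For orientation, a heavily abridged release in the Open Contracting Data Standard looks roughly like the sketch below; the real standard defines many more fields and validation rules, and all values here are invented:

```python
import json

# Heavily abridged OCDS-style release; the real standard defines many
# more fields and validation rules. All values here are invented.
release = {
    "ocid": "ocds-abc123-000-00001",  # globally unique contracting process ID
    "id": "00001-award",
    "date": "2016-05-01T00:00:00Z",
    "tag": ["award"],                 # which stage of the process this release describes
    "buyer": {"name": "Example City Transport Authority"},
    "awards": [{
        "id": "award-1",
        "value": {"amount": 300000000, "currency": "USD"},
        "suppliers": [{"name": "Example Construction Consortium"}],
    }],
}

print(json.dumps(release, indent=2))
```

Publishing each stage as a separate, linkable release is what shifts scrutiny from the inputs of a PPP to its outputs over the contract’s life.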

In the case of Mexico’s airport, the project publishes details of awarded contracts, including visualizing the flow of funds and detailing the full amounts of awarded contracts and renewable agreements. Standardized, timely, and open data that follow global standards such as the Open Contracting Data Standard will make this information useful for analysis of value for money, cost-benefit, sustainability, and monitoring performance. Crucially, open contracting will shift the focus from the inputs into a PPP, to the outputs: the goods and services being delivered.

Benefits of open data for PPPs

We think that better and open data will lead to better PPPs. Here’s how:

1. Using user feedback to fix problems

The Brazilian state of Minas Gerais has been a leader in transparent PPP contracts with full proactive disclosure of the contract terms, as well as of other relevant project information—a practice that puts a government under more scrutiny but makes for better projects in the long run.

According to Marcos Siqueira, former head of the PPP Unit in Minas Gerais, “An adequate transparency policy can provide enough information to users so they can become contract watchdogs themselves.”

For example, a public-private contract was signed in 2014 to build a $300 million waste treatment plant for 2.5 million people in the metropolitan area of Belo Horizonte, the capital of Minas Gerais. As the team members conducted appraisals, they disclosed them on the Internet. In addition, the team held around 20 public meetings and identified all the stakeholders in the project. One notable result of the sharing and discussion of this information was the relocation of the facility to a less-populated area. When the project went to the bidding phase, it was much closer to the expectations of its various stakeholders.

2. Making better decisions on contracts and performance

Chile has been a leader in developing PPPs (which it refers to as concessions) for several decades, in a range of sectors: urban and inter-urban roads, seaports, airports, hospitals, and prisons. The country tops the list for the best enabling environment for PPPs in Latin America and the Caribbean, as measured by Infrascope, an index produced by the Economist Intelligence Unit and the Multilateral Investment Fund of the IDB Group.

Chile’s distinction is that it discloses information on performance of PPPs that are underway. The government’s Concessions Unit regularly publishes summaries of the projects during their different phases, including construction and operation. The reports are non-technical, yet include all the necessary information to understand the scope of the project…(More)”