Open Government: Opportunities and Challenges for Public Governance


New volume of the Public Administration and Information Technology series: “Given this global context, and taking into account the needs of both academics and practitioners, it is the intention of this book to shed light on the open government concept and, in particular:
• To provide comprehensive knowledge of recent major developments of open government around the world.
• To analyze the importance of open government efforts for public governance.
• To provide insightful analysis about those factors that are critical when designing, implementing and evaluating open government initiatives.
• To discuss how contextual factors affect open government initiatives’ success or failure.
• To explore the existence of theoretical models of open government.
• To propose strategies to move forward and to address future challenges in an international context.”

The Web at 25 in the U.S.


Paper by Lee Rainie and Susannah Fox from Pew: “The overall verdict: The internet has been a plus for society and an especially good thing for individual users… This report is the first part of a sustained effort through 2014 by the Pew Research Center to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. Berners-Lee wrote a paper on March 12, 1989 proposing an “information management” system that became the conceptual and architectural structure for the Web. He eventually released the code for his system—for free—to the world on Christmas Day in 1990. It became a milestone in easing the way for ordinary people to access documents and interact over a network of computers called the internet—a system that linked computers and that had been around for years. The Web became especially appealing after Web browsers were perfected in the early 1990s to facilitate graphical displays of pages on those linked computers.”

Get Smart: Commission brings “open planning” movement to Europe to speed spread of smart cities


Press Release: “The European Commission is calling on those involved in creating smart cities to publish their efforts in order to help build an open planning movement from the ground up.
The challenge is being issued to city administrations, small and large companies and other organisations to go public with their ICT, energy and mobility plans, so that all parties can learn from each other and grow the smart city market. Through collaboration as well as traditional competition, Europe will get smarter, more competitive and more sustainable.
The Commission is looking for both new commitments to “get smart” and for interested parties to share their current and past successes. Sharing these ideas will feed the European Innovation Partnership on Smart Cities and Communities (see IP/13/1159 and MEMO/13/1049) and networks such as the Smart Cities Stakeholder Platform, the Green Digital Charter, the Covenant of Mayors, and CIVITAS.
What’s in it for me?
If you are working in the smart cities field, joining the open planning movement will help you find the right partners, get better access to finance and make it easier to learn from your peers. You will help grow the marketplace you work in, and create export opportunities outside of Europe.
If you live in a city, you will benefit sooner from better traffic flows, greener buildings, and cheaper or more convenient services.
European Commission Vice President Neelie Kroes said: “For those of us living in cities, we need to make sure they are smart cities. Nothing else makes sense. And nothing else is such a worldwide economic opportunity – so we need to get sharing!”
Energy Commissioner Günther Oettinger said: “Cities and Communities can only get smart if mayors and governors are committed to apply innovative industrial solutions”.
In June 2014 the Commission will seek to analyse, group and promote the best plans and initiatives.”

The Problem with Easy Technology


New post by Tim Wu at The New Yorker: “In the history of marketing, there’s a classic tale that centers on the humble cake mix. During the nineteen-fifties, there were differences of opinion over how “instant” powdered cake mixes should be, and, in particular, over whether adding an egg ought to be part of the process. The first cake mixes, invented in the nineteen-thirties, merely required water, and some people argued that this approach, the easiest, was best. But others thought bakers would want to do more. Urged on by marketing psychologists, Betty Crocker herself began to instruct housewives to “add water, and two of your own fresh eggs.”…
The choice between demanding and easy technologies may be crucial to what we have called technological evolution. We are, as I argued in my most recent piece in this series, self-evolving. We make ourselves into what we, as a species, will become, mainly through our choices as consumers. If you accept these premises, our choice of technological tools becomes all-important; by the logic of biological atrophy, our unused skills and capacities tend to melt away, like the tail of an ape. It may sound overly dramatic, but the use of demanding technologies may actually be important to the future of the human race.
Just what is a demanding technology? Three elements are defining: it is technology that takes time to master, whose usage is highly occupying, and whose operation includes some real risk of failure. By this measure, a piano is a demanding technology, as is a frying pan, a programming language, or a paintbrush. So-called convenience technologies, in contrast—like instant mashed potatoes or automatic transmissions—usually require little concentrated effort and yield predictable results.
There is much to be said for the convenience technologies that have remade human society over the past century. They often open up life’s pleasures to a wider range of people (downhill skiing, for example, can be exhausting without lifts). They also distribute technological power more widely: consider that, nowadays, you don’t need special skills to take pretty good photos, or to capture a video of police brutality. Nor should we neglect that promise first made to all Americans in the nineteen-thirties: freedom from a life of drudgery to focus on what we really care about. Life is hard enough; do we need to be churning our own butter? Convenience technologies promised more space in our lives for other things, like thought, reflection, and leisure.
That, at least, is the idea. But, even on its own terms, convenience technology has failed us. Take that promise of liberation from overwork. In 1964, Life magazine, in an article about “Too Much Leisure,” asserted that “there will certainly be a sharp decline in the average work week” and that “some prophets of what automation is doing to our economy think we are on the verge of a 30-hour week; others as low as 25 or 20.” Obviously, we blew it. Our technologies may have made us prosthetic gods, yet they have somehow failed to deliver on the central promise of free time. The problem is that, as every individual task becomes easier, we demand much more of both ourselves and others. Instead of fewer difficult tasks (writing several long letters) we are left with a larger volume of small tasks (writing hundreds of e-mails). We have become plagued by a tyranny of tiny tasks, individually simple but collectively oppressive. And, when every task in life is easy, there remains just one profession left: multitasking.
The risks of biological atrophy are even more important. Convenience technologies supposedly free us to focus on what matters, but sometimes the part that matters is what gets eliminated. Everyone knows that it is easier to drive to the top of a mountain than to hike; the views may be the same, but the feeling never is. By the same logic, we may evolve into creatures that can do more but find that what we do has somehow been robbed of the satisfaction we hoped it might contain.
The project of self-evolution demands an understanding of humanity’s relationship with tools, which is mysterious and defining. Some scientists, like the archaeologist Timothy Taylor, believe that our biological evolution was shaped by the tools our ancestors chose eons ago. Anecdotally, when people describe what matters to them, second only to human relationships is usually the mastery of some demanding tool. Playing the guitar, fishing, golfing, rock-climbing, sculpting, and painting all demand mastery of stubborn tools that often fail to do what we want. Perhaps the key to these and other demanding technologies is that they constantly require new learning. The brain is stimulated and forced to change. Conversely, when things are too easy, as a species we may become like unchallenged schoolchildren, sullen and perpetually dissatisfied.
I don’t mean to insist that everything need be done the hard way, or that we somehow need to suffer like our ancestors to achieve redemption. It isn’t somehow wrong to use a microwave rather than a wood fire to reheat leftovers. But we must take seriously our biological need to be challenged, or face the danger of evolving into creatures whose lives are more productive but also less satisfying.
There have always been groups, often outcasts, who have insisted on adhering to harder ways of doing some things. Compared to Camrys, motorcycles are unreliable, painful, and dangerous, yet some people cannot leave them alone. It may seem crazy to use command-line or plain-text editing software in an age of advanced user interfaces, but some people still do. In our times, D.I.Y. enthusiasts, hackers, and members of the maker movement are some of the people who intuitively understand the importance of demanding tools, without rejecting the idea that technology can improve the human condition. Derided for lacking a “political strategy,” they nonetheless realize that there are far more important agendas than the merely political. Whether they know it or not, they are trying to work out the future of what it means to be human, and, along the way, trying to find out how to make that existence worthwhile.”

ReThinking red tape


New report by Deloitte on “Influencing behaviors to achieve public outcomes”: “Governments employ many policy levers to provide for the safety and welfare of citizens. Through taxes, subsidies, laws, and regulations, governments help shape the options available to us and the choices that we ultimately make. But, in an era of fiscal and regulatory restraint, policymakers are quickly realizing that these traditional methods have their limitations, particularly the associated costs.

Now, government leaders are turning to the increasingly politically acceptable discipline of behavioral economics as a cheap accompaniment or alternative to traditional policymaking. Popularized in recent years by a number of best-selling books, behavioral approaches have been used successfully in a number of private and public sector organizations to influence citizens to make better choices. Deloitte’s GovLab examines successful cases from across the globe and provides practical advice for determining when behavioral approaches can add value and help achieve a positive societal outcome, in the report ReThinking red tape: Influencing behaviors to achieve public outcomes.”

How Cabinet Size and Legislative Control Shape the Strength of Transparency Laws


New Article by Gregory Michener in Governance: “Prevailing thinking surrounding the politics of secrecy and transparency is biased by assumptions regarding single-party and small coalition governments. Here, the “politics of secrecy” dominates: Leaders delay or resist strong transparency and freedom of information (FOI) policies when they control parliament, and yield to strong laws because of imposition, symbolic ambition, or concessions when they do not. In effect, leaders weigh the benefits of secrecy against gains in monitorial capacity. Their support for strong transparency policies grows as the number of parties in their cabinet rises. So while the costs of surrendering secrecy trump the benefits of strong transparency reforms in single-party governments, in broad multiparty coalitions leaders trade secrecy for tools to monitor coalition “allies.” Drawing on vivid international examples, patterns of FOI reform in Latin America, and an in-depth study of FOI in Brazil, this article generates new theoretical insights into transparency and the “politics of monitoring.”

New study proves economic benefits of open data for Berlin


ePSI Platform: “The study “Digitales Gold: Nutzen und Wertschöpfung durch Open Data für Berlin” – “Digital Gold: benefits and value creation from open data for Berlin” in English – released by TSB Technologiestiftung Berlin estimates that open data will bring the city of Berlin around 32 million euros per year in economic benefits over the next few years. …

The estimations made for Berlin build on the reasoning of two earlier studies: Pollock, R. (2011), Welfare Gains from Opening Up Public Sector Information in the UK; and Fuchs, S. et al. (2013), Open Government Data – Offene Daten für Österreich. Mit Community-Strategien von heute zum Potential von morgen.
Upon presenting the study, data journalist Michael Hörz shows various examples of how to develop interesting new information and services from publicly available data. You can read more about it (in German) here.”

This algorithm can predict a revolution


Russell Brandom at The Verge: “For students of international conflict, 2013 provided plenty to examine. There was civil war in Syria, ethnic violence in China, and riots to the point of revolution in Ukraine. For those working at Duke University’s Ward Lab, all specialists in predicting conflict, the year looks like a betting sheet, full of predictions that worked and others that didn’t pan out.

When the lab put out their semiannual predictions in July, they gave Paraguay a 97 percent chance of insurgency, largely based on reports of Marxist rebels. The next month, guerrilla campaigns intensified, proving out the prediction. In the case of China’s armed clashes between Uighurs and Hans, the models showed a 33 percent chance of violence, even as the cause of each individual flare-up was concealed by the country’s state-run media. On the other hand, the unrest in Ukraine didn’t start raising alarms until the action had already started, so the country was left off the report entirely.

According to Ward Lab’s staff, the purpose of the project isn’t to make predictions but to test theories. If a certain theory of geopolitics can predict an uprising in Ukraine, then maybe that theory is onto something. And even if these specialists could predict every conflict, it would only be half the battle. “It’s a success only if it doesn’t come at the cost of predicting a lot of incidents that don’t occur,” says Michael D. Ward, the lab’s founder and chief investigator, who also runs the blog Predictive Heuristics. “But it suggests that we might be on the right track.”

Forecasting the future of a country wasn’t always done this way. Traditionally, predicting revolution or war has been a secretive project, for the simple reason that any reliable prediction would be too valuable to share. But as predictions lean more on data, they’ve actually become harder to keep secret, ushering in a new generation of open-source prediction models that butt against the siloed status quo.

The story of automated conflict prediction starts at the Defense Advanced Research Projects Agency, known as the Pentagon’s R&D wing. In the 1990s, DARPA wanted to try out software-based approaches to anticipating which governments might collapse in the near future. The CIA was already on the case, with section chiefs from every region filing regular forecasts, but DARPA wanted to see if a computerized approach could do better. They looked at a simple question: will this country’s government face an acute existential threat in the next six months? When CIA analysts were put to the test, they averaged roughly 60 percent accuracy, so DARPA’s new system set the bar at 80 percent, looking at 29 different countries in Asia with populations over half a million. It was dubbed ICEWS, the Integrated Conflict Early Warning System, and it succeeded almost immediately, clearing 80 percent with algorithms built on simple regression analysis….
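The yes/no question ICEWS posed lends itself naturally to the kind of regression the article mentions. As a rough sketch only (the real ICEWS feature set and model are not described in the article; the indicator names and training values below are invented for illustration), a logistic regression mapping country-level indicators to a probability of “acute threat within six months” can be written in a few lines of plain Python:

```python
import math

def sigmoid(z):
    """Squash a linear score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(threat) = sigmoid(w . x + b) by stochastic gradient descent on log-loss."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear score
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

# Invented training data: [protest_intensity, economic_shock] -> threat within 6 months
X = [[0.1, 0.0], [0.2, 0.1], [0.8, 0.7], [0.9, 0.9], [0.3, 0.2], [0.7, 0.8]]
y = [0, 0, 1, 1, 0, 1]

w, b = train_logistic(X, y)
p = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.85, 0.8])) + b)
print(f"predicted threat probability: {p:.2f}")
```

The point is not the toy numbers but the shape of the approach: a handful of observable indicators, a fitted linear score, and a calibrated probability that can be compared against a bar such as DARPA’s 80 percent accuracy target.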

On the data side, researchers at Georgetown University are cataloging every significant political event of the past century into a single database called GDELT, and leaving the whole thing open for public research. Already, projects have used it to map the Syrian civil war and diplomatic gestures between Japan and South Korea, looking at dynamics that had never been mapped before. And then, of course, there’s Ward Lab, releasing a new sheet of predictions every six months and tweaking its algorithms with every development. It’s a mirror of the same open-vs.-closed debate in software — only now, instead of fighting over source code and security audits, it’s a fight over who can see the future the best.”
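The kind of mapping the GDELT projects do starts with simple aggregation: bucketing coded events by country and month. GDELT’s real schema is far richer (dozens of tab-delimited fields with actor codes, CAMEO event types, and tone scores); the mini-CSV below is an invented, simplified stand-in just to show the shape of such an aggregation:

```python
import csv
import io
from collections import Counter

# Invented, simplified GDELT-like slice; real GDELT records carry many more fields.
sample = """date,country,event_type
20130401,SYR,MILITARY_CLASH
20130403,SYR,PROTEST
20130405,JPN,DIPLOMATIC_VISIT
20130412,SYR,MILITARY_CLASH
20130415,KOR,DIPLOMATIC_VISIT
"""

def events_by_country_month(rows):
    """Aggregate event counts into (country, YYYYMM) buckets."""
    counts = Counter()
    for row in rows:
        month = row["date"][:6]  # YYYYMMDD -> YYYYMM
        counts[(row["country"], month)] += 1
    return counts

counts = events_by_country_month(csv.DictReader(io.StringIO(sample)))
print(counts[("SYR", "201304")])  # → 3
```

Time series built this way are what let researchers chart the intensity of the Syrian civil war or the cadence of diplomatic gestures between Japan and South Korea without hand-coding each event.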

Big Data, Big New Businesses


Nigel Shadbolt and Michael Chui: “Many people have long believed that if government and the private sector agreed to share their data more freely, and allow it to be processed using the right analytics, previously unimaginable solutions to countless social, economic, and commercial problems would emerge. They may have no idea how right they are.

Even the most vocal proponents of open data appear to have underestimated how many profitable ideas and businesses stand to be created. More than 40 governments worldwide have committed to opening up their electronic data – including weather records, crime statistics, transport information, and much more – to businesses, consumers, and the general public. The McKinsey Global Institute estimates that the annual value of open data in education, transportation, consumer products, electricity, oil and gas, health care, and consumer finance could reach $3 trillion.

These benefits come in the form of new and better goods and services, as well as efficiency savings for businesses, consumers, and citizens. The range is vast. For example, drawing on data from various government agencies, the Climate Corporation (recently bought for $1 billion) has taken 30 years of weather data, 60 years of data on crop yields, and 14 terabytes of information on soil types to create customized insurance products.

Similarly, real-time traffic and transit information can be accessed on smartphone apps to inform users when the next bus is coming or how to avoid traffic congestion. And, by analyzing online comments about their products, manufacturers can identify which features consumers are most willing to pay for, and develop their business and investment strategies accordingly.

Opportunities are everywhere. A raft of open-data start-ups are now being incubated at the London-based Open Data Institute (ODI), which focuses on improving our understanding of corporate ownership, health-care delivery, energy, finance, transport, and many other areas of public interest.

Consumers are the main beneficiaries, especially in the household-goods market. It is estimated that consumers making better-informed buying decisions across sectors could capture $1.1 trillion in value annually. Third-party data aggregators are already allowing customers to compare prices across online and brick-and-mortar shops. Many also permit customers to compare quality ratings, safety data (drawn, for example, from official injury reports), information about the provenance of food, and producers’ environmental and labor practices.

Consider the book industry. Bookstores once regarded their inventory as a trade secret. Customers, competitors, and even suppliers seldom knew what stock bookstores held. Nowadays, by contrast, bookstores not only report what stock they carry but also when customers’ orders will arrive. If they did not, they would be excluded from the product-aggregation sites that have come to determine so many buying decisions.

The health-care sector is a prime target for achieving new efficiencies. By sharing the treatment data of a large patient population, for example, care providers can better identify practices that could save $180 billion annually.

The Open Data Institute-backed start-up Mastodon C uses open data on doctors’ prescriptions to differentiate among expensive patent medicines and cheaper “off-patent” varieties; when applied to just one class of drug, that could save around $400 million in one year for the British National Health Service. Meanwhile, open data on acquired infections in British hospitals has led to the publication of hospital-performance tables, a major factor in the 85% drop in reported infections.

There are also opportunities to prevent lifestyle-related diseases and improve treatment by enabling patients to compare their own data with aggregated data on similar patients. This has been shown to motivate patients to improve their diet, exercise more often, and take their medicines regularly. Similarly, letting people compare their energy use with that of their peers could prompt them to save hundreds of billions of dollars in electricity costs each year, to say nothing of reducing carbon emissions.

Such benchmarking is even more valuable for businesses seeking to improve their operational efficiency. The oil and gas industry, for example, could save $450 billion annually by sharing anonymized and aggregated data on the management of upstream and downstream facilities.

Finally, the move toward open data serves a variety of socially desirable ends, ranging from the reuse of publicly funded research to support work on poverty, inclusion, or discrimination, to the disclosure by corporations such as Nike of their supply-chain data and environmental impact.

There are, of course, challenges arising from the proliferation and systematic use of open data. Companies fear for their intellectual property; ordinary citizens worry about how their private information might be used and abused. Last year, Telefónica, the world’s fifth-largest mobile-network provider, tried to allay such fears by launching a digital confidence program to reassure customers that innovations in transparency would be implemented responsibly and without compromising users’ personal information.

The sensitive handling of these issues will be essential if we are to reap the potential $3 trillion in value that usage of open data could deliver each year. Consumers, policymakers, and companies must work together, not just to agree on common standards of analysis, but also to set the ground rules for the protection of privacy and property.”