
Stefaan Verhulst

It’s been five years since Tim O’Reilly published his screed on Government as Platform. In that time, we’ve seen “civic tech” and “open data” gain in popularity and acceptance. The Federal Government has an open data platform, data.gov, and so too do states and municipalities across America. Code for America is the hottest thing around, and the healthcare.gov fiasco made fixing public technology a top concern in government. We’ve successfully laid the groundwork for a new kind of government technology. We’re moving towards a day when, rather than building user-facing technology, the government opens up interfaces to data that allow the private sector to create applications and websites that consume public data and surface it to users.

However, we appear to have stalled out a bit in our progress towards government as platform. It’s incredibly difficult to ingest the data needed for successful commercial products. The kaleidoscope of data formats in open data portals like data.gov might politely be called ‘obscure’, and perhaps more accurately, ‘perversely unusable’. Some of the data hasn’t been updated since first publication and is now far too stale to use. If documentation exists, most of the time it’s incomprehensible….

What we actually need is for Uncle Sam to start dogfooding his own open data.

For those of you who aren’t familiar with the term, dogfooding is engineers’ slang for using your own product. Google employees, for example, use Gmail and Google Drive to organize their own work. The term also applies to engineering teams that consume their own public APIs to access internal data. Dogfooding helps teams deeply understand their own work from the same perspective as external users. It also provides a keen incentive to make products work well.

Dogfooding is the golden rule of platforms. And currently, open government portals are flagrantly violating this golden rule. I’ve asked around, and I can’t find a single example of a government entity consuming the data they publish…”
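To make “consuming the data you publish” concrete, here is a minimal sketch of the kind of check a dogfooding agency might run against its own portal. It queries the standard CKAN search API that catalog.data.gov exposes and lists what formats each matching dataset actually offers; the endpoint and field names follow the stock CKAN action API, and the query term is purely illustrative.

```python
# Minimal sketch: query the CKAN search API behind catalog.data.gov and
# report what formats the matching datasets actually offer. An agency
# "dogfooding" its portal would run checks like this against its own feeds.
# Endpoint and fields follow the stock CKAN action API; treat the details
# as illustrative rather than authoritative.
import requests

CKAN_SEARCH = "https://catalog.data.gov/api/3/action/package_search"

def sample_datasets(query: str, rows: int = 5):
    resp = requests.get(CKAN_SEARCH, params={"q": query, "rows": rows}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    if not payload.get("success"):
        raise RuntimeError("CKAN API reported failure")
    return payload["result"]["results"]

if __name__ == "__main__":
    for dataset in sample_datasets("restaurant inspections"):
        formats = sorted({r.get("format", "?") for r in dataset.get("resources", [])})
        print(f"{dataset['title']}: formats={formats or ['none listed']}")
```

A team that had to live with the output of a script like this every week would notice stale files and undocumented formats long before an outside developer did.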

Hey Uncle Sam, Eat Your Own Dogfood

New Pew report: “The age of gigabit connectivity is dawning and will advance in coming years. The only question is how quickly it might become widespread. A gigabit connection can deliver 1,000 megabits of information per second (Mbps). Globally, cloud service provider Akamai reports that the average global connection speed in the first quarter of 2014 was 3.9 Mbps, with South Korea reporting the highest average connection speed, 23.6 Mbps, and the US at 10.5 Mbps.1
In some respects, gigabit connectivity is not a new development. The US scientific community has been using hyper-fast networks for several years, changing the pace of data sharing and enabling levels of collaboration in scientific disciplines that were unimaginable a generation ago.
Gigabit speeds for the “average Internet user” are just arriving in select areas of the world. In the US, Google ran a competition in 2010 for communities to pitch themselves for the construction of the first Google Fiber network running at 1 gigabit per second—Internet speeds 50-100 times faster than the majority of Americans now enjoy. Kansas City was chosen among 1,100 entrants, and residents are now signing up for the service. The firm has announced plans to build a gigabit network in Austin, Texas, and perhaps 34 other communities. In response, AT&T has said it expects to begin building gigabit networks in up to 100 US cities.2 The cities of Chattanooga, Tennessee; Lafayette, Louisiana; and Bristol, Virginia, have super speedy networks, and pockets of gigabit connectivity are in use in parts of Las Vegas, Omaha, Santa Monica, and several Vermont communities.3 There are also other regional efforts: Falcon Broadband in Colorado Springs, Colorado; Brooklyn Fiber in New York; Monkey Brains in San Francisco; MINET Fiber in Oregon; Wicked Fiber in Lawrence, Kansas; and Sonic.net in California, among others.4 NewWave expects to launch gigabit connections in 2015 in Poplar Bluff, Missouri; Monroe, Rayville, Delhi, and Tallulah, Louisiana; and Suddenlink Communications has launched Operation GigaSpeed.5
In 2014, Google and Verizon were among the innovators announcing that they are testing the capabilities for currently installed fiber networks to carry data even more efficiently—at 10 gigabits per second—to businesses that handle large amounts of Internet traffic.
To explore the possibilities of the next leap in connectivity, we asked thousands of experts and Internet builders to share their thoughts about likely new Internet activities and applications that might emerge in the gigabit age. We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts, many of whom play active roles in Internet evolution as technology builders, researchers, managers, policymakers, marketers, and analysts. We also invited comments from those who have made insightful predictions in response to our previous queries about the future of the Internet. (For more details, please see the section “About this Canvassing of Experts.”)…”
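A quick arithmetic check of the report’s “50-100 times faster” figure against the Akamai averages it quotes (the 20 Mbps comparison point is mine, added only to bracket the range):

```latex
% Gigabit throughput relative to the quoted 2014 US average (10.5 Mbps)
% and to a hypothetical 20 Mbps connection, to bracket the "50-100x" claim.
\[
\frac{1000\ \mathrm{Mbps}}{10.5\ \mathrm{Mbps}} \approx 95\times,
\qquad
\frac{1000\ \mathrm{Mbps}}{20\ \mathrm{Mbps}} = 50\times .
\]
```

So the quoted range roughly corresponds to comparing a gigabit line with connections in the 10-20 Mbps band.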

Killer Apps in the Gigabit Age

Paper by Robbie T. Nakatsu et al. in the Journal of Information Science: “Although a great many different crowdsourcing approaches are available to those seeking to accomplish individual or organizational tasks, little research attention has yet been given to characterizing how those approaches might be based on task characteristics. To that end, we conducted an extensive review of the crowdsourcing landscape, including a look at what types of taxonomies are currently available. Our review found that no taxonomy explored the multidimensional nature of task complexity. This paper develops a taxonomy whose specific intent is the classification of approaches in terms of the types of tasks for which they are best suited. To develop this task-based taxonomy, we followed an iterative approach that considered over 100 well-known examples of crowdsourcing. The taxonomy considers three dimensions of task complexity: (1) task structure – is the task well-defined, or does it require a more open-ended solution; (2) task interdependence – can the task be solved by an individual, or does it require a community of problem solvers; and (3) task commitment – what level of commitment is expected from crowd members? Based on this taxonomy, we identify seven categories of crowdsourcing and discuss prototypical examples of each approach. Furnished with such an understanding, one should be able to determine which crowdsourcing approach is most suitable for a particular task situation.”
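The three dimensions map naturally onto a small data structure. The sketch below is illustrative only: the dimension names come from the abstract, but the enum values and the example task are hypothetical, and the paper’s actual seven category names are not reproduced here.

```python
# Illustrative sketch of the paper's three task-complexity dimensions.
# Dimension names come from the abstract; the values and the example
# below are hypothetical, not the authors' actual category definitions.
from dataclasses import dataclass
from enum import Enum

class Structure(Enum):
    WELL_DEFINED = "well-defined"
    OPEN_ENDED = "open-ended"

class Interdependence(Enum):
    INDIVIDUAL = "solvable by an individual"
    COMMUNITY = "requires a community of solvers"

class Commitment(Enum):
    LOW = "low expected commitment"
    HIGH = "high expected commitment"

@dataclass(frozen=True)
class CrowdsourcingTask:
    name: str
    structure: Structure
    interdependence: Interdependence
    commitment: Commitment

# Example: a simple image-labeling microtask.
labeling = CrowdsourcingTask(
    "image labeling",
    Structure.WELL_DEFINED,
    Interdependence.INDIVIDUAL,
    Commitment.LOW,
)
print(labeling)
```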

A taxonomy of crowdsourcing based on task complexity

Stephanie Thum at Digital Gov: “Customer service. Customer satisfaction. Improving the customer experience.
These buzzwords have become well-trodden territory among government strategists as a new wave of agencies attempts to ignite—or reignite—a focus on customers.
Of course, putting customers first is a worthy goal. But what, exactly, do we mean when we use words like “service” and “satisfaction”? These terms are easily understood in the abstract; however, precisely because of their broad, abstract nature, they can also become roadblocks for pinpointing the specific metrics—and sparking the right strategic conversations—that lead to true customer-oriented improvements.
To find the right foundational customer metrics, begin by looking at your agency’s strategic plan. Examine the publicly stated goals that guide the entire organization. At Export-Import Bank (Ex-Im Bank), for example, one of our strategic goals is to improve the ease of doing business for customers. Because of this, the Customer Effort Score has become a key external measurement for the Bank in determining customers’ perceptions about our performance toward that goal. Our surveys ask customers: “How much effort did you personally have to put forth to complete your transaction with Ex-Im Bank?” Results are then shared, along with other supplementary survey results, within the Bank….”
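Operationally, a score like this is just an aggregate over survey responses. The sketch below is a generic illustration, not Ex-Im Bank’s actual methodology; the 1 (very low effort) to 5 (very high effort) scale is an assumption made for the example.

```python
# Generic sketch of computing a Customer Effort Score from survey answers.
# Assumes a 1 (very low effort) to 5 (very high effort) scale; the agency's
# actual scale and methodology are not described in the excerpt above.
from statistics import mean

def customer_effort_score(responses: list[int], scale_max: int = 5) -> float:
    if any(not 1 <= r <= scale_max for r in responses):
        raise ValueError("responses must fall on the survey scale")
    return round(mean(responses), 2)

print(customer_effort_score([2, 3, 1, 4, 2]))  # -> 2.4 (lower = less effort)
```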

Government CX: Where Do You Find the Right Foundational Metrics?

At PBS MediaShift: “…Open data is the future — of how we govern, of how public services are delivered, of how governments engage with those that they serve. And right now, it is unevenly distributed. I think there is a strong argument to be made that data standards can provide a number of benefits to small and midsized municipal governments and could provide a powerful incentive for these governments to adopt open data.
One way we can use standards to drive the adoption of open data is to partner with companies like Yelp, Zillow, Google, and others that can use open data to enhance their services. But how do we get companies with tens and hundreds of millions of users to take an interest in data from smaller municipal governments?
In a word – standards.

Why do we care about cities?

When we talk about open data, it’s important to keep in mind that there is a lot of good work happening at the federal, state and local levels all over the country — plenty of states and even counties are doing good things on the open data front. But for me, the real test is where we are on open data with respect to cities.
States typically occupy a different space in the service delivery ecosystem than cities, and the kinds of data that they typically make available can be vastly different from city data. State capitals are often far removed from our daily lives and we may hear about them only when a budget is adopted or when the state legislature takes up a controversial issue.
In cities, the people who represent and serve us can be our neighbors — the guy behind you at the car wash, or the woman whose child is in your son’s preschool class. Cities matter.
As cities go, we need to consider carefully the importance of smaller cities — there are a lot more of them than large cities, and a non-trivial number of people live in them….”

Open Data Beyond the Big City

New book by Geoffrey Hosking: “Today there is much talk of a ‘crisis of trust’; a crisis which is almost certainly genuine, but usually misunderstood. Trust: A History offers a new perspective on the ways in which trust and distrust have functioned in past societies, providing an empirical and historical basis against which the present crisis can be examined, and suggesting ways in which the concept of trust can be used as a tool to understand our own and other societies.
Geoffrey Hosking argues that social trust is mediated through symbolic systems, such as religion and money, and the institutions associated with them, such as churches and banks. Historically these institutions have nourished trust, but the resulting trust networks have tended to create quite tough boundaries around themselves, across which distrust is projected against outsiders. Hosking also shows how nation-states have been particularly good at absorbing symbolic systems and generating trust among large numbers of people, while also erecting distinct boundaries around themselves, despite an increasingly global economy. He asserts that in the modern world it has become common to entrust major resources to institutions we know little about, and suggests that we need to learn from historical experience and temper this with more traditional forms of trust, or become an ever more distrustful society, with potentially very destabilising consequences.”

Trust: A History

Alan Hudson at Global Integrity: “…The invocation of “Good Governance” is something that happens a lot, including in ongoing discussions of whether and how governance – or governance-related issues – should be addressed in the post-2015 development framework. Rather than simply squirm uncomfortably every time someone invokes the “Good Governance” mantra, I thought it would be more constructive to explain – again (see here and here) – why I find the phrase problematic, and to outline why I think that “Open Governance” might be a more helpful formulation.
My primary discomfort with the “Good Governance” mantra is that it obscures and wishes away much of the complexity about governance. Few would disagree with the idea that: i) governance arrangements have distributional consequences; ii) governance arrangements play a role in shaping progress towards development outcomes; and iii) effective governance arrangements – forms of governance – will vary by context. But the “Good Governance” mantra, it seems to me, unhelpfully side-steps these key issues, avoiding, or at least postponing, a number of key questions: good from whose perspective, good for what, good for where?
Moreover, the notion of “Good Governance” risks giving the impression that “we” – which tends to mean people outside of the societies that they’re talking about – know what governance is good, and further still that “we” know what needs to happen to make governance good. On both counts, the evidence is that that is seldom the case.
These are not new points. A number of commentators including Merilee Grindle, Matt Andrews, Mushtaq Khan and, most recently, Brian Levy, have pointed out the problems with a “Good Governance” agenda for many years. But, despite their best efforts, in policy discussions, including around post-2015, their warnings are too rarely heeded.
However, rather than drop the language of governance entirely, I do think that there is value in a more flexible, perhaps less normative – or differently normative, more focused on function than form – notion of governance. One that centers on transparency, participation and accountability. One that is about promoting the ability of communities in particular places to address the governance challenges relating to the specific priorities that they face, and which puts people in those places – rather than outsiders – center-stage in improving governance in ways that work for them. Indeed, the targets in the Open Working Group’s Goal 16 include important elements of this.
The “Good Governance” mantra may be hard to shake, but I remain hopeful that open governance – a more flexible framing which is about empowering people and governments with information so that they can work together to tackle problems they prioritize, in their particular places – may yet win the day. The sooner that happens, the better.”

Beyond the “Good Governance” mantra

Report edited by Francesco Mancini for the International Peace Institute: “In an era of unprecedented interconnectivity, this report explores the ways in which new technologies can assist international actors, governments, and civil society organizations to more effectively prevent violence and conflict. It examines the contributions that cell phones, social media, crowdsourcing, crisis mapping, blogging, and big data analytics can make to short-term efforts to forestall crises and to long-term initiatives to address the root causes of violence.
Five case studies assess the use of such tools in a variety of regions (Africa, Asia, Latin America) experiencing different types of violence (criminal violence, election-related violence, armed conflict, short-term crisis) in different political contexts (restrictive and collaborative governments).
Drawing on lessons and insights from across the cases, the authors outline a how-to guide for leveraging new technology in conflict-prevention efforts:
1. Examine all tools.
2. Consider the context.
3. Do no harm.
4. Integrate local input.
5. Help information flow horizontally.
6. Establish consensus regarding data use.
7. Foster partnerships for better results.”

New Technology and the Prevention of Violence and Conflict

New paper by Thanassis Tiropanis, Wendy Hall, James Hendler, and Christian de Larrinaga in Big Data: “The Web Observatory project1 is a global effort that is being led by the Web Science Trust,2 its network of WSTnet laboratories, and the wider Web Science community. The goal of this project is to create a global distributed infrastructure that will foster communities exchanging and using each other’s web-related datasets as well as sharing analytic applications for research and business web applications.3 It will provide the means to observe the digital planet, explore its processes, and understand their impact on different sectors of human activity.
The project is creating a network of separate web observatories, collections of datasets and tools for analyzing data about the Web and its use, each with its own user community. This allows researchers across the world to develop and share data, analytic approaches, publications related to their datasets, and tools (Fig. 1). The network of web observatories aims to bridge the gap that currently exists between big data analytics and the rapidly growing web of “broad data,”4 a gap that makes it difficult for a large number of people to engage with them….

The Web Observatory: A Middle Layer for Broad Data

Jonathan Zittrain at Medium: “…libraries — real ones concerned with guarding and curating knowledge — remain crucial to free and open societies, and not simply because their traditional services within academia, from curation to preservation to research, remain in high demand by scholars. More broadly, they crucially complement the Web in its highest aspirations: to provide unfettered access to knowledge, and to link authors and readers in new ways. Here’s why.

First, information may be easy to copy, but it’s also easy to poison and destroy. The Web is a distributed marvel: click on any link on a page and you’ll instantly be able to see to what it refers, whether it’s offered by the author of the page you’re already reading, or somewhere on the other side of the world, by a different person writing at a different time for a different purpose. That the act of citation and linkage could be made so easy to forge and to follow, and accessible to anyone with a Web browser rather than special patron privileges, is revolutionary.

But the very characteristics that make the distributed Net so powerful overall also make it dicey in any given use. Links rot; sources evaporate. The anarchic Web loses some luster every time that something an author meant to share turns out to be a 404-not-found error.

I co-authored a study investigating link rot in legal scholarship and judicial opinions, and was shocked to find that, circa late 2013, nearly three out of four links found within all Harvard Law Review articles were dead. Half of the links in U.S. Supreme Court opinions were dead. Before the Web, the only common link was an analog one: an author had to name a source with great precision, and a reader could nearly always take that citation to a library and expect to be able to access the source. Labor-intensive, but the barriers to publishing meant that most stuff linked was in books and other systematized formats that libraries were likely to store. Post-Web, much can be published without burdensome intermediaries, but if it vanishes, it vanishes.

That’s why the HLS Library is proud to be a founding member of perma.cc, a consortium complementing the extraordinary Internet Archive, seeking to preserve copies of the sources that scholars and judges link to on the open Web. The preserved materials can be readily accessible for the ages, placed on the record within a formal, disinterested, distributed repository of the world’s great libraries. This is especially important as information might not only vanish, but be adulterated. When Barnes and Noble can offer a book as canonical as War and Peace with key changes quietly (if accidentally) made to its vocabulary, it’s a signal that our knowledge requires actual guardians ready to preserve and fight for its integrity, rather than, in the words of John Perry Barlow, merely vendors treating ideas as “another industrial product, no more noble than pig iron.”…”
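The measurement behind the link-rot percentages cited above is straightforward to sketch: fetch each cited URL and count the ones that no longer resolve. The study’s actual methodology was more involved than a simple status check; the snippet below, with made-up example URLs, only illustrates the basic idea.

```python
# Minimal sketch of a link-rot check: fetch each cited URL and count the
# ones that no longer resolve. Real studies are far more careful (redirects,
# paywalls, pages that load but no longer contain the cited material).
import requests

def dead_fraction(urls: list[str]) -> float:
    dead = 0
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                dead += 1
        except requests.RequestException:
            dead += 1  # DNS failures, timeouts, etc. count as dead
    return dead / len(urls) if urls else 0.0

# Hypothetical citation list, for illustration only.
citations = ["https://example.com/cited-source", "https://example.org/gone"]
print(f"{dead_fraction(citations):.0%} of sampled links appear dead")
```

Archiving services like perma.cc exist precisely so that the fraction reported by a check like this stops growing.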

Why Libraries [Still] Matter
