Business Models That Take Advantage of Open Data Opportunities


Mark Boyd at ProgrammableWeb: “At last week’s OKFestival in Berlin, Kat Borlongan and Chloé Bonnet from Parisian open data startup Five By Five moderated an interactive speed-geek session to examine how startups are building viability using open data and open data APIs. The picture that emerged revealed a variety of composite approaches being used, with all those presenting having just one thing in common: a commitment to fostering ecosystems that will allow other startups to build alongside them.
The OKFestival—hosted by the Open Knowledge Foundation—brought together more than 1,000 participants from around the globe working on various aspects of the open data agenda: the use of corporate data, open science research, government open data and crowdsourced data projects.
In a session held on the first day of the event, Borlongan facilitated an interactive workshop to help would-be entrepreneurs understand how startups are building business models that take advantage of open data opportunities to create sustainable, employment-generating businesses.
Citing research from the McKinsey Global Institute that calculates the value of open data to be worth $3 trillion globally, Borlongan said: “So the understanding of the open data process is usually: We throw open data over the wall, then we hold a hackathon, and then people will start making products off it, and then we make the $3 trillion.”
Borlongan argued that it is actually a “blurry identity to be an open data startup” and encouraged participants to unpack, with each of the startups presenting, exactly how income can be generated and a viable business built in this space.
Jeni Tennison, from the U.K.’s Open Data Institute (which supports 15 businesses in its Startup Programme) categorizes two types of business models:

  1. Businesses that publish (but do not sell) open data.
  2. Businesses built on top of using open data.

Businesses That Publish but Do Not Sell Open Data

At the Open Data Institute, Tennison is investigating the possibility of an open address database that would provide street address data for every property in the U.K. She describes three types of business models that could be created by projects that generated and published such data:
Freemium: In this model, the bulk data of open addresses could be made available freely, “but if you want an API service, then you would pay for it.” Tennison also pointed to opportunities to degrade the freemium-level data—for example, making it available in bulk but not at a particularly granular level (unless you pay for it), or provisioning reuse on a share-alike basis while charging for corporate use cases (similar to how OpenCorporates sells access to its data).
Cross-subsidy: In this approach, the data would be available, and the opportunities to generate income would come from providing extra services, like consultancy or white labeling data services alongside publishing the open data.
Network: In this business model, value is created by generating a network effect around the core business interest, which may not be the open data itself. As an example, Tennison suggested that if a post office or delivery company were to create the open address database, it might be interested in encouraging private citizens to collaboratively maintain or crowdsource the quality of the data. The value generated by this open data would then come from reductions in the cost of delivery services as the data became more accurate.

Businesses Built on Top of Open Data

Six startups working in unique ways to make use of available open data also presented their business models to OKFestival attendees: Development Seed, Mapbox, OpenDataSoft, Enigma.io, Open Bank Project, and Snips.

Startup: Development Seed
What it does: Builds solutions for development, public health and citizen democracy challenges by creating open source tools and utilizing open data.
Open data API focus: Regularly uses open data APIs in its projects. For example, it worked with the World Bank to create a data visualization website built on top of the World Bank API.
Type of business model: Consultancy, but it has also created new businesses out of the products developed as part of its work, most notably Mapbox (see below).

Startup: Enigma.io
What it does: Open data platform with advanced discovery and search functions.
Open data API focus: Provides the Enigma API to allow programmatic access to all data sets and some analytics from the Enigma platform.
Type of business model: SaaS including a freemium plan with no degradation of data and with access to API calls; some venture funding; some contracting services to particular enterprises; creating new products in Enigma Labs for potential later sale.

Startup: Mapbox
What it does: Enables users to design and publish maps based on crowdsourced OpenStreetMap data.
Open data API focus: Uses OpenStreetMap APIs to draw data into its map-creation interface; provides the Mapbox API to allow programmatic creation of maps using Mapbox web services.
Type of business model: SaaS including freemium plan; some tailored contracts for big map users such as Foursquare and Evernote.

Startup: Open Bank Project
What it does: Creates an open source API for use by banks.
Open data API focus: Its core product is a standard, open source API that banks can use when creating applications and web services for their clients.
Type of business model: Contract license with tiered SLAs depending on the number of applications built using the API; IT consultancy projects.

Startup: OpenDataSoft
What it does: Provides an open data publishing platform so that cities, governments, utilities and companies can publish their own data portal for internal and public use.
Open data API focus: Routes data sources into the portal from a publisher’s APIs; provides automatic API-creation tools so that any data set uploaded to the portal is then available as an API.
Type of business model: SaaS model with freemium plan, pricing by number of data sets published and number of API calls made against the data, with free access for academic and civic initiatives.

Startup: Snips
What it does: Predictive modeling for smart cities.
Open data API focus: Channels some open and client proprietary data into its modeling algorithm calculations via API; provides a predictive modeling API for clients’ use to programmatically generate solutions based on their data.
Type of business model: Creating one B2C app for sale as a revenue-generating product; individual contracts with cities and companies to solve particular pain points, such as using predictive modeling to help a post office company better manage staff rosters (matched to sales needs) and a consultancy project to create a visualization mapping tool that can predict the risk of car accidents for a city….”

Indonesian techies crowdsource election results


Ben Bland in the Financial Times: “Three Indonesian tech experts say they have used crowdsourcing to calculate an accurate result for the country’s contested presidential election in six days, while 4m officials have been beavering away for nearly two weeks counting the votes by hand.

The Indonesian techies, who work for multinational companies, were spurred into action after both presidential candidates claimed victory and accused each other of trying to rig the convoluted counting process, raising fears that the country’s young democracy was under threat.

“We did this to prevent the nation being ripped apart because of two claims to victory that nobody can verify,” said Ainun Najib, who is based in Singapore. “This solution was only possible because all the polling station data were openly available for public scrutiny and verification.”

Mr Najib and two friends took advantage of the decision by the national election commission (KPU) to upload the individual results from Indonesia’s 480,000 polling stations to its website for the first time, in an attempt to counter widespread fears about electoral fraud.

The three Indonesians scraped the voting data from the KPU website on to a database and then recruited 700 friends and acquaintances through Facebook to type in the results and check them. They uploaded the data to a website called kawalpemilu.org, which means “guard the election” in Indonesian.

Throughout the process, Mr Najib said he had to fend off hacking attacks, forcing him to shift data storage to a cloud-based service. The whole exercise cost $10 for a domain name and $0.10 for the data storage….”

The Quiet Movement to Make Government Fail Less Often


In The New York Times: “If you wanted to bestow the grandiose title of “most successful organization in modern history,” you would struggle to find a more obviously worthy nominee than the federal government of the United States.

In its earliest stirrings, it established a lasting and influential democracy. Since then, it has helped defeat totalitarianism (more than once), established the world’s currency of choice, sent men to the moon, built the Internet, nurtured the world’s largest economy, financed medical research that saved millions of lives and welcomed eager immigrants from around the world.

Of course, most Americans don’t think of their government as particularly successful. Only 19 percent say they trust the government to do the right thing most of the time, according to Gallup. Some of this mistrust reflects a healthy skepticism that Americans have always had toward centralized authority. And the disappointing economic growth of recent decades has made Americans less enamored of nearly every national institution.

But much of the mistrust really does reflect the federal government’s frequent failures – and progressives in particular will need to grapple with these failures if they want to persuade Americans to support an active government.

When the federal government is good, it’s very, very good. When it’s bad (or at least deeply inefficient), it’s the norm.

The evidence is abundant. Of the 11 large programs for low- and moderate-income people that have been subject to rigorous, randomized evaluation, only one or two show strong evidence of improving most beneficiaries’ lives. “Less than 1 percent of government spending is backed by even the most basic evidence of cost-effectiveness,” writes Peter Schuck, a Yale law professor, in his new book, “Why Government Fails So Often,” a sweeping history of policy disappointments.

As Mr. Schuck puts it, “the government has largely ignored the ‘moneyball’ revolution in which private-sector decisions are increasingly based on hard data.”

And yet there is some good news in this area, too. The explosion of available data has made evaluating success – in the government and the private sector – easier and less expensive than it used to be. At the same time, a generation of data-savvy policy makers and researchers has entered government and begun pushing it to do better. They have built on earlier efforts by the Bush and Clinton administrations.

The result is a flowering of experiments to figure out what works and what doesn’t.

New York City, Salt Lake City, New York State and Massachusetts have all begun efforts to link program funding to success: The more effective programs are, the more money they and their backers receive. The programs span child care, job training and juvenile recidivism.

The approach is known as “pay for success,” and it’s likely to spread to Cleveland, Denver and California soon. David Cameron’s conservative government in Britain is also using it. The Obama administration likes the idea, and two House members – Todd Young, an Indiana Republican, and John Delaney, a Maryland Democrat – have introduced a modest bill to pay for a version known as “social impact bonds.”

The White House is also pushing for an expansion of randomized controlled trials to evaluate government programs. Such trials, Mr. Schuck notes, are “the gold standard” for any kind of evaluation. Using science as a model, researchers randomly select some people to enroll in a government program and others not to enroll. The researchers then study the outcomes of the two groups….”
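The random-assignment design Schuck calls the gold standard can be sketched in a few lines; the participants and outcome scores below are hypothetical, purely to show the mechanics.

```python
import random

def randomize(participants, seed=0):
    """Randomly split participants into treatment and control arms."""
    rng = random.Random(seed)        # fixed seed so the split is reproducible
    pool = list(participants)
    rng.shuffle(pool)
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]    # (enrolled in the program, not enrolled)

def average_outcome(group, outcomes):
    return sum(outcomes[p] for p in group) / len(group)

# Hypothetical: six applicants, with made-up post-program outcome scores.
people = ["p1", "p2", "p3", "p4", "p5", "p6"]
outcomes = {"p1": 52, "p2": 48, "p3": 61, "p4": 45, "p5": 57, "p6": 50}
treatment, control = randomize(people)
# The estimated program effect is the difference in mean outcomes between arms.
effect = average_outcome(treatment, outcomes) - average_outcome(control, outcomes)
```

Because enrollment is decided by chance rather than by who applies first or lobbies hardest, a difference in average outcomes between the two arms can be attributed to the program itself (given a large enough sample).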

Selected Readings on Crowdsourcing Expertise


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.

Crowdsourcing enables leaders and citizens to work together to solve public problems in new and innovative ways. New tools and platforms enable citizens with differing levels of knowledge, expertise, experience and abilities to collaborate and solve problems together. Identifying experts, or individuals with specialized skills, knowledge or abilities with regard to a specific topic, and incentivizing their participation in crowdsourcing information, knowledge or experience to achieve a shared goal can enhance the efficiency and effectiveness of problem solving.

Annotated Selected Reading List (in alphabetical order)

Börner, Katy, Michael Conlon, Jon Corson-Rikert, and Ying Ding. “VIVO: A Semantic Approach to Scholarly Networking and Discovery.” Synthesis Lectures on the Semantic Web: Theory and Technology 2, no. 1 (October 17, 2012): 1–178. http://bit.ly/17huggT.

  • This e-book “provides an introduction to VIVO…a tool for representing information about research and researchers — their scholarly works, research interests, and organizational relationships.”
  • VIVO is a response to the fact that, “Information for scholars — and about scholarly activity — has not kept pace with the increasing demands and expectations. Information remains siloed in legacy systems and behind various access controls that must be licensed or otherwise negotiated before access. Information representation is in its infancy. The raw material of scholarship — the data and information regarding previous work — is not available in common formats with common semantics.”
  • Providing access to structured information on the work and experience of a diversity of scholars enables improved expert finding — “identifying and engaging experts whose scholarly works is of value to one’s own.” To find experts, one needs rich data regarding one’s own work and the work of potential related experts. The authors argue that expert finding is of increasing importance since, “[m]ulti-disciplinary and inter-disciplinary investigation is increasingly required to address complex problems.”

Bozzon, Alessandro, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci. “Choosing the Right Crowd: Expert Finding in Social Networks.” In Proceedings of the 16th International Conference on Extending Database Technology, 637–648. EDBT  ’13. New York, NY, USA: ACM, 2013. http://bit.ly/18QbtY5.

  • This paper explores the challenge of selecting experts within the population of social networks by considering the following problem: “given an expertise need (expressed for instance as a natural language query) and a set of social network members, who are the most knowledgeable people for addressing that need?”
  • The authors come to the following conclusions:
    • “profile information is generally less effective than information about resources that they directly create, own or annotate;
    • resources which are produced by others (resources appearing on the person’s Facebook wall or produced by people that she follows on Twitter) help increasing the assessment precision;
    • Twitter appears the most effective social network for expertise matching, as it very frequently outperforms all other social networks (either combined or alone);
    • Twitter appears as well very effective for matching expertise in domains such as computer engineering, science, sport, and technology & games, but Facebook is also very effective in fields such as locations, music, sport, and movies & tv;
    • surprisingly, LinkedIn appears less effective than other social networks in all domains (including computer science) and overall.”

Brabham, Daren C. “The Myth of Amateur Crowds.” Information, Communication & Society 15, no. 3 (2012): 394–410. http://bit.ly/1hdnGJV.

  • Unlike most of the related literature, this paper focuses on bringing attention to the expertise already being tapped by crowdsourcing efforts rather than determining ways to identify more dormant expertise to improve the results of crowdsourcing.
  • Brabham comes to two central conclusions: “(1) crowdsourcing is discussed in the popular press as a process driven by amateurs and hobbyists, yet empirical research on crowdsourcing indicates that crowds are largely self-selected professionals and experts who opt-in to crowdsourcing arrangements; and (2) the myth of the amateur in crowdsourcing ventures works to label crowds as mere hobbyists who see crowdsourcing ventures as opportunities for creative expression, as entertainment, or as opportunities to pass the time when bored. This amateur/hobbyist label then undermines the fact that large amounts of real work and expert knowledge are exerted by crowds for relatively little reward and to serve the profit motives of companies.”

Dutton, William H. Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government. One of a Series of Occasional Papers in Science and Technology Policy, Science and Technology Policy Institute, Institute for Defense Analyses, February 23, 2011. http://bit.ly/1c1bpEB.

  • In this paper, a case is made for more structured and well-managed crowdsourcing efforts within government. Specifically, the paper “explains how collaborative networking can be used to harness the distributed expertise of citizens, as distinguished from citizen consultation, which seeks to engage citizens — each on an equal footing.” Instead of looking for answers from an undefined crowd, Dutton proposes “networking the public as advisors” by seeking to “involve experts on particular public issues and problems distributed anywhere in the world.”
  • Dutton argues that expert-based crowdsourcing can work successfully for government for a number of reasons:
    • Direct communication with a diversity of independent experts
    • The convening power of government
    • Compatibility with open government and open innovation
    • Synergy with citizen consultation
    • Building on experience with paid consultants
    • Speed and urgency
    • Centrality of documents to policy and practice.
  • He also proposes a nine-step process for government to foster bottom-up collaboration networks:
    • Do not reinvent the technology
    • Focus on activities, not the tools
    • Start small, but capable of scaling up
    • Modularize
    • Be open and flexible in finding and going to communities of experts
    • Do not concentrate on one approach to all problems
    • Cultivate the bottom-up development of multiple projects
    • Experience networking and collaborating — be a networked individual
    • Capture, reward, and publicize success.

Goel, Gagan, Afshin Nikzad and Adish Singla. “Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.” Under review by the International World Wide Web Conference (WWW). 2014. http://bit.ly/1qHBkdf

  • Combining the notions of crowdsourcing expertise and crowdsourcing tasks, this paper focuses on the challenge within platforms like Mechanical Turk related to intelligently matching tasks to workers.
  • The authors’ call for more strategic assignment of tasks in crowdsourcing markets is based on the understanding that “each worker has certain expertise and interests which define the set of tasks she can and is willing to do.”
  • Focusing on developing meaningful incentives based on varying levels of expertise, the authors sought to create a mechanism that, “i) is incentive compatible in the sense that it is truthful for agents to report their true cost, ii) picks a set of workers and assigns them to the tasks they are eligible for in order to maximize the utility of the requester, iii) makes sure total payments made to the workers doesn’t exceed the budget of the requester.”
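As a rough illustration of the budget-feasibility side of such a mechanism (properties ii and iii only; a truthful payment rule, property i, is considerably more involved and is not attempted here), a greedy sketch with invented worker values and reported costs:

```python
def select_workers(workers, budget):
    """Greedy sketch: pick workers by value-per-cost until the budget runs out.

    `workers` maps a worker id to (value_to_requester, reported_cost).
    This illustrates budget-feasible selection only; making the selection
    rule truthful requires a carefully designed payment scheme on top.
    """
    ranked = sorted(workers.items(),
                    key=lambda kv: kv[1][0] / kv[1][1],  # value per unit cost
                    reverse=True)
    chosen, spent = [], 0
    for worker, (value, cost) in ranked:
        if spent + cost <= budget:                       # stay within budget
            chosen.append(worker)
            spent += cost
    return chosen, spent

# Invented example: worker -> (value, reported cost), with a budget of 10.
workers = {"w1": (9, 3), "w2": (4, 4), "w3": (6, 2), "w4": (5, 5)}
chosen, spent = select_workers(workers, budget=10)
# chosen == ["w1", "w3", "w2"]; spent == 9
```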

Gubanov, D., N. Korgin, D. Novikov and A. Kalkov. E-Expertise: Modern Collective Intelligence. Springer, Studies in Computational Intelligence 558, 2014. http://bit.ly/U1sxX7

  • In this book, the authors focus on “organization and mechanisms of expert decision-making support using modern information and communication technologies, as well as information analysis and collective intelligence technologies (electronic expertise or simply e-expertise).”
  • The book, which “addresses a wide range of readers interested in management, decision-making and expert activity in political, economic, social and industrial spheres,” is broken into five chapters:
    • Chapter 1 (E-Expertise) discusses the role of e-expertise in decision-making processes. The procedures of e-expertise are classified, their benefits and shortcomings are identified, and the efficiency conditions are considered.
    • Chapter 2 (Expert Technologies and Principles) provides a comprehensive overview of modern expert technologies. A special emphasis is placed on the specifics of e-expertise. Moreover, the authors study the feasibility and reasonability of employing well-known methods and approaches in e-expertise.
    • Chapter 3 (E-Expertise: Organization and Technologies) describes some examples of up-to-date technologies to perform e-expertise.
    • Chapter 4 (Trust Networks and Competence Networks) deals with the problems of expert finding and grouping by information and communication technologies.
    • Chapter 5 (Active Expertise) treats the problem of expertise stability against any strategic manipulation by experts or coordinators pursuing individual goals.

Holst, Cathrine. “Expertise and Democracy.” ARENA Report No 1/14, Center for European Studies, University of Oslo. http://bit.ly/1nm3rh4

  • This report contains a set of 16 papers focused on the concept of “epistocracy,” meaning the “rule of knowers.” The papers inquire into the role of knowledge and expertise in modern democracies and especially in the European Union (EU). Major themes are: expert-rule and democratic legitimacy; the role of knowledge and expertise in EU governance; and the European Commission’s use of expertise.
    • Expert-rule and democratic legitimacy
      • Papers within this theme concentrate on issues such as the “implications of modern democracies’ knowledge and expertise dependence for political and democratic theory.” Topics include the accountability of experts, the legitimacy of expert arrangements within democracies, the role of evidence in policy-making, how expertise can be problematic in democratic contexts, and “ethical expertise” and its place in epistemic democracies.
    • The role of knowledge and expertise in EU governance
      • Papers within this theme concentrate on “general trends and developments in the EU with regard to the role of expertise and experts in political decision-making, the implications for the EU’s democratic legitimacy, and analytical strategies for studying expertise and democratic legitimacy in an EU context.”
    • The European Commission’s use of expertise
      • Papers within this theme concentrate on how the European Commission uses expertise and in particular the European Commission’s “expert group system.” Topics include the European Citizen’s Initiative, analytic-deliberative processes in EU food safety, the operation of EU environmental agencies, and the autonomy of various EU agencies.

King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. http://bit.ly/HjVOpi.

  • In this paper, King and Lakhani examine different methods for opening innovation, where, “[i]nstead of doing everything in-house, companies can tap into the ideas cloud of external expertise to develop new products and services.”
  • The three types of open innovation discussed are: opening the idea-creation process (competitions where prizes are offered and designers bid with possible solutions); opening the idea-selection process (“approval contests” in which outsiders vote to determine which entries should be pursued); and opening both idea generation and selection, an option used especially by organizations focused on quickly changing needs.

Long, Chengjiang, Gang Hua and Ashish Kapoor. Active Visual Recognition with Expertise Estimation in Crowdsourcing. 2013 IEEE International Conference on Computer Vision. December 2013. http://bit.ly/1lRWFur.

  • This paper is focused on improving the crowdsourced labeling of visual datasets from platforms like Mechanical Turk. The authors note that, “Although it is cheap to obtain large quantity of labels through crowdsourcing, it has been well known that the collected labels could be very noisy. So it is desirable to model the expertise level of the labelers to ensure the quality of the labels. The higher the expertise level a labeler is at, the lower the label noises he/she will produce.”
  • Based on the need for identifying expert labelers upfront, the authors developed an “active classifier learning system which determines which users to label which unlabeled examples” from collected visual datasets.
  • The researchers’ experiments in identifying expert visual dataset labelers led to findings demonstrating that the “active selection” of expert labelers is beneficial in cutting through the noise of crowdsourcing platforms.

Noveck, Beth Simone. “’Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform.” Harvard Journal of Law & Technology 20, no. 1 (Fall 2006): 123–162. http://bit.ly/HegzTT.

  • This law review article introduces the idea of crowdsourcing expertise to mitigate the challenge of patent processing. Noveck argues that, “access to information is the crux of the patent quality problem. Patent examiners currently make decisions about the grant of a patent that will shape an industry for a twenty-year period on the basis of a limited subset of available information. Examiners may neither consult the public, talk to experts, nor, in many cases, even use the Internet.”
  • Peer-to-Patent, which launched three years after this article, is based on the idea that, “The new generation of social software might not only make it easier to find friends but also to find expertise that can be applied to legal and policy decision-making. This way, we can improve upon the Constitutional promise to promote the progress of science and the useful arts in our democracy by ensuring that only worthy ideas receive that ‘odious monopoly’ of which Thomas Jefferson complained.”

Ober, Josiah. “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment.” American Political Science Review 107, no. 01 (2013): 104–122. http://bit.ly/1cgf857.

  • In this paper, Ober argues that, “A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world.”
  • Ober describes an approach to decision-making that aggregates expertise across multiple domains. This “Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest.”

Sims, Max H., Jeffrey Bigham, Henry Kautz and Marc W. Halterman. “Crowdsourcing medical expertise in near real time.” Journal of Hospital Medicine 9, no. 7, July 2014. http://bit.ly/1kAKvq7.

  • In this article, the authors discuss the development of a mobile application called DocCHIRP, which was developed due to the fact that, “although the Internet creates unprecedented access to information, gaps in the medical literature and inefficient searches often leave healthcare providers’ questions unanswered.”
  • The DocCHIRP pilot project used a “system of point-to-multipoint push notifications designed to help providers problem solve by crowdsourcing from their peers.”
  • Healthcare providers (HCPs) sought to gain intelligence from the crowd, which included 85 registered users, on questions related to medication, complex medical decision making, standard of care, administrative issues, testing and referrals.
  • The authors believe that, “if future iterations of the mobile crowdsourcing applications can address…adoption barriers and support the organic growth of the crowd of HCPs,” then “the approach could have a positive and transformative effect on how providers acquire relevant knowledge and care for patients.”

Spina, Alessandro. “Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies.” in Foundations of EU Food Law and Policy, eds. A. Alemmano and S. Gabbi. Ashgate, 2014. http://bit.ly/1k2EwdD.

  • In this paper, Spina “presents some reflections on how the collaborative and crowdsourcing practices of Open Government could be integrated in the activities of EFSA [European Food Safety Authority] and other EU agencies,” with a particular focus on “highlighting the benefits of the Open Government paradigm for expert regulatory bodies in the EU.”
  • Spina argues that the “crowdsourcing of expertise and the reconfiguration of the information flows between European agencies and the public could represent a concrete possibility of modernising the role of agencies with a new model that has a low financial burden and an almost immediate effect on the legal governance of agencies.”
  • He concludes that, “It is becoming evident that in order to guarantee that the best scientific expertise is provided to EU institutions and citizens, EFSA should strive to use the best organisational models to source science and expertise.”

Are the Authoritarians Winning?


Review of several books by Michael Ignatieff in the New York Review of Books: “In the 1930s travelers returned from Mussolini’s Italy, Stalin’s Russia, and Hitler’s Germany praising the hearty sense of common purpose they saw there, compared to which their own democracies seemed weak, inefficient, and pusillanimous.
Democracies today are in the middle of a similar period of envy and despondency. Authoritarian competitors are aglow with arrogant confidence. In the 1930s, Westerners went to Russia to admire Stalin’s Moscow subway stations; today they go to China to take the bullet train from Beijing to Shanghai, and just as in the 1930s, they return wondering why autocracies can build high-speed railroad lines seemingly overnight, while democracies can take forty years to decide they cannot even begin. The Francis Fukuyama moment—when in 1989 Westerners were told that liberal democracy was the final form toward which all political striving was directed—now looks like a quaint artifact of a vanished unipolar moment.
For the first time since the end of the cold war, the advance of democratic constitutionalism has stopped. The army has staged a coup in Thailand and it’s unclear whether the generals will allow democracy to take root in Burma. For every African state, like Ghana, where democratic institutions seem secure, there is a Mali, a Côte d’Ivoire, and a Zimbabwe, where democracy is in trouble.
In Latin America, democracy has sunk solid roots in Chile, but in Mexico and Colombia it is threatened by violence, while in Argentina it struggles to shake off the dead weight of Peronism. In Brazil, the millions who took to the streets last June to protest corruption seem to have had no impact on the cronyism in Brasília. In the Middle East, democracy has a foothold in Tunisia, but in Syria there is chaos; in Egypt, plebiscitary authoritarianism rules; and in the monarchies, absolutism is ascendant.
In Europe, the policy elites keep insisting that the remedy for their continent’s woes is “more Europe” while a third of their electorate is saying they want less of it. From Hungary to Holland, including in France and the UK, the anti-European right gains ground by opposing the European Union generally and immigration in particular. In Russia the democratic moment of the 1990s now seems as distant as the brief constitutional interlude between 1905 and 1914 under the tsar….
It is not at all apparent that “governance innovation,” a bauble Micklethwait and Wooldridge chase across three continents, watching innovators at work making government more efficient in Chicago, Sacramento, Singapore, and Stockholm, will do the trick. The problem of the liberal state is not that it lacks modern management technique, good software, or different schemes to improve the “interface” between the bureaucrats and the public. By focusing on government innovation, Micklethwait and Wooldridge assume that the problem is improving the efficiency of government. But what is required is both more radical and more traditional: a return to constitutional democracy itself, to courts and regulatory bodies that are freed from the power of money and the influence of the powerful; to legislatures that cease to be circuses and return to holding the executive branch to public account while cooperating on measures for which there is a broad consensus; to elected chief executives who understand that they are not entertainers but leaders….”
Books reviewed:

  • Reforming Taxation to Promote Growth and Equity, a white paper by Joseph Stiglitz, Roosevelt Institute, 28 pp., May 28, 2014; available at rooseveltinstitute.org

Is Crowdsourcing the Future for Legislation?


Brian Heaton in GovTech: “…While drafting legislation is traditionally the job of elected officials, an increasing number of lawmakers are using digital platforms such as Wikispaces and GitHub to give constituents a bigger hand in molding the laws they’ll be governed by. The practice has been used this year in both California and New York City, and shows no signs of slowing down anytime soon, experts say.
Trond Undheim, crowdsourcing expert and founder of Yegii Inc., a startup company that provides and ranks advanced knowledge assets in the areas of health care, technology, energy and finance, said crowdsourcing was “certainly viable” as a tool to help legislators understand what constituents are most passionate about.
“I’m a big believer in asking a wide variety of people the same question and crowdsourcing has become known as the long-tail of answers,” Undheim said. “People you wouldn’t necessarily think of have something useful to say.”
California Assemblyman Mike Gatto, D-Los Angeles, agreed. He’s spearheaded an effort this year to let residents craft legislation regarding probate law — a measure designed to allow a court to assign a guardian to a deceased person’s pet. Gatto used the online Wikispaces platform — which allows for Wikipedia-style editing and content contribution — to let anyone with an Internet connection collaborate on the legislation over a period of several months.
The topic of the bill may not have been headline news, but Gatto was encouraged by the media attention his experiment received. As a result, he’s committed to running another crowdsourced bill next year — just on a bigger, more mainstream public issue.
New York City Council Member Ben Kallos has a plethora of technology-related legislation being considered in the Big Apple. Many of the bills are open for public comment and editing on GitHub. In an interview with Government Technology last month, Kallos said he believes using crowdsourcing to comment on and edit legislation is empowering and creates a different sense of democracy where people can put forward their ideas.
County governments are also joining the crowdsourcing trend. The Catawba Regional Council of Governments in South Carolina and the Centralina Council of Governments in North Carolina are gathering opinions on how county leaders should plan for future growth in the region.
At a public forum earlier this year, attendees were given iPads to go online and review four growth options and record their views on which they preferred. The priorities outlined by citizens will be taken back to decision-makers in each of the counties to see how well existing plans match up with what the public wants.
Gatto said he’s encouraged by how quickly the crowdsourcing of policy has spread throughout the U.S. He said there’s a disconnect between governments and their constituencies who believe elected officials don’t listen. But that could change as crowdsourcing continues to make its impact on lawmakers.
“When you put out a call like I did and others have done and say, ‘I’m going to let the public draft a law and whatever you draft, I’m committed to introducing it …’ I think that’s a powerful message,” Gatto said. “I think the public appreciates it because it makes them understand that the government still belongs to them.”

Protecting the Process

Despite the benefits crowdsourcing brings to the legislative process, there remain some question marks about whether it truly provides insight into the public’s feelings on an issue. For example, because many political issues are driven by the influence of special interest groups, what’s preventing those groups from manipulating the bill-drafting process?
Not much, according to Undheim. He cautioned policymakers to be aware of the motivations of the people taking part in crowdsourcing efforts to write and edit laws. Gatto shared Undheim’s concerns, but noted that the platform he used for developing his probate law – Wikispaces – has safeguards in place so that a member of his staff can revert the language of a crowdsourced bill back to a previous version if it’s determined that someone was trying to unduly influence the drafting process….”

What is democracy? A reconceptualization of the quality of democracy


Paper by Gerardo L. Munck: “Works on the quality of democracy propose standards for evaluating politics beyond those encompassed by a minimal definition of democracy. Yet, what is the quality of democracy? This article first reconstructs and assesses current conceptualizations of the quality of democracy. Thereafter, it reconceptualizes the quality of democracy by equating it with democracy pure and simple, positing that democracy is a synthesis of political freedom and political equality, and spelling out the implications of this substantive assumption. The proposal is to broaden the concept of democracy to address two additional spheres: government decision-making – political institutions are democratic inasmuch as a majority of citizens can change the status quo – and the social environment of politics – the social context cannot turn the principles of political freedom and equality into mere formalities. Alternative specifications of democratic standards are considered and reasons for discarding them are provided.”

Making We the People More User-Friendly Than Ever


The White House: “With more than 14 million users and 21 million signatures, We the People, the White House’s online petition platform, has proved more popular than we ever thought possible. In the nearly three years since launch, we’ve heard from you on a huge range of topics, and issued more than 225 responses.

But we’re not stopping there. We’ve been working to make it easier to sign a petition and today we’re proud to announce the next iteration of We the People.

Since launch, we’ve heard from users who wanted a simpler, more streamlined way to sign petitions without creating an account and logging in every time. This latest update makes that a reality.

We’re calling it “simplified signing” and it takes the account creation step out of signing a petition. As of today, just enter your basic information, confirm your signature via email and you’re done. That’s it. No account to create, no logging in, no passwords to remember.

[Infographic: We the People user statistics]

That’s great news for new users, but we’re betting it’ll be welcomed by our returning signers, too. If you signed a petition six months ago and you don’t remember your password, you don’t have to worry about resetting it. Just enter your email address, confirm your signature, and you’re done.

Go check it out right now on petitions.whitehouse.gov.

We Need a Citizen Maker Movement


Lorelei Kelly at the Huffington Post: “It was hard to miss the giant mechanical giraffe grazing on the White House lawn last week. For the first time ever, the President organized a Maker Faire–inviting entrepreneurs and inventors from across the USA to celebrate American ingenuity in the service of economic progress.
The maker movement is a California original. Think R2D2 serving margaritas to a jester with an LED news scroll. The #nationofmakers Twitter feed has dozens of examples of collaborative production, of making, sharing and learning.
But since this was the White House, I still had to ask myself, what would the maker movement be if the economy was not the starting point? What if it was about civics? What if makers decided to create a modern, hands-on democracy?
What is democracy anyway but a never-ending remix of new prototypes? Last week’s White House Maker Faire heralded a new economic bonanza. This revolution’s poster child is 3-D printing: decentralized fabrication that is customized to meet local needs. On the government front, new design rules for democracy are already happening in communities, where civics and technology have generated a front line of maker cities.
But the distance between California’s tech capacity and DC does seem 3,000 miles wide. The NSA’s over-collection/surveillance problem and Healthcare.gov’s doomed rollout are part of the same system-wide capacity deficit. How do we close the gap between California’s revolution and our institutions?

  • In California, disruption is a business plan. In DC, it’s a national security threat.
  • In California, hackers are artists. In DC, they are often viewed as criminals.
  • In California, “cyber” is a dystopian science fiction word. In DC, cyber security is in a dozen oversight plans for Congress.
  • In California, individuals are encouraged to “fail forward.” In DC, risk-aversion is bipartisan.

Scaling big problems with local solutions is a maker specialty. Government policymaking needs this kind of help.
Here’s the issue our nation is facing: the inability of the non-military side of our public institutions to process complex problems. Today, this competence and especially the capacity to solve technical challenges often exist only in the private sector. If something is urgent and can’t be monetized, it becomes a national security problem, which increasingly means that critical decision-making that should be in the civilian remit instead migrates to the military. Look at our foreign policy. Good government is a counterterrorism strategy in Afghanistan. Decades of civilian inaction on climate change means that now Miami is referred to as a battle space in policy conversations.
This rhetoric reflects an understandable but unacceptable disconnect for any democracy.
To make matters more confusing, much of the technology in civics (like list-building petitions) is suited for elections, not for governing. It is often antagonistic. The result? Policymaking looks like campaigning. We need some civic tinkering to generate governing technology that comes with relationships. Specifically, this means technology that includes many voices, but has identifiable channels for expertise that can sort complexity and that is not compromised by financial self-interest.
Today, sorting and filtering information is a huge challenge for participation systems around the world. Information now ranks up there with money and people as a lever of power. On the people front, the loud and often destructive individuals are showing up effectively. On the money front, our public institutions are at risk of becoming purely pay to play (wonks call this “transactional”).
Makers, ask yourselves, how can we turn big data into a political constituency for using real evidence–one that can compete with all the negative noise and money in the system? For starters, technologists out West must stop treating government like it’s a bad signal that can be automated out of existence. We are at a moment where our society requires an engineering mindset to develop modern, tech-savvy rules for democracy. We need civic makers….”

Let's amplify California's collective intelligence


Gavin Newsom and Ken Goldberg at the SFGate: “Although the results of last week’s primary election are still being certified, we already know that voter turnout was among the lowest in California’s history. Pundits will rant about the “cynical electorate” and wag a finger at disengaged voters shirking their democratic duties, but we see the low turnout as a symptom of broader forces that affect how people and government interact.
The methods used to find out what citizens think and believe are limited to elections, opinion polls, surveys and focus groups. These methods may produce valuable information, but they are costly, infrequent and often conducted at the convenience of government or special interests.
We believe that new technology has the potential to increase public engagement by tapping the collective intelligence of Californians every day, not just on election day.
While most politicians already use e-mail and social media, these channels are easily dominated by extreme views and tend to regurgitate material from mass media outlets.
We’re exploring an alternative.
The California Report Card is a mobile-friendly web-based platform that streamlines and organizes public input for the benefit of policymakers and elected officials. The report card allows participants to assign letter grades to key issues and to suggest new ideas for consideration; public officials then can use that information to inform their decisions.
In an experimental version of the report card released earlier this year, residents from all 58 counties assigned more than 20,000 grades to the state of California and also suggested issues they feel deserve priority at the state level. As one participant noted: “This platform allows us to have our voices heard. The ability to review and grade what others suggest is important. It enables elected officials to hear directly how Californians feel.”
Initial data confirm that Californians approve of our state’s rollout of Obamacare, but are very concerned about the future of our schools and universities.
There was also a surprise. California Report Card suggestions for top state priorities revealed consistently strong interest and support for more attention to disaster preparedness. Issues related to this topic were graded as highly important by a broad cross section of participants across the state. In response, we’re testing new versions of the report card that can focus on topics related to wildfires and earthquakes.
The report card is part of an ongoing collaboration between the CITRIS Data and Democracy Initiative at UC Berkeley and the Office of the Lieutenant Governor to explore how technology can improve public communication and bring the government closer to the people. Our hunch is that engineering concepts can be adapted for public policy to rapidly identify real insights from constituents and resist gaming by special interests.
You don’t have to wait for the next election to have your voice heard by officials in Sacramento. The California Report Card is now accessible from cell phones, desktop and tablet computers. We encourage you to contribute your own ideas to amplify California’s collective intelligence. It’s easy, just click “participate” on this website: CaliforniaReportCard.org”