The GovLab presents Demos for Democracy, an ongoing series of live, interactive online demos featuring designers and builders of the latest innovative governance platforms, tools and methods that foster greater openness and collaboration in how we govern.
Who: remesh, founded by PhD students Andrew Konya and Aaron Slodov, is an online public platform that gives a community, group, nation or planet of people the ability to speak with one voice representing the collective thinking of everyone in the group. remesh was prototyped at a HacKSU hackathon in early 2013 and has been under development over the past year.
What: Join us for a live demonstration of how remesh works before their official public launch. Participants will be given a link to test the platform during the live Google hangout. More information on what remesh does can be found here.
When: July 29, 2014, 2:00 – 2:30 PM EDT
Where: Online via Google Hangouts on Air. To RSVP and join, go to the Hangout Link. This event will be live tweeted at #democracydemos.
Bios:
Andrew Konya (CEO/Founder) is a PhD student in computational/theoretical physics at Kent State University. With extensive experience developing and implementing mathematical models for natural and man-made systems, Andrew brings a creative and versatile technical toolbox. This expertise, in concert with his passion for linguistics, led him to develop the first mathematical framework for collective speech. His goal is the completion of a conversation platform, built on this framework, which can make conversations between countries in conflict a viable alternative to war.
Aaron Slodov (COO/Founder) is a power systems engineering PhD student at Case Western Reserve University. A former engineer at both Google and Meetup.com, Aaron is experienced in the tech landscape and understands many of the current problems in the space. Through remesh's technology he hopes to bring paradigm-shifting change to the way we communicate and interact with our world.
RSVP and JOIN
We hope to see you on Tuesday! If you have any questions, email us at [email protected].
Recent progress in Open Data production and consumption
Examples from a Governmental institute (SMHI) and a collaborative EU research project (SWITCH-ON) by Arheimer, Berit; and Falkenroth, Esa: “The Swedish Meteorological and Hydrological Institute (SMHI) has a long tradition both in producing and consuming open data on a national, European and global scale. It is also promoting community building among water scientists in Europe by participating in and initiating collaborative projects. This presentation will exemplify the contemporary European movement imposed by the INSPIRE directive and the Open Data Strategy, by showing the progress in openness and the shift in attitudes during the last decade when handling Research Data and Public Sector Information at a national European institute. Moreover, the presentation will inform about a recently started collaborative project (EU FP7 project No 603587) coordinated by SMHI and called SWITCH-ON (http://water-switch-on.eu/). The project addresses water concerns and the currently untapped potential of open data for improved water management across the EU. The overall goal of the project is to make use of open data, and add value to society by repurposing and refining data from various sources. SWITCH-ON will establish new forms of water research and facilitate the development of new products and services based on principles of sharing and community building in the water society. The SWITCH-ON objectives are to use open data for implementing:
- an innovative spatial information platform with open data tailored for direct water assessments,
- an entirely new form of collaborative research for water-related sciences,
- fourteen new operational products and services dedicated to appointed end-users, and
- new business and knowledge to inform individual and collective decisions in line with Europe’s smart growth and environmental objectives.
The presentation will discuss challenges, progress and opportunities with the open data strategy, based on the experiences from working both at a Governmental institute and being part of the global research community.”
Generative Emergence: A New Discipline of Organizational, Entrepreneurial, and Social Innovation
New book by Benyamin Lichtenstein: “Culminating more than 30 years of research into evolution, complexity science, organizing and entrepreneurship, this book provides insights to scholars who are increasingly using emergence to explain social phenomena. In addition to providing the first comprehensive definition and framework for understanding emergence, it is the first publication of data from a year-long experimental study of emergence in high-potential ventures—a week-by-week longitudinal analysis of their processes based on over 750 interviews and 1000 hours of on-site observation. These data, combined with reports from over a dozen other studies, confirm the dynamics of the five-phase model in multiple contexts…
- Findings which show a major difference between an aspiration that generates a purposive drive for generative emergence, versus a performance-driven crisis that sparks organizational change and transformation. This difference has important implications for studies of entrepreneurship, innovation, and social change.
- A definition of emergence based on 100+ years of work in philosophy and philosophy of science, evolutionary studies, sociology, and organization science.
- The most inclusive review of complexity science published, to help reinvigorate and legitimize those methods in the social sciences.
- The Dynamic States Model—a new approach for understanding the non-linear growth and development of new ventures.
- In-depth examinations of more than twenty well-known emergence studies, to reveal their shared dynamics and underlying drivers.
- Proposals for applying the five-phase model—as a logic of emergence—to social innovation, organizational leadership, and entrepreneurial development.”
Privacy-Invading Technologies and Privacy by Design
New book by Demetrius Klitou: “Challenged by rapidly developing privacy-invading technologies (PITs), this book provides a convincing set of potential policy recommendations and practical solutions for safeguarding both privacy and security. It shows that benefits such as public security do not necessarily come at the expense of privacy and liberty overall.
Backed up by comprehensive study of four specific PITs – Body scanners; Public space CCTV microphones; Public space CCTV loudspeakers; and Human-implantable microchips (RFID implants/GPS implants) – the author shows how laws that regulate the design and development of PITs may more effectively protect privacy than laws that only regulate data controllers and the use of such technologies. New rules and regulations should therefore incorporate fundamental privacy principles through what is known as ‘Privacy by Design’.
The numerous sources explored by the author provide a workable overview of the positions of academia, industry, government and relevant international organizations and NGOs.
- Explores a relatively novel approach of protecting privacy
- Offers a convincing set of potential policy recommendations and practical solutions
- Provides a workable overview of the positions of academia, industry, government and relevant international organizations and NGOs”
Business Models That Take Advantage of Open Data Opportunities
In a session held on the first day of the event, Borlongan facilitated an interactive workshop to help would-be entrepreneurs understand how startups are building business models that take advantage of open data opportunities to create sustainable, employment-generating businesses.
Citing research from the McKinsey Global Institute that estimates the value of open data at $3 trillion globally, Borlongan said: “So the understanding of the open data process is usually: We throw open data over the wall, then we hold a hackathon, and then people will start making products off it, and then we make the $3 trillion.”
Borlongan argued that it is actually a “blurry identity to be an open data startup” and encouraged participants to unpack, with each of the presenting startups, exactly how income can be generated and a viable business built in this space.
Jeni Tennison, from the U.K.’s Open Data Institute (which supports 15 businesses in its Startup Programme), categorized two types of business models:
- Businesses that publish (but do not sell) open data.
- Businesses built on top of using open data.
Businesses That Publish but Do Not Sell Open Data
At the Open Data Institute, Tennison is investigating the possibility of an open address database that would provide street address data for every property in the U.K. She describes three types of business models that could be created by projects that generated and published such data:
Freemium: In this model, the bulk data of open addresses could be made available freely, “but if you want an API service, then you would pay for it.” Tennison also pointed to opportunities to degrade the freemium-level data—for example, making it available in bulk but not at a particularly granular level (unless you pay for it), or permitting reuse on a share-alike basis while charging for corporate use cases (similar to how OpenCorporates sells access to its data).
Cross-subsidy: In this approach, the data would be available, and the opportunities to generate income would come from providing extra services, like consultancy or white labeling data services alongside publishing the open data.
Network: In this business model, value is created by generating a network effect around the core business interest, which may not be the open data itself. As an example, Tennison suggested that if a post office or delivery company were to create the open address database, it might be interested in encouraging private citizens to collaboratively maintain or crowdsource the quality of the data. The return on this open data would then come from reduced delivery costs as the data's accuracy improved.
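The freemium split Tennison describes (free bulk data, paid API access and granularity) is ultimately just a tiering rule. A minimal sketch of how such a rule might be encoded, with invented tier names and granularity levels (nothing here comes from the Open Data Institute itself):

```python
# Hypothetical freemium tiers for an open address dataset:
# bulk data stays free at coarse granularity, while API access
# and full-address detail sit behind a paid plan.
TIERS = {
    "free": {"api_access": False, "granularity": "postcode-district"},
    "paid": {"api_access": True,  "granularity": "full-address"},
}

def allowed_granularity(tier: str) -> str:
    """Return the finest address granularity a tier may download in bulk."""
    return TIERS[tier]["granularity"]

def can_use_api(tier: str) -> bool:
    """Only paying users get programmatic (API) access."""
    return TIERS[tier]["api_access"]
```

The point of the sketch is that the open data itself is never withheld; only convenience (API) and precision (granularity) are monetized.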
Businesses Built on Top of Open Data
Six startups working in unique ways to make use of available open data also presented their business models to OKFestival attendees: Development Seed, Mapbox, OpenDataSoft, Enigma.io, Open Bank Project, and Snips.

Startup: Development Seed
What it does: Builds solutions for development, public health and citizen democracy challenges by creating open source tools and utilizing open data.
Open data API focus: Regularly uses open data APIs in its projects. For example, it worked with the World Bank to create a data visualization website built on top of the World Bank API.
Type of business model: Consultancy, but it has also created new businesses out of the products developed as part of its work, most notably Mapbox (see below).
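The World Bank API that Development Seed built on is a public REST interface whose responses arrive as a two-element JSON array of [metadata, records]. A minimal Python sketch of the request URL pattern and response parsing; the sample record below is canned for illustration rather than fetched live:

```python
import json

def worldbank_url(country: str, indicator: str, fmt: str = "json") -> str:
    """Build a World Bank API v2 request URL for one indicator series."""
    return (f"https://api.worldbank.org/v2/country/{country}"
            f"/indicator/{indicator}?format={fmt}")

# The API returns a two-element JSON array: [metadata, records].
# This sample mimics one population record for Brazil.
sample = json.loads("""
[{"page": 1, "pages": 1, "per_page": 50, "total": 1},
 [{"country": {"id": "BR", "value": "Brazil"},
   "date": "2013", "value": 200361925}]]
""")

metadata, records = sample
values = {r["date"]: r["value"] for r in records}
```

A visualization site like the one described would fetch such series per country and indicator, then chart the `values` mapping.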

Startup: Enigma.io
What it does: Open data platform with advanced discovery and search functions.
Open data API focus: Provides the Enigma API to allow programmatic access to all data sets and some analytics from the Enigma platform.
Type of business model: SaaS including a freemium plan with no degradation of data and with access to API calls; some venture funding; some contracting services to particular enterprises; creating new products in Enigma Labs for potential later sale.

Startup: Mapbox
What it does: Enables users to design and publish maps based on crowdsourced OpenStreetMap data.
Open data API focus: Uses OpenStreetMap APIs to draw data into its map-creation interface; provides the Mapbox API to allow programmatic creation of maps using Mapbox web services.
Type of business model: SaaS including freemium plan; some tailored contracts for big map users such as Foursquare and Evernote.

Startup: Open Bank Project
What it does: Creates an open source API for use by banks.
Open data API focus: Its core product is to build an API so that banks can use a standard, open source API tool when creating applications and web services for their clients.
Type of business model: Contract license with tiered SLAs depending on the number of applications built using the API; IT consultancy projects.

Startup: OpenDataSoft
What it does: Provides an open data publishing platform so that cities, governments, utilities and companies can publish their own data portal for internal and public use.
Open data API focus: It’s able to route data sources into the portal from a publisher’s APIs; provides automatic API-creation tools so that any data set uploaded to the portal is then available as an API.
Type of business model: SaaS model with freemium plan, pricing by number of data sets published and number of API calls made against the data, with free access for academic and civic initiatives.
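OpenDataSoft's automatic API creation means every uploaded dataset becomes queryable through a records-search endpoint on the publisher's portal. A hedged sketch of what a client request might look like, assuming the v1 search URL shape and using a hypothetical portal domain and dataset id:

```python
from urllib.parse import urlencode

def ods_search_url(portal: str, dataset: str, query: str = "", rows: int = 10) -> str:
    """Build an OpenDataSoft-style records-search URL.

    `portal` and `dataset` are placeholders: each publisher hosts
    its own portal domain, and every uploaded dataset gets an id.
    """
    params = urlencode({"dataset": dataset, "q": query, "rows": rows})
    return f"https://{portal}/api/records/1.0/search/?{params}"
```

For example, `ods_search_url("data.example.org", "tree-inventory", rows=5)` yields a URL any HTTP client could fetch, which is what turns a mere upload into an API without the publisher writing code.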

Startup: Snips
What it does: Predictive modeling for smart cities.
Open data API focus: Channels some open and client proprietary data into its modeling algorithm calculations via API; provides a predictive modeling API for clients’ use to programmatically generate solutions based on their data.
Type of business model: Creating a B2C app for sale as a revenue-generating product; individual contracts with cities and companies to solve particular pain points, such as using predictive modeling to help a post office company better match staff rosters to sales needs, and a consultancy project to create a visualization mapping tool that can predict the risk of car accidents for a city….”
Neuroeconomics, Judgment, and Decision Making
New edited book by Evan A. Wilhelms, and Valerie F. Reyna: “This volume explores how and why people make judgments and decisions that have economic consequences, and what the implications are for human well-being. It provides an integrated review of the latest research from many different disciplines, including social, cognitive, and developmental psychology; neuroscience and neurobiology; and economics and business.
The book takes a broad perspective and is written in an accessible way so as to reach a wide audience of advanced students and researchers interested in behavioral economics and related areas. This includes neuroscientists, neuropsychologists, clinicians, psychologists (developmental, social, and cognitive), economists and other social scientists; legal scholars and criminologists; professionals in public health and medicine; educators; evidence-based practitioners; and policy-makers.”
Introduction to Open Geospatial Consortium (OGC) Standards
Selected Readings on Crowdsourcing Expertise
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Crowdsourcing enables leaders and citizens to work together to solve public problems in new and innovative ways. New tools and platforms enable citizens with differing levels of knowledge, expertise, experience and abilities to collaborate and solve problems together. Identifying experts, or individuals with specialized skills, knowledge or abilities with regard to a specific topic, and incentivizing their participation in crowdsourcing information, knowledge or experience to achieve a shared goal can enhance the efficiency and effectiveness of problem solving.
Selected Reading List (in alphabetical order)
- Katy Börner, Michael Conlon, Jon Corson-Rikert, and Ying Ding — VIVO: A Semantic Approach to Scholarly Networking and Discovery — an introduction to VIVO, a tool for representing information about researchers’ expertise and organizational relationships.
- Alessandro Bozzon, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci — Choosing the Right Crowd: Expert Finding in Social Networks — a paper exploring the challenge of identifying the expertise needed for a given problem through the use of social networks.
- Daren C. Brabham — The Myth of Amateur Crowds — a paper arguing that, contrary to popular belief, experts are more prevalent in crowdsourcing projects than hobbyists and amateurs.
- William H. Dutton — Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government — a paper arguing for more structured and well-managed crowdsourcing efforts within government to help harness the distributed expertise of citizens.
- Gagan Goel, Afshin Nikzad and Adish Singla – Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets – a paper exploring the intelligent tasking of Mechanical Turk workers based on varying levels of expertise.
- D. Gubanov, N. Korgin, D. Novikov and A. Kalkov – E-Expertise: Modern Collective Intelligence – an ebook focusing on the organizations and mechanisms of expert decision-making.
- Cathrine Holst – Expertise and Democracy – a collection of papers on the role of knowledge and expertise in modern democracies.
- Andrew King and Karim R. Lakhani — Using Open Innovation to Identify the Best Ideas — a paper examining different methods for opening innovation and tapping the “ideas cloud” of external expertise.
- Chengjiang Long, Gang Hua and Ashish Kapoor – Active Visual Recognition with Expertise Estimation in Crowdsourcing – a paper proposing a mechanism for identifying experts in a Mechanical Turk project.
- Beth Simone Noveck — “Peer to Patent”: Collective Intelligence, Open Review, and Patent Reform — a law review article introducing the idea of crowdsourcing expertise to mitigate the challenge of patent processing.
- Josiah Ober — Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment — a paper discussing the Relevant Expertise Aggregation (REA) model for improving democratic decision-making.
- Max H. Sims, Jeffrey Bigham, Henry Kautz and Marc W. Halterman – Crowdsourcing medical expertise in near real time – a paper describing the development of a mobile application to give healthcare providers better access to expertise.
- Alessandro Spina – Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies – a paper proposing increased crowdsourcing of expertise within the European Food Safety Authority.
Annotated Selected Reading List (in alphabetical order)
Börner, Katy, Michael Conlon, Jon Corson-Rikert, and Ying Ding. “VIVO: A Semantic Approach to Scholarly Networking and Discovery.” Synthesis Lectures on the Semantic Web: Theory and Technology 2, no. 1 (October 17, 2012): 1–178. http://bit.ly/17huggT.
- This e-book “provides an introduction to VIVO…a tool for representing information about research and researchers — their scholarly works, research interests, and organizational relationships.”
- VIVO is a response to the fact that, “Information for scholars — and about scholarly activity — has not kept pace with the increasing demands and expectations. Information remains siloed in legacy systems and behind various access controls that must be licensed or otherwise negotiated before access. Information representation is in its infancy. The raw material of scholarship — the data and information regarding previous work — is not available in common formats with common semantics.”
- Providing access to structured information on the work and experience of a diversity of scholars enables improved expert finding — “identifying and engaging experts whose scholarly works is of value to one’s own.” To find experts, one needs rich data regarding one’s own work and the work of potential related experts. The authors argue that expert finding is of increasing importance since, “[m]ulti-disciplinary and inter-disciplinary investigation is increasingly required to address complex problems.”
Bozzon, Alessandro, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci. “Choosing the Right Crowd: Expert Finding in Social Networks.” In Proceedings of the 16th International Conference on Extending Database Technology, 637–648. EDBT ’13. New York, NY, USA: ACM, 2013. http://bit.ly/18QbtY5.
- This paper explores the challenge of selecting experts within the population of social networks by considering the following problem: “given an expertise need (expressed for instance as a natural language query) and a set of social network members, who are the most knowledgeable people for addressing that need?”
- The authors come to the following conclusions:
- “profile information is generally less effective than information about resources that they directly create, own or annotate;
- resources which are produced by others (resources appearing on the person’s Facebook wall or produced by people that she follows on Twitter) help increasing the assessment precision;
- Twitter appears the most effective social network for expertise matching, as it very frequently outperforms all other social networks (either combined or alone);
- Twitter appears as well very effective for matching expertise in domains such as computer engineering, science, sport, and technology & games, but Facebook is also very effective in fields such as locations, music, sport, and movies & tv;
- surprisingly, LinkedIn appears less effective than other social networks in all domains (including computer science) and overall.”
Brabham, Daren C. “The Myth of Amateur Crowds.” Information, Communication & Society 15, no. 3 (2012): 394–410. http://bit.ly/1hdnGJV.
- Unlike most of the related literature, this paper focuses on bringing attention to the expertise already being tapped by crowdsourcing efforts rather than determining ways to identify more dormant expertise to improve the results of crowdsourcing.
- Brabham comes to two central conclusions: “(1) crowdsourcing is discussed in the popular press as a process driven by amateurs and hobbyists, yet empirical research on crowdsourcing indicates that crowds are largely self-selected professionals and experts who opt-in to crowdsourcing arrangements; and (2) the myth of the amateur in crowdsourcing ventures works to label crowds as mere hobbyists who see crowdsourcing ventures as opportunities for creative expression, as entertainment, or as opportunities to pass the time when bored. This amateur/hobbyist label then undermines the fact that large amounts of real work and expert knowledge are exerted by crowds for relatively little reward and to serve the profit motives of companies.”
Dutton, William H. Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government. One of a Series of Occasional Papers in Science and Technology Policy, Science and Technology Policy Institute, Institute for Defense Analyses, February 23, 2011. http://bit.ly/1c1bpEB.
- In this paper, a case is made for more structured and well-managed crowdsourcing efforts within government. Specifically, the paper “explains how collaborative networking can be used to harness the distributed expertise of citizens, as distinguished from citizen consultation, which seeks to engage citizens — each on an equal footing.” Instead of looking for answers from an undefined crowd, Dutton proposes “networking the public as advisors” by seeking to “involve experts on particular public issues and problems distributed anywhere in the world.”
- Dutton argues that expert-based crowdsourcing can be successful for government for a number of reasons:
- Direct communication with a diversity of independent experts
- The convening power of government
- Compatibility with open government and open innovation
- Synergy with citizen consultation
- Building on experience with paid consultants
- Speed and urgency
- Centrality of documents to policy and practice.
- He also proposes a nine-step process for government to foster bottom-up collaboration networks:
- Do not reinvent the technology
- Focus on activities, not the tools
- Start small, but capable of scaling up
- Modularize
- Be open and flexible in finding and going to communities of experts
- Do not concentrate on one approach to all problems
- Cultivate the bottom-up development of multiple projects
- Experience networking and collaborating — be a networked individual
- Capture, reward, and publicize success.
Goel, Gagan, Afshin Nikzad and Adish Singla. “Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.” Under review by the International World Wide Web Conference (WWW). 2014. http://bit.ly/1qHBkdf
- Combining the notions of crowdsourcing expertise and crowdsourcing tasks, this paper focuses on the challenge within platforms like Mechanical Turk related to intelligently matching tasks to workers.
- The authors’ call for more strategic assignment of tasks in crowdsourcing markets is based on the understanding that “each worker has certain expertise and interests which define the set of tasks she can and is willing to do.”
- Focusing on developing meaningful incentives based on varying levels of expertise, the authors sought to create a mechanism that, “i) is incentive compatible in the sense that it is truthful for agents to report their true cost, ii) picks a set of workers and assigns them to the tasks they are eligible for in order to maximize the utility of the requester, iii) makes sure total payments made to the workers doesn’t exceed the budget of the requester.”
Gubanov, D., N. Korgin, D. Novikov and A. Kalkov. E-Expertise: Modern Collective Intelligence. Springer, Studies in Computational Intelligence 558, 2014. http://bit.ly/U1sxX7
- In this book, the authors focus on “organization and mechanisms of expert decision-making support using modern information and communication technologies, as well as information analysis and collective intelligence technologies (electronic expertise or simply e-expertise).”
- The book, which “addresses a wide range of readers interested in management, decision-making and expert activity in political, economic, social and industrial spheres,” is broken into five chapters:
- Chapter 1 (E-Expertise) discusses the role of e-expertise in decision-making processes. The procedures of e-expertise are classified, their benefits and shortcomings are identified, and the efficiency conditions are considered.
- Chapter 2 (Expert Technologies and Principles) provides a comprehensive overview of modern expert technologies. A special emphasis is placed on the specifics of e-expertise. Moreover, the authors study the feasibility and reasonability of employing well-known methods and approaches in e-expertise.
- Chapter 3 (E-Expertise: Organization and Technologies) describes some examples of up-to-date technologies to perform e-expertise.
- Chapter 4 (Trust Networks and Competence Networks) deals with the problems of expert finding and grouping by information and communication technologies.
- Chapter 5 (Active Expertise) treats the problem of expertise stability against any strategic manipulation by experts or coordinators pursuing individual goals.
Holst, Cathrine. “Expertise and Democracy.” ARENA Report No 1/14, Center for European Studies, University of Oslo. http://bit.ly/1nm3rh4
- This report contains a set of 16 papers focused on the concept of “epistocracy,” meaning the “rule of knowers.” The papers inquire into the role of knowledge and expertise in modern democracies and especially in the European Union (EU). Major themes are: expert-rule and democratic legitimacy; the role of knowledge and expertise in EU governance; and the European Commission’s use of expertise.
- Expert-rule and democratic legitimacy
- Papers within this theme concentrate on issues such as the “implications of modern democracies’ knowledge and expertise dependence for political and democratic theory.” Topics include the accountability of experts, the legitimacy of expert arrangements within democracies, the role of evidence in policy-making, how expertise can be problematic in democratic contexts, and “ethical expertise” and its place in epistemic democracies.
- The role of knowledge and expertise in EU governance
- Papers within this theme concentrate on “general trends and developments in the EU with regard to the role of expertise and experts in political decision-making, the implications for the EU’s democratic legitimacy, and analytical strategies for studying expertise and democratic legitimacy in an EU context.”
- The European Commission’s use of expertise
- Papers within this theme concentrate on how the European Commission uses expertise and in particular the European Commission’s “expert group system.” Topics include the European Citizen’s Initiative, analytic-deliberative processes in EU food safety, the operation of EU environmental agencies, and the autonomy of various EU agencies.
King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. http://bit.ly/HjVOpi.
- In this paper, King and Lakhani examine different methods for opening innovation, where, “[i]nstead of doing everything in-house, companies can tap into the ideas cloud of external expertise to develop new products and services.”
- The three types of open innovation discussed are: opening the idea-creation process (competitions where prizes are offered and designers bid with possible solutions); opening the idea-selection process (‘approval contests’ in which outsiders vote to determine which entries should be pursued); and opening both idea generation and selection (an option used especially by organizations focused on quickly changing needs).
Long, Chengjiang, Gang Hua and Ashish Kapoor. “Active Visual Recognition with Expertise Estimation in Crowdsourcing.” 2013 IEEE International Conference on Computer Vision. December 2013. http://bit.ly/1lRWFur.
- This paper is focused on improving the crowdsourced labeling of visual datasets from platforms like Mechanical Turk. The authors note that, “Although it is cheap to obtain large quantity of labels through crowdsourcing, it has been well known that the collected labels could be very noisy. So it is desirable to model the expertise level of the labelers to ensure the quality of the labels. The higher the expertise level a labeler is at, the lower the label noises he/she will produce.”
- Based on the need for identifying expert labelers upfront, the authors developed an “active classifier learning system which determines which users to label which unlabeled examples” from collected visual datasets.
- The researchers’ experiments in identifying expert visual dataset labelers led to findings demonstrating that the “active selection” of expert labelers is beneficial in cutting through the noise of crowdsourcing platforms.
Noveck, Beth Simone. “’Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform.” Harvard Journal of Law & Technology 20, no. 1 (Fall 2006): 123–162. http://bit.ly/HegzTT.
- This law review article introduces the idea of crowdsourcing expertise to mitigate the challenge of patent processing. Noveck argues that, “access to information is the crux of the patent quality problem. Patent examiners currently make decisions about the grant of a patent that will shape an industry for a twenty-year period on the basis of a limited subset of available information. Examiners may neither consult the public, talk to experts, nor, in many cases, even use the Internet.”
- Peer-to-Patent, which launched three years after this article, is based on the idea that, “The new generation of social software might not only make it easier to find friends but also to find expertise that can be applied to legal and policy decision-making. This way, we can improve upon the Constitutional promise to promote the progress of science and the useful arts in our democracy by ensuring that only worthy ideas receive that ‘odious monopoly’ of which Thomas Jefferson complained.”
Ober, Josiah. “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment.” American Political Science Review 107, no. 1 (2013): 104–122. http://bit.ly/1cgf857.
- In this paper, Ober argues that, “A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world.”
- Ober describes an approach to decision-making that aggregates expertise across multiple domains. This “Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest.”
Sims, Max H., Jeffrey Bigham, Henry Kautz and Marc W. Halterman. “Crowdsourcing medical expertise in near real time.” Journal of Hospital Medicine 9, no. 7 (July 2014). http://bit.ly/1kAKvq7.
- In this article, the authors discuss the development of a mobile application called DocCHIRP, which was developed due to the fact that, “although the Internet creates unprecedented access to information, gaps in the medical literature and inefficient searches often leave healthcare providers’ questions unanswered.”
- The DocCHIRP pilot project used a “system of point-to-multipoint push notifications designed to help providers problem solve by crowdsourcing from their peers.”
- Healthcare providers (HCPs) sought to gain intelligence from the crowd, which included 85 registered users, on questions related to medication, complex medical decision making, standard of care, administrative issues, testing and referrals.
- The authors believe that, “if future iterations of the mobile crowdsourcing applications can address…adoption barriers and support the organic growth of the crowd of HCPs,” then “the approach could have a positive and transformative effect on how providers acquire relevant knowledge and care for patients.”
Spina, Alessandro. “Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies.” In Foundations of EU Food Law and Policy, eds. A. Alemanno and S. Gabbi. Ashgate, 2014. http://bit.ly/1k2EwdD.
- In this paper, Spina “presents some reflections on how the collaborative and crowdsourcing practices of Open Government could be integrated in the activities of EFSA [European Food Safety Authority] and other EU agencies,” with a particular focus on “highlighting the benefits of the Open Government paradigm for expert regulatory bodies in the EU.”
- Spina argues that the “crowdsourcing of expertise and the reconfiguration of the information flows between European agencies and the public could represent a concrete possibility of modernising the role of agencies with a new model that has a low financial burden and an almost immediate effect on the legal governance of agencies.”
- He concludes that, “It is becoming evident that in order to guarantee that the best scientific expertise is provided to EU institutions and citizens, EFSA should strive to use the best organisational models to source science and expertise.”
GitHub: A Swiss Army knife for open government
FCW: “Today, more than 300 government agencies are using the platform for public and private development. Cities (Chicago, Philadelphia, San Francisco), states (New York, Washington, Utah) and countries (United Kingdom, Australia) are sharing code and paving a new road to civic collaboration….
Civic-focused organizations — such as the OpenGov Foundation, the Sunlight Foundation and the Open Knowledge Foundation — are also actively involved with original projects on GitHub. Those projects include the OpenGov Foundation’s Madison document-editing tool touted by the likes of Rep. Darrell Issa (R-Calif.) and the Open Knowledge Foundation’s CKAN, which powers hundreds of government data platforms around the world.
According to GovCode, an aggregator of public government open-source projects hosted on GitHub, there have been hundreds of individual contributors and nearly 90,000 code commits, which involve making a set of tentative changes permanent.
The nitty-gritty
Getting started on GitHub is similar to the process for other social networking platforms. Users create individual accounts and can set up “organizations” for agencies or cities. They can then create repositories (or repos) to collaborate on projects through an individual or organizational account. Other developers or organizations can download repo code for reuse or repurpose it in their own repositories (called forking), and make it available to others to do the same.
Collaborative aspects of GitHub include pull requests that allow developers to submit and accept updates to repos that build on and grow an open-source project. There are wikis, gists (code snippet sharing) and issue tracking for bugs, feature requests, or general questions and answers.
GitHub provides free code hosting for all public repos. Upgrade offerings include personal and organizational plans based on the number of private repos. For organizations that want a self-hosted GitHub development environment, GitHub Enterprise, used by the likes of CFPB, allows for self-hosted, private repos behind a firewall.
GitHub’s core user interface can be unwelcoming or even intimidating to the nondeveloper, but GitHub’s Pages package offers Web-hosting features that include domain mapping and lightweight content management tools such as static site generator Jekyll and text editor Atom.
Notable government projects that use Pages are the White House’s Project Open Data, 18F’s /Developer Program, CFPB’s Open Tech website and New York’s Open Data Handbook. Indeed, Wired recently commented that the White House’s open-data GitHub efforts “could help fix government.”…
See also: GitHub for Government (GovLab)
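The fork-and-pull workflow the FCW excerpt describes can be sketched with plain git commands. The script below simulates it entirely with local repositories (no GitHub account or network access), so repository names, branch names, and user identities are all illustrative stand-ins: a bare “upstream” repo plays the role of an agency’s GitHub project, a second clone plays the contributor’s fork, and a fetch-and-merge by the maintainer stands in for accepting a pull request.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "Upstream": the agency's public project (a bare repo stands in for GitHub)
git init -q --bare upstream.git

# The agency's own working clone, with an initial commit
git clone -q upstream.git agency
cd agency
git config user.email agency@example.com
git config user.name "Agency"
echo "civic data tool" > README.md
git add README.md
git commit -qm "initial commit"
git push -q origin HEAD
cd ..

# "Fork": a contributor takes a full copy of the repo to work in
git clone -q upstream.git contributor
cd contributor
git config user.email dev@example.com
git config user.name "Contributor"

# Changes go on a topic branch -- the unit of a pull request
git checkout -qb fix-typo
echo "civic data toolkit" > README.md
git commit -qam "clarify project name"
git push -q origin fix-typo

# "Pull request" accepted: the maintainer fetches and merges the branch
cd ../agency
git fetch -q origin fix-typo
git merge -q FETCH_HEAD

result=$(cat README.md)
echo "$result"
```

On a real GitHub project the fork and the merge happen through the web interface (the Fork button and the pull-request review page), but the underlying data flow is exactly this: a full copy of the repo, a topic branch of commits, and a merge back into the upstream repository.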
Liberating Data to Transform Health Care
Erika G. Martin, Natalie Helbig, and Nirav R. Shah on New York’s Open Data Experience in JAMA: “The health community relies on governmental survey, surveillance, and administrative data to track epidemiologic trends, identify risk factors, and study the health care delivery system. Since 2009, a quiet “open data” revolution has occurred. Catalyzed by President Obama’s open government directive, federal, state, and local governments are releasing deidentified data meeting 4 “open” criteria: public accessibility, availability in multiple formats, free of charge, and unlimited use and distribution rights. As of February 2014, HealthData.gov, the federal health data repository, has more than 1000 data sets, and Health Data NY, New York’s health data site, has 48 data sets with supporting charts and maps. Data range from health interview surveys to administrative transactions. The implicit logic is that making governmental data readily available will improve government transparency; increase opportunities for research, mobile health application development, and data-driven quality improvement; and make health-related information more accessible. Together, these activities have the potential to improve health care quality, reduce costs, facilitate population health planning and monitoring, and empower health care consumers to make better choices and live healthier lives.”