Paper by Gianluigi Viscusi, Marco Castelli and Carlo Batini in Future Internet Journal: “Open data initiatives are characterized, in several countries, by a great extension of the number of data sets made available for access by public administrations, constituencies, businesses and other actors, such as journalists, international institutions and academics, to mention a few. However, most of the open data sets rely on selection criteria, based on a technology-driven perspective, rather than a focus on the potential public and social value of data to be published. Several experiences and reports confirm this issue, such as those of the Open Data Census. However, there are also relevant best practices. The goal of this paper is to investigate the different dimensions of a framework suitable to support public administrations, as well as constituencies, in assessing and benchmarking the social value of open data initiatives. The framework is tested on three initiatives, referring to three different countries, Italy, the United Kingdom and Tunisia. The countries have been selected to provide a focus on European and Mediterranean countries, considering also the difference in legal frameworks (civil law vs. common law countries)”
Cell Phone Guide For US Protesters, Updated 2014 Edition
EFF: “With major protests in the news again, we decided it’s time to update our cell phone guide for protesters. A lot has changed since we last published this report in 2011, for better and for worse. On the one hand, we’ve learned more about the massive volume of law enforcement requests for cell phone data—ranging from location information to actual content—and widespread use of dedicated cell phone surveillance technologies. On the other hand, strong Supreme Court opinions have eliminated any ambiguity about the unconstitutionality of warrantless searches of phones incident to arrest, and a growing national consensus says location data, too, is private.”
The Quiet Revolution: Open Data Is Transforming Citizen-Government Interaction
Maury Blackman at Wired: “The public’s trust in government is at an all-time low. This is not breaking news.
But what if I told you that just this past May, President Obama signed into law a bill that passed Congress with unanimous support. A bill that could fundamentally transform the way citizens interact with their government. This legislation could also create an entirely new, trillion-dollar industry right here in the U.S. It could even save lives.
On May 9th, the Digital Accountability and Transparency Act of 2014 (DATA Act) became law. There were very few headlines, no Rose Garden press conference.
I imagine most of you have never heard of the DATA Act. The bill with the nerdy name has the potential to revolutionize government. It requires federal agencies to make their spending data available in standardized, publicly accessible formats. Supporters of the legislation included Tea Partiers and the most liberal Democrats. But the bill only scratches the surface of what’s possible.
So What’s the Big Deal?
On his first day in Office, President Obama signed a memorandum calling for a more open and transparent government. The President wrote, “Openness will strengthen our democracy and promote efficiency and effectiveness in Government.” This was followed by the creation of Data.gov, a one-stop shop for all government data. The site does not just include financial data, but also a wealth of other information related to education, public safety, climate and much more—all available in open and machine-readable format. This has helped fuel an international movement.
Tech-minded citizens are building civic apps to bring government into the digital age; reporters can now connect the dots more easily; and billions of taxpayer dollars have been saved. And last year the President took us a step further. He signed an Executive Order making open government data the default option.
Cities and states have followed Washington’s lead with similar open data efforts on the local level. In San Francisco, the city’s Human Services Agency has partnered with Promptly, a text message notification service that alerts food stamp recipients (CalFresh) when they are at risk of being disenrolled from the program. This service is incredibly beneficial because most recipients do not realize their status has changed until they are in the grocery store checkout line, trying to buy food for their family.
Other products and services created using open data do more than just provide an added convenience—they actually have the potential to save lives. The PulsePoint mobile app sends text messages to citizens trained in CPR when someone within walking distance is experiencing a medical emergency that may require CPR. The app is currently available in almost 600 cities in 18 states, which is great. But shouldn’t a product this valuable be available to every city and state in the country?…”
Request for Proposals: Exploring the Implications of Government Release of Large Datasets
“The Berkeley Center for Law & Technology and Microsoft are issuing this request for proposals (RFP) to fund scholarly inquiry to examine the civil rights, human rights, security and privacy issues that arise from recent initiatives to release large datasets of government information to the public for analysis and reuse. This research may help ground public policy discussions and drive the development of a framework to avoid potential abuses of this data while encouraging greater engagement and innovation.
This RFP seeks to:
- Gain knowledge of the impact of the online release of large amounts of data generated by citizens’ interactions with government
- Imagine new possibilities for technical, legal, and regulatory interventions that avoid abuse
- Begin building a body of research that addresses these issues
– BACKGROUND –
Governments at all levels are releasing large datasets for analysis by anyone for any purpose—“Open Data.” Using Open Data, entrepreneurs may create new products and services, and citizens may use it to gain insight into the government. A plethora of time saving and other useful applications have emerged from Open Data feeds, including more accurate traffic information, real-time arrival of public transportation, and information about crimes in neighborhoods. Sometimes governments release large datasets in order to encourage the development of unimagined new applications. For instance, New York City has made over 1,100 databases available, some of which contain information that can be linked to individuals, such as a parking violation database containing license plate numbers and car descriptions.
Data held by the government is often implicitly or explicitly about individuals—acting in roles that have recognized constitutional protection, such as lobbyist, signatory to a petition, or donor to a political cause; in roles that require special protection, such as victim of, witness to, or suspect in a crime; in the role as businessperson submitting proprietary information to a regulator or obtaining a business license; and in the role of ordinary citizen. While open government is often presented as an unqualified good, sometimes Open Data can identify individuals or groups, leading to a more transparent citizenry. The citizen who foresees this growing transparency may be less willing to engage in government, as these transactions may be documented and released in a dataset to anyone to use for any imaginable purpose—including to deanonymize the database—forever. Moreover, some groups of citizens may have few options or no choice as to whether to engage in governmental activities. Hence, open data sets may have a disparate impact on certain groups. The potential impact of large-scale data and analysis on civil rights is an area of growing concern. A number of civil rights and media justice groups banded together in February 2014 to endorse the “Civil Rights Principles for the Era of Big Data” and the potential of new data systems to undermine longstanding civil rights protections was flagged as a “central finding” of a recent policy review by White House adviser John Podesta.
The Berkeley Center for Law & Technology (BCLT) and Microsoft are issuing this request for proposals in an effort to better understand the implications and potential impact of the release of data related to U.S. citizens’ interactions with their local, state and federal governments. BCLT and Microsoft will fund up to six grants, with a combined total of $300,000. Grantees will be required to participate in a workshop to present and discuss their research at the Berkeley Technology Law Journal (BTLJ) Spring Symposium. All grantees’ papers will be published in a dedicated monograph. Grantees’ papers that approach the issues from a legal perspective may also be published in the BTLJ. We may also hold a follow-up workshop in New York City or Washington, DC.
While we are primarily interested in funding proposals that address issues related to the policy impacts of Open Data, many of these issues are intertwined with general societal implications of “big data.” As a result, proposals that explore Open Data from a big data perspective are welcome; however, proposals solely focused on big data are not. We are open to proposals that address the following difficult questions. We are also open to all methods and disciplines, and are particularly interested in proposals from cross-disciplinary teams.
- To what extent does existing Open Data made available by city and state governments affect individual profiling? Do the effects change depending on the level of aggregation (neighborhood vs. cities)? What releases of information could foreseeably cause discrimination in the future? Will different groups in society be disproportionately impacted by Open Data?
- Should the use of Open Data be governed by a code of conduct or subject to a review process before being released? In order to enhance citizen privacy, should governments develop guidelines to release sampled or perturbed data, instead of entire datasets? When datasets contain potentially identifiable information, should there be a notice-and-comment proceeding that includes proposed technological solutions to anonymize, de-identify or otherwise perturb the data?
- Is there something fundamentally different about government services and the government’s collection of citizens’ data for basic needs in modern society such as power and water that requires governments to exercise greater due care than commercial entities?
- Companies have legal and practical mechanisms to shield data submitted to government from public release. What mechanisms do individuals have or should have to address misuse of Open Data? Could developments in the constitutional right to information policy as articulated in Whalen and Westinghouse Electric Co address Open Data privacy issues?
- Collecting data costs money, and its release could affect civil liberties. Yet it is being given away freely, sometimes to immensely profitable firms. Should governments license data for a fee and/or impose limits on its use, given its value?
- The privacy principle of “collection limitation” is under siege, with many arguing that use restrictions will be more efficacious for protecting privacy and more workable for big data analysis. Does the potential of Open Data justify eroding state and federal privacy act collection limitation principles? What are the ethical dimensions of a government system that deprives the data subject of the ability to obscure or prevent the collection of data about a sensitive issue? A move from collection restrictions to use regulation raises a number of related issues, detailed below.
- Are use restrictions efficacious in creating accountability? Consumer reporting agencies are regulated by use restrictions, yet they are not known for their accountability. How could use regulations be implemented in the context of Open Data efficaciously? Can a self-learning algorithm honor data use restrictions?
- If an Open Dataset were regulated by a use restriction, how could individuals police wrongful uses? How would plaintiffs overcome the likely defenses or proof of facts in a use regulation system, such as a burden to prove that data were analyzed and the product of that analysis was used in a certain way to harm the plaintiff? Will plaintiffs ever be able to beat First Amendment defenses?
- The President’s Council of Advisors on Science and Technology big data report emphasizes that analysis is not a “use” of data. Such an interpretation suggests that NSA metadata analysis and large-scale scanning of communications do not raise privacy issues. What are the ethical and legal implications of the “analysis is not use” argument in the context of Open Data?
- Open Data celebrates the idea that information collected by the government can be used by another person for various kinds of analysis. When analysts are not involved in the collection of data, they are less likely to understand its context and limitations. How do we ensure that this knowledge is maintained in a use regulation system?
- Former President William Clinton was admitted under a pseudonym for a procedure at a New York Hospital in 2004. The hospital detected 1,500 attempts by its own employees to access the President’s records. With snooping such a tempting activity, how could incentives be crafted to cause self-policing of government data and the self-disclosure of inappropriate uses of Open Data?
- It is clear that data privacy regulation could hamper some big data efforts. However, many examples of big data successes hail from highly regulated environments, such as health care and financial services—areas with statutory, common law, and IRB protections. What are the contours of privacy law that are compatible with big data and Open Data success and which are inherently inimical to it?
- In recent years, the problem of “too much money in politics” has been addressed with increasing disclosure requirements. Yet, distrust in government remains high, and individuals identified in donor databases have been subjected to harassment. Is the answer to problems of distrust in government even more Open Data?
- What are the ethical and epistemological implications of encouraging government decision-making based upon correlation analysis, without a rigorous understanding of cause and effect? Are there decisions that should not be left to just correlational proof? While enthusiasm for data science has increased, scientific journals are elevating their standards, with special scrutiny focused on hypothesis-free, multiple comparison analysis. What could legal and policy experts learn from experts in statistics about the nature and limits of open data?…
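One of the questions above asks whether governments should release sampled or perturbed data instead of entire datasets. A common concrete version of that idea is differential-privacy-style noise addition to aggregate counts before release. The sketch below is a toy illustration only, not a technique endorsed by the RFP; the ward names, counts, and epsilon value are invented:

```python
import math
import random

def perturbed_counts(counts, epsilon=1.0, seed=7):
    """Add Laplace(1/epsilon) noise to each count before release.
    Smaller epsilon means more noise and stronger privacy protection."""
    rng = random.Random(seed)
    noisy = {}
    for key, value in counts.items():
        # Inverse-CDF sampling of a Laplace(0, 1/epsilon) variable
        u = rng.random() - 0.5
        noise = -math.copysign(1.0, u) * math.log(1 - 2 * abs(u)) / epsilon
        noisy[key] = value + noise
    return noisy

# Hypothetical neighborhood-level counts from a city dataset
raw = {"ward_1": 120, "ward_2": 45, "ward_3": 8}
released = perturbed_counts(raw)
```

The design tradeoff the RFP questions gesture at is visible even in this sketch: small cells (like the count of 8) are distorted proportionally more, which protects individuals in small groups but degrades the data's analytic value.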
To submit a proposal, visit the Conference Management Toolkit (CMT) here.
Once you have created a profile, the site will allow you to submit your proposal.
If you have questions, please contact Chris Hoofnagle, principal investigator on this project.”
When Technologies Combine, Amazing Innovation Happens
FastCoexist: “Innovation occurs both within fields, and in combinations of fields. It’s perhaps the latter that ends up being most groundbreaking. When people of disparate expertise, mindset and ideas work together, new possibilities pop up.
In a new report, the Institute for the Future argues that “technological change is increasingly driven by the combination and recombination of foundational elements.” So, when we think about the future, we need to consider not just fundamental advances (say, in computing, materials, bioscience) but also the intersections of these technologies.
The report uses combination-analysis in the form of a map. IFTF selects 13 “territories”–what it calls “frontiers of innovation”–and then examines the linkages and overlaps. The result is 20 “combinational forecasts.” “These are the big stories, hot spots that will shape the landscape of technology in the coming decade,” the report explains. “Each combinatorial forecast emerges from the intersection of multiple territories.”…
Quantified Experiences
Advances in brain-imaging techniques will bring new transparency to our thoughts and feelings. “Assigning precise measurements to feelings like pain through neurofeedback and other techniques could allow for comparison, modulation, and manipulation of these feelings,” the report says. “Direct measurement of our once-private thoughts and feelings can help us understand other people’s experience but will also present challenges regarding privacy and definition of norms.”…
Code Is The Law
The law enforcement of the future may increasingly rely on sensors and programmable devices. “Governance is shifting from reliance on individual responsibility and human policing toward a system of embedded protocols and automatic rule enforcement,” the report says. That in turn means greater power for programmers who are effectively laying down the parameters of the new relationship between government and governed….”
The Skeleton Crew
Book Review by Edward Jay Epstein in the Wall Street Journal: “…Even in an age when we are tracked electronically by our phone companies at every single moment, about 4,000 unidentified corpses turn up in the U.S. every year, of which about half have been murdered. In 2007 no fewer than 13,500 sets of unidentified human remains were languishing in the evidence rooms of medical examiners, according to an analysis published in the National Institute of Justice Journal.
In her brilliant book “The Skeleton Crew,” Deborah Halber explains why local law enforcement often fails to investigate such deaths: “Unidentified corpses are like obtuse, financially strapped houseguests: they turn up uninvited, take up space reserved for more obliging visitors, require care and attention, and then, when you are ready for them to move on, they don’t have anywhere to go.” The result is that many of these remains are consigned to oblivion.
While the population of the anonymous dead receives only scant attention from the police or the media, it has given rise to a macabre subculture of Internet sleuthing. Ms. Halber chronicles with lucidity and wit how amateur investigators troll websites, such as the Doe Network, Official Cold Case Investigations and Websleuths Crime Sleuthing Community, and check online databases looking for matches between the reported missing and the unidentified dead. It is a grisly pursuit involving linking the images of dead bodies to the descriptions posted by people trying to find someone.
Ms. Halber devotes most of “The Skeleton Crew” to describing a handful of cases that have given rise to this bizarre avocation….”
The Quiet Movement to Make Government Fail Less Often
in The New York Times: “If you wanted to bestow the grandiose title of “most successful organization in modern history,” you would struggle to find a more obviously worthy nominee than the federal government of the United States.
In its earliest stirrings, it established a lasting and influential democracy. Since then, it has helped defeat totalitarianism (more than once), established the world’s currency of choice, sent men to the moon, built the Internet, nurtured the world’s largest economy, financed medical research that saved millions of lives and welcomed eager immigrants from around the world.
Of course, most Americans don’t think of their government as particularly successful. Only 19 percent say they trust the government to do the right thing most of the time, according to Gallup. Some of this mistrust reflects a healthy skepticism that Americans have always had toward centralized authority. And the disappointing economic growth of recent decades has made Americans less enamored of nearly every national institution.
But much of the mistrust really does reflect the federal government’s frequent failures – and progressives in particular will need to grapple with these failures if they want to persuade Americans to support an active government.
When the federal government is good, it’s very, very good. When it’s bad (or at least deeply inefficient), it’s the norm.
The evidence is abundant. Of the 11 large programs for low- and moderate-income people that have been subject to rigorous, randomized evaluation, only one or two show strong evidence of improving most beneficiaries’ lives. “Less than 1 percent of government spending is backed by even the most basic evidence of cost-effectiveness,” writes Peter Schuck, a Yale law professor, in his new book, “Why Government Fails So Often,” a sweeping history of policy disappointments.
As Mr. Schuck puts it, “the government has largely ignored the ‘moneyball’ revolution in which private-sector decisions are increasingly based on hard data.”
And yet there is some good news in this area, too. The explosion of available data has made evaluating success – in the government and the private sector – easier and less expensive than it used to be. At the same time, a generation of data-savvy policy makers and researchers has entered government and begun pushing it to do better. They have built on earlier efforts by the Bush and Clinton administrations.
The result is a flowering of experiments to figure out what works and what doesn’t.
New York City, Salt Lake City, New York State and Massachusetts have all begun efforts to tie program funding to results: the more effective a program is, the more money it and its backers receive. The programs span child care, job training and juvenile recidivism.
The approach is known as “pay for success,” and it’s likely to spread to Cleveland, Denver and California soon. David Cameron’s conservative government in Britain is also using it. The Obama administration likes the idea, and two House members – Todd Young, an Indiana Republican, and John Delaney, a Maryland Democrat – have introduced a modest bill to pay for a version known as “social impact bonds.”
The White House is also pushing for an expansion of randomized controlled trials to evaluate government programs. Such trials, Mr. Schuck notes, are “the gold standard” for any kind of evaluation. Using science as a model, researchers randomly select some people to enroll in a government program and others not to enroll. The researchers then study the outcomes of the two groups….”
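The randomized evaluation design described in the excerpt can be illustrated with a toy simulation. This is not any agency's actual evaluation code; the population size, effect size, and outcome noise are invented for illustration:

```python
import random
import statistics

def randomized_trial(n=1000, baseline=50.0, effect=5.0, noise=10.0, seed=42):
    """Toy RCT: randomly assign n participants to a program or a control
    group, then compare mean outcomes between the two groups."""
    rng = random.Random(seed)
    treatment, control = [], []
    for _ in range(n):
        outcome = rng.gauss(baseline, noise)     # outcome absent the program
        if rng.random() < 0.5:                   # coin-flip assignment
            treatment.append(outcome + effect)   # program shifts the outcome
        else:
            control.append(outcome)
    return statistics.mean(treatment) - statistics.mean(control)

estimated_effect = randomized_trial()
```

Because assignment is random, the difference in group means is an unbiased estimate of the program's effect; with realistic noise it lands near the true value of 5 but not exactly on it, which is why evaluations of this kind report confidence intervals rather than point claims.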
Selected Readings on Crowdsourcing Expertise
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Crowdsourcing enables leaders and citizens to work together to solve public problems in new and innovative ways. New tools and platforms enable citizens with differing levels of knowledge, expertise, experience and abilities to collaborate and solve problems together. Identifying experts, or individuals with specialized skills, knowledge or abilities with regard to a specific topic, and incentivizing their participation in crowdsourcing information, knowledge or experience to achieve a shared goal can enhance the efficiency and effectiveness of problem solving.
Selected Reading List (in alphabetical order)
- Katy Börner, Michael Conlon, Jon Corson-Rikert, and Ying Ding — VIVO: A Semantic Approach to Scholarly Networking and Discovery — an introduction to VIVO, a tool for representing information about researchers’ expertise and organizational relationships.
- Alessandro Bozzon, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci — Choosing the Right Crowd: Expert Finding in Social Networks — a paper exploring the challenge of identifying the expertise needed for a given problem through the use of social networks.
- Daren C. Brabham — The Myth of Amateur Crowds — a paper arguing that, contrary to popular belief, experts are more prevalent in crowdsourcing projects than hobbyists and amateurs.
- William H. Dutton — Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government — a paper arguing for more structured and well-managed crowdsourcing efforts within government to help harness the distributed expertise of citizens.
- Gagan Goel, Afshin Nikzad and Adish Singla – Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets – a paper exploring the intelligent tasking of Mechanical Turk workers based on varying levels of expertise.
- D. Gubanov, N. Korgin, D. Novikov and A. Kalkov – E-Expertise: Modern Collective Intelligence – an ebook focusing on the organizations and mechanisms of expert decisionmaking.
- Cathrine Holst – Expertise and Democracy – a collection of papers on the role of knowledge and expertise in modern democracies.
- Andrew King and Karim R. Lakhani — Using Open Innovation to Identify the Best Ideas — a paper examining different methods for opening innovation and tapping the “ideas cloud” of external expertise.
- Chengjiang Long, Gang Hua and Ashish Kapoor – Active Visual Recognition with Expertise Estimation in Crowdsourcing – a paper proposing a mechanism for identifying experts in a Mechanical Turk project.
- Beth Simone Noveck — “Peer to Patent”: Collective Intelligence, Open Review, and Patent Reform — a law review article introducing the idea of crowdsourcing expertise to mitigate the challenge of patent processing.
- Josiah Ober — Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment — a paper discussing the Relevant Expertise Aggregation (REA) model for improving democratic decision-making.
- Max H. Sims, Jeffrey Bigham, Henry Kautz and Marc W. Halterman – Crowdsourcing medical expertise in near real time – a paper describing the development of a mobile application to give healthcare providers better access to expertise.
- Alessandro Spina – Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies – a paper proposing increased crowdsourcing of expertise within the European Food Safety Authority.
Annotated Selected Reading List (in alphabetical order)
Börner, Katy, Michael Conlon, Jon Corson-Rikert, and Ying Ding. “VIVO: A Semantic Approach to Scholarly Networking and Discovery.” Synthesis Lectures on the Semantic Web: Theory and Technology 2, no. 1 (October 17, 2012): 1–178. http://bit.ly/17huggT.
- This e-book “provides an introduction to VIVO…a tool for representing information about research and researchers — their scholarly works, research interests, and organizational relationships.”
- VIVO is a response to the fact that, “Information for scholars — and about scholarly activity — has not kept pace with the increasing demands and expectations. Information remains siloed in legacy systems and behind various access controls that must be licensed or otherwise negotiated before access. Information representation is in its infancy. The raw material of scholarship — the data and information regarding previous work — is not available in common formats with common semantics.”
- Providing access to structured information on the work and experience of a diversity of scholars enables improved expert finding — “identifying and engaging experts whose scholarly works is of value to one’s own.” To find experts, one needs rich data regarding one’s own work and the work of potential related experts. The authors argue that expert finding is of increasing importance since, “[m]ulti-disciplinary and inter-disciplinary investigation is increasingly required to address complex problems.”
Bozzon, Alessandro, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci. “Choosing the Right Crowd: Expert Finding in Social Networks.” In Proceedings of the 16th International Conference on Extending Database Technology, 637–648. EDBT ’13. New York, NY, USA: ACM, 2013. http://bit.ly/18QbtY5.
- This paper explores the challenge of selecting experts within the population of social networks by considering the following problem: “given an expertise need (expressed for instance as a natural language query) and a set of social network members, who are the most knowledgeable people for addressing that need?”
- The authors come to the following conclusions:
- “profile information is generally less effective than information about resources that they directly create, own or annotate;
- resources which are produced by others (resources appearing on the person’s Facebook wall or produced by people that she follows on Twitter) help increasing the assessment precision;
- Twitter appears the most effective social network for expertise matching, as it very frequently outperforms all other social networks (either combined or alone);
- Twitter appears as well very effective for matching expertise in domains such as computer engineering, science, sport, and technology & games, but Facebook is also very effective in fields such as locations, music, sport, and movies & tv;
- surprisingly, LinkedIn appears less effective than other social networks in all domains (including computer science) and overall.”
Brabham, Daren C. “The Myth of Amateur Crowds.” Information, Communication & Society 15, no. 3 (2012): 394–410. http://bit.ly/1hdnGJV.
- Unlike most of the related literature, this paper focuses on bringing attention to the expertise already being tapped by crowdsourcing efforts rather than determining ways to identify more dormant expertise to improve the results of crowdsourcing.
- Brabham comes to two central conclusions: “(1) crowdsourcing is discussed in the popular press as a process driven by amateurs and hobbyists, yet empirical research on crowdsourcing indicates that crowds are largely self-selected professionals and experts who opt-in to crowdsourcing arrangements; and (2) the myth of the amateur in crowdsourcing ventures works to label crowds as mere hobbyists who see crowdsourcing ventures as opportunities for creative expression, as entertainment, or as opportunities to pass the time when bored. This amateur/hobbyist label then undermines the fact that large amounts of real work and expert knowledge are exerted by crowds for relatively little reward and to serve the profit motives of companies.”
Dutton, William H. Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government. One of a Series of Occasional Papers in Science and Technology Policy, Science and Technology Policy Institute, Institute for Defense Analyses, February 23, 2011. http://bit.ly/1c1bpEB.
- In this paper, a case is made for more structured and well-managed crowdsourcing efforts within government. Specifically, the paper “explains how collaborative networking can be used to harness the distributed expertise of citizens, as distinguished from citizen consultation, which seeks to engage citizens — each on an equal footing.” Instead of looking for answers from an undefined crowd, Dutton proposes “networking the public as advisors” by seeking to “involve experts on particular public issues and problems distributed anywhere in the world.”
- Dutton argues that expert-based crowdsourcing can work successfully for government for a number of reasons:
- Direct communication with a diversity of independent experts
- The convening power of government
- Compatibility with open government and open innovation
- Synergy with citizen consultation
- Building on experience with paid consultants
- Speed and urgency
- Centrality of documents to policy and practice.
- He also proposes a nine-step process for government to foster bottom-up collaboration networks:
- Do not reinvent the technology
- Focus on activities, not the tools
- Start small, but capable of scaling up
- Modularize
- Be open and flexible in finding and going to communities of experts
- Do not concentrate on one approach to all problems
- Cultivate the bottom-up development of multiple projects
- Experience networking and collaborating — be a networked individual
- Capture, reward, and publicize success.
Goel, Gagan, Afshin Nikzad and Adish Singla. “Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.” Under review by the International World Wide Web Conference (WWW). 2014. http://bit.ly/1qHBkdf
- Combining the notions of crowdsourcing expertise and crowdsourcing tasks, this paper focuses on the challenge within platforms like Mechanical Turk related to intelligently matching tasks to workers.
- The authors’ call for more strategic assignment of tasks in crowdsourcing markets is based on the understanding that “each worker has certain expertise and interests which define the set of tasks she can and is willing to do.”
- Focusing on developing meaningful incentives based on varying levels of expertise, the authors sought to create a mechanism that, “i) is incentive compatible in the sense that it is truthful for agents to report their true cost, ii) picks a set of workers and assigns them to the tasks they are eligible for in order to maximize the utility of the requester, iii) makes sure total payments made to the workers doesn’t exceed the budget of the requester.”
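As a rough illustration of the matching problem the paper studies (and not the authors’ truthful mechanism, which is considerably more involved), a greedy budgeted assignment can be sketched in a few lines of Python; the worker names, costs, and eligible-task sets below are invented:

```python
# Toy sketch: assign each task to the cheapest eligible worker
# while total payments stay within the requester's budget.
# This illustrates the matching constraint, not incentive compatibility.

def greedy_assign(workers, budget):
    """workers: list of (name, cost, eligible_tasks).
    Returns (assignment dict task -> worker name, total spent)."""
    assignment, spent = {}, 0
    # Consider cheapest workers first to stretch the budget.
    for name, cost, tasks in sorted(workers, key=lambda w: w[1]):
        for task in tasks:
            if task not in assignment and spent + cost <= budget:
                assignment[task] = name
                spent += cost
                break  # one task per worker in this toy model
    return assignment, spent

workers = [
    ("ann", 3, {"label", "translate"}),
    ("bob", 2, {"label"}),
    ("cid", 5, {"translate", "audio"}),
]
assignment, spent = greedy_assign(workers, budget=7)
print(assignment, spent)
```

With a budget of 7, the sketch assigns “label” to the cheaper worker and cannot afford the audio task; a real mechanism would additionally set payments so that reporting one’s true cost is optimal.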
Gubanov, D., N. Korgin, D. Novikov and A. Kalkov. E-Expertise: Modern Collective Intelligence. Springer, Studies in Computational Intelligence 558, 2014. http://bit.ly/U1sxX7
- In this book, the authors focus on “organization and mechanisms of expert decision-making support using modern information and communication technologies, as well as information analysis and collective intelligence technologies (electronic expertise or simply e-expertise).”
- The book, which “addresses a wide range of readers interested in management, decision-making and expert activity in political, economic, social and industrial spheres,” is broken into five chapters:
- Chapter 1 (E-Expertise) discusses the role of e-expertise in decision-making processes. The procedures of e-expertise are classified, their benefits and shortcomings are identified, and the efficiency conditions are considered.
- Chapter 2 (Expert Technologies and Principles) provides a comprehensive overview of modern expert technologies. A special emphasis is placed on the specifics of e-expertise. Moreover, the authors study the feasibility and reasonability of employing well-known methods and approaches in e-expertise.
- Chapter 3 (E-Expertise: Organization and Technologies) describes some examples of up-to-date technologies to perform e-expertise.
- Chapter 4 (Trust Networks and Competence Networks) deals with the problems of expert finding and grouping by information and communication technologies.
- Chapter 5 (Active Expertise) treats the problem of expertise stability against any strategic manipulation by experts or coordinators pursuing individual goals.
Holst, Cathrine. “Expertise and Democracy.” ARENA Report No 1/14, Center for European Studies, University of Oslo. http://bit.ly/1nm3rh4
- This report contains a set of 16 papers focused on the concept of “epistocracy,” meaning the “rule of knowers.” The papers inquire into the role of knowledge and expertise in modern democracies and especially in the European Union (EU). Major themes are: expert-rule and democratic legitimacy; the role of knowledge and expertise in EU governance; and the European Commission’s use of expertise.
- Expert-rule and democratic legitimacy
- Papers within this theme concentrate on issues such as the “implications of modern democracies’ knowledge and expertise dependence for political and democratic theory.” Topics include the accountability of experts, the legitimacy of expert arrangements within democracies, the role of evidence in policy-making, how expertise can be problematic in democratic contexts, and “ethical expertise” and its place in epistemic democracies.
- The role of knowledge and expertise in EU governance
- Papers within this theme concentrate on “general trends and developments in the EU with regard to the role of expertise and experts in political decision-making, the implications for the EU’s democratic legitimacy, and analytical strategies for studying expertise and democratic legitimacy in an EU context.”
- The European Commission’s use of expertise
- Papers within this theme concentrate on how the European Commission uses expertise and in particular the European Commission’s “expert group system.” Topics include the European Citizens’ Initiative, analytic-deliberative processes in EU food safety, the operation of EU environmental agencies, and the autonomy of various EU agencies.
King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. http://bit.ly/HjVOpi.
- In this paper, King and Lakhani examine different methods for opening innovation, where, “[i]nstead of doing everything in-house, companies can tap into the ideas cloud of external expertise to develop new products and services.”
- The three types of open innovation discussed are: opening the idea-creation process, through competitions where prizes are offered and designers bid with possible solutions; opening the idea-selection process, through “approval contests” in which outsiders vote to determine which entries should be pursued; and opening both idea generation and selection, an option used especially by organizations focused on quickly changing needs.
Long, Chengjiang, Gang Hua and Ashish Kapoor. “Active Visual Recognition with Expertise Estimation in Crowdsourcing.” 2013 IEEE International Conference on Computer Vision. December 2013. http://bit.ly/1lRWFur.
- This paper is focused on improving the crowdsourced labeling of visual datasets from platforms like Mechanical Turk. The authors note that, “Although it is cheap to obtain large quantity of labels through crowdsourcing, it has been well known that the collected labels could be very noisy. So it is desirable to model the expertise level of the labelers to ensure the quality of the labels. The higher the expertise level a labeler is at, the lower the label noises he/she will produce.”
- Based on the need for identifying expert labelers upfront, the authors developed an “active classifier learning system which determines which users to label which unlabeled examples” from collected visual datasets.
- The researchers’ experiments in identifying expert visual dataset labelers led to findings demonstrating that the “active selection” of expert labelers is beneficial in cutting through the noise of crowdsourcing platforms.
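The flavor of this idea can be sketched in a few lines of Python (a toy illustration under assumed data, not the authors’ active-learning system): estimate each labeler’s accuracy on a handful of gold-standard items, then favor the high-accuracy labelers when assigning new examples.

```python
# Toy sketch: score a crowd labeler against gold-standard labels.
# The item names and labels below are invented for illustration.

def labeler_accuracy(labels, gold):
    """labels: {item: label} from one labeler; gold: {item: true label}.
    Returns the labeler's accuracy on the gold items they answered."""
    scored = [item for item in labels if item in gold]
    if not scored:
        return 0.0  # no overlap with gold items; treat as unknown/worst
    correct = sum(labels[item] == gold[item] for item in scored)
    return correct / len(scored)

gold = {"img1": "cat", "img2": "dog"}
worker = {"img1": "cat", "img2": "dog", "img3": "cat"}
print(labeler_accuracy(worker, gold))  # prints 1.0
```

A labeling pipeline could then route unlabeled items preferentially to labelers whose estimated accuracy exceeds some threshold, which is the “active selection” intuition in simplified form.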
Noveck, Beth Simone. “‘Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform.” Harvard Journal of Law & Technology 20, no. 1 (Fall 2006): 123–162. http://bit.ly/HegzTT.
- This law review article introduces the idea of crowdsourcing expertise to mitigate the challenge of patent processing. Noveck argues that, “access to information is the crux of the patent quality problem. Patent examiners currently make decisions about the grant of a patent that will shape an industry for a twenty-year period on the basis of a limited subset of available information. Examiners may neither consult the public, talk to experts, nor, in many cases, even use the Internet.”
- Peer-to-Patent, which launched three years after this article, is based on the idea that, “The new generation of social software might not only make it easier to find friends but also to find expertise that can be applied to legal and policy decision-making. This way, we can improve upon the Constitutional promise to promote the progress of science and the useful arts in our democracy by ensuring that only worthy ideas receive that ‘odious monopoly’ of which Thomas Jefferson complained.”
Ober, Josiah. “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment.” American Political Science Review 107, no. 01 (2013): 104–122. http://bit.ly/1cgf857.
- In this paper, Ober argues that, “A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world.”
- Ober describes an approach to decision-making that aggregates expertise across multiple domains. This “Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest.”
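The intuition behind aggregating expertise-weighted judgments can be sketched as follows (a toy illustration, not Ober’s formal REA model; the options and weights are invented):

```python
# Toy sketch: each ballot is (option, weight), where the weight
# reflects the voter's relevant expertise on the matter at hand.
# The option with the highest total weighted support wins.

from collections import defaultdict

def rea_vote(ballots):
    """ballots: list of (option, expertise_weight). Returns the winner."""
    scores = defaultdict(float)
    for option, weight in ballots:
        scores[option] += weight
    return max(scores, key=scores.get)

ballots = [("dredge harbor", 0.9), ("build wall", 0.4),
           ("build wall", 0.3), ("dredge harbor", 0.2)]
print(rea_vote(ballots))  # prints "dredge harbor"
```

In Ober’s richer account, weights would come from domain-relevant expertise across multiple domains rather than a single scalar, but the aggregation step has this weighted-majority shape.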
Sims, Max H., Jeffrey Bigham, Henry Kautz and Marc W. Halterman. “Crowdsourcing medical expertise in near real time.” Journal of Hospital Medicine 9, no. 7, July 2014. http://bit.ly/1kAKvq7.
- In this article, the authors discuss the development of a mobile application called DocCHIRP, which was developed due to the fact that, “although the Internet creates unprecedented access to information, gaps in the medical literature and inefficient searches often leave healthcare providers’ questions unanswered.”
- The DocCHIRP pilot project used a “system of point-to-multipoint push notifications designed to help providers problem solve by crowdsourcing from their peers.”
- Healthcare providers (HCPs) sought to gain intelligence from the crowd, which included 85 registered users, on questions related to medication, complex medical decision making, standard of care, administrative matters, testing and referrals.
- The authors believe that, “if future iterations of the mobile crowdsourcing applications can address…adoption barriers and support the organic growth of the crowd of HCPs,” then “the approach could have a positive and transformative effect on how providers acquire relevant knowledge and care for patients.”
Spina, Alessandro. “Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies.” in Foundations of EU Food Law and Policy, eds. A. Alemmano and S. Gabbi. Ashgate, 2014. http://bit.ly/1k2EwdD.
- In this paper, Spina “presents some reflections on how the collaborative and crowdsourcing practices of Open Government could be integrated in the activities of EFSA [European Food Safety Authority] and other EU agencies,” with a particular focus on “highlighting the benefits of the Open Government paradigm for expert regulatory bodies in the EU.”
- Spina argues that the “crowdsourcing of expertise and the reconfiguration of the information flows between European agencies and the public could represent a concrete possibility of modernising the role of agencies with a new model that has a low financial burden and an almost immediate effect on the legal governance of agencies.”
- He concludes that, “It is becoming evident that in order to guarantee that the best scientific expertise is provided to EU institutions and citizens, EFSA should strive to use the best organisational models to source science and expertise.”
Accessible Law for the Internet Age
- Careful organization by article and section makes browsing a breeze.
- A site-wide search allows you to find the laws you’re looking for by topic.
- Scroll-over definitions translate legal jargon into common English.
- Downloadable legal code lets you take the law into your own hands.
- Best of all, everything on the site remains cost- and restriction-free.”
Is Crowdsourcing the Future for Legislation?
Brian Heaton in GovTech: “…While drafting legislation is traditionally the job of elected officials, an increasing number of lawmakers are using digital platforms such as Wikispaces and GitHub to give constituents a bigger hand in molding the laws they’ll be governed by. The practice has been used this year in both California and New York City, and shows no signs of slowing down anytime soon, experts say.
Trond Undheim, crowdsourcing expert and founder of Yegii Inc., a startup company that provides and ranks advanced knowledge assets in the areas of health care, technology, energy and finance, said crowdsourcing was “certainly viable” as a tool to help legislators understand what constituents are most passionate about.
“I’m a big believer in asking a wide variety of people the same question and crowdsourcing has become known as the long-tail of answers,” Undheim said. “People you wouldn’t necessarily think of have something useful to say.”
California Assemblyman Mike Gatto, D-Los Angeles, agreed. He’s spearheaded an effort this year to let residents craft legislation regarding probate law — a measure designed to allow a court to assign a guardian to a deceased person’s pet. Gatto used the online Wikispaces platform — which allows for Wikipedia-style editing and content contribution — to let anyone with an Internet connection collaborate on the legislation over a period of several months.
The topic of the bill may not have been headline news, but Gatto was encouraged by the media attention his experiment received. As a result, he’s committed to running another crowdsourced bill next year — just on a bigger, more mainstream public issue.
New York City Council Member Ben Kallos has a plethora of technology-related legislation being considered in the Big Apple. Many of the bills are open for public comment and editing on GitHub. In an interview with Government Technology last month, Kallos said he believes using crowdsourcing to comment on and edit legislation is empowering and creates a different sense of democracy where people can put forward their ideas.
County governments also are joining the crowdsourcing trend. The Catawba Regional Council of Governments in South Carolina and the Centralia Council of Governments in North Carolina are gathering opinions on how county leaders should plan for future growth in the region.
At a public forum earlier this year, attendees were given iPads to go online and review four growth options and record their views on which they preferred. The priorities outlined by citizens will be taken back to decision-makers in each of the counties to see how well existing plans match up with what the public wants.
Gatto said he’s encouraged by how quickly the crowdsourcing of policy has spread throughout the U.S. He said there’s a disconnect between governments and their constituencies who believe elected officials don’t listen. But that could change as crowdsourcing continues to make its impact on lawmakers.
“When you put out a call like I did and others have done and say ‘I’m going to let the public draft a law and whatever you draft, I’m committed to introducing it’ … I think that’s a powerful message,” Gatto said. “I think the public appreciates it because it makes them understand that the government still belongs to them.”
Protecting the Process
Despite the benefits crowdsourcing brings to the legislative process, there remain some question marks about whether it truly provides insight into the public’s feelings on an issue. For example, because many political issues are driven by the influence of special interest groups, what’s preventing those groups from manipulating the bill-drafting process?
Not much, according to Undheim. He cautioned policymakers to be aware of the motivations from people taking part in crowdsourcing efforts to write and edit laws. Gatto shared Undheim’s concerns, but noted that the platform he used for developing his probate law – Wikispaces – has safeguards in place so that a member of his staff can revert language of a crowdsourced bill back to a previous version if it’s determined that someone was trying to unduly influence the drafting process….”