David Weidner at the Wall Street Journal: “For years investors have largely depended on three sources to distill the relentless onslaught of information about public companies: the companies themselves, Wall Street analysts and the media.
Each of these has its strengths, but they may have even bigger weaknesses. Companies spin. Analysts have conflicts of interest. The financial media is under deadline pressure and ill-equipped to act as a catch-all watchdog.
But in recent years, the tech whizzes out of Silicon Valley have been trying to democratize the markets. In 2010 I wrote about an effort called Moxy Vote, an online system for shareholders to cast ballots in proxy contests. Moxy Vote had some initial success but ran into regulatory trouble and failed to gain traction.
Some newer efforts are more promising, mostly because they depend on users, or some form of crowdsourcing, for their content. Crowdsourcing turns a need over to a large group, usually an online community, rather than to traditional paid employees or outside providers….
Estimize.com is one. It was founded in 2011 by former trader Leigh Drogen, but recently has undergone some significant expansion, adding a crowdsourced prediction for mergers and acquisitions. Estimize also boasts a track record: it claims it beats Wall Street analysts 65.9% of the time during earnings season. Like Seeking Alpha, however, Estimize leans heavily on pros or semi-pros. Nearly 5,000 of its contributors are analysts.
Closer to the social networking world there’s scutify.com, a website and mobile app that aggregates what’s being said about individual stocks on social networks, blogs and other sources. It highlights trending stocks and links to chatter on social networks. (The site is owned by Cody Willard, a contributor to MarketWatch, which is owned by Dow Jones, the publisher of The Wall Street Journal.)
Perhaps the most intriguing startup is TwoMargins.com. The site allows investors, analysts, average Joes — anyone, really — to annotate company releases. In that way, Two Margins potentially can tap the power of the crowd to provide a fourth source for the marketplace.
Two Margins, a startup funded by Bloomberg L.P.’s venture capital fund, borrows annotation technology that’s already in use on other sites such as genius.com and scrible.com. Participants can sign in with their Twitter or Facebook accounts and post to those networks from the site. (Dow Jones competes with Bloomberg in the provision of news and financial data.)
At this moment, Two Margins isn’t a game changer. Founders Gniewko Lubecki and Akash Kapur said the site is in a pre-beta phase, which is to say it’s sort of up and running and being constantly tweaked.
Right now there’s nothing close to the critical mass needed for an exhaustive look at company filings. There’s just a handful of users and fewer than a dozen company releases and filings available.
Still, in the first moments after Twitter Inc.’s earnings were released Tuesday, Two Margins’ most loyal users began to scour the release. “Looks like Twitter is getting significantly better at monetizing users,” wrote a user named “George” who had annotated the revenue line from the company’s financial statement. Another user, “Scott Paster,” noted Twitter’s stock option grants to executives were nearly as high as its reported loss.
“The sum is greater than its parts when you pull together a community of users,” Mr. Kapur said. “Widening access to these documents is one goal. The other goal is broadening the pool of knowledge that’s brought to bear on these documents.”
In the end, this new wave of tech-driven services may never capture enough users to make it into the investing mainstream. They all struggle with uninformed and inaccurate content, especially if they gain critical mass. Vetting is a problem.
For those reasons, it’s hard to predict whether these new entries will flourish or even survive. That’s not a bad thing. The march of technology will either improve on the idea or come up with a new one.
Ultimately, technology is making possible what hasn’t been possible before: free discussion, access and analysis of information. Some may see it as a threat to Wall Street, which has always charged for expert analysis. Really, though, these efforts are good for markets, which pride themselves on being fair and transparent.
It’s not just companies that should compete, but ideas too.”
This Exercise App Tracks Trends on How We Move In Different Cities
Mark Byrnes at CityLab: “An app designed to encourage exercise can also tell us a lot about the way different cities get from point A to B.
The app, called Human, runs in the background of your iPhone, automatically detecting activities like walking, cycling, running, and motorized transport. The point is to encourage you to exercise for at least 30 minutes a day.
Almost a year since Human launched (last August), its developers have released a stunning visualization of all that movement: 7.5 million miles traveled by the app’s users so far.
On the Human site, you can explore the mobility data for 30 different cities. Once you click on one, you’ll be greeted with a pie chart that shows the distribution of activity within that city, lined up against a pie chart that shows the international average.
In the case of Amsterdam, its transportation clichés are verified. App users in the bike-loving city use two wheels way more than they use four. And they walk about as much as anywhere else:
[Image: pie charts comparing Amsterdam’s activity mix with the international average]
Human then shows the paths traveled by its users. When it comes to Amsterdam, the results look almost exactly like the city’s entire street grid, no matter what physical activity is being shown:
[Image: maps of Amsterdam traced by users’ walking, cycling, running, and motorized-transport routes]
How to harness the wisdom of crowds to improve public service delivery and policymaking
Eddie Copeland in PolicyBytes: “…In summary, government has used technology to streamline transactions and better understand the public’s opinions. Yet it has failed to use it to radically change the way it works. Have public services been reinvented? Is government smaller and leaner? Have citizens, businesses and civic groups been offered the chance to take part in the work of government and improve their own communities? On all counts the answer is unequivocally no. What is needed, therefore, is a means to enable citizens to provide data to government to inform policymaking and to improve – or even help deliver – public services. What is needed is a Government Data Marketplace.
Government Data Marketplace
A Government Data Marketplace (GDM) would be a website that brought together public sector bodies that needed data with individuals, businesses and other organisations that could provide it. Imagine an open data portal in reverse: instead of government publishing its own datasets to be used by citizens and businesses, it would instead publish its data needs and invite citizens, businesses or community groups to provide that data (for free or in return for payment). Just as open data portals aim to provide datasets in standard, machine-readable formats, GDM would operate according to strict open standards, and provide a consistent and automated way to deliver data to government through APIs.
How would it work? Imagine a local council that wished to know where instances of graffiti occurred within its borough. The council would create an account on GDM and publish a new request, outlining the data it required (not dissimilar to someone posting a job on a site like Freelancer). Citizens, businesses and other organisations would be able to view that request on GDM and bid to offer the service. For example, an app-development company could offer to build an app that would enable citizens to photograph and locate instances of graffiti in the borough. The app would be able to upload the data to GDM. The council could connect its own IT system to GDM to pass the data to their own database.
Importantly, the app-development company would specify via GDM how much it would charge to provide the data. Other companies and organisations could offer competing bids for delivering the same – or an even better – service at different prices. Supportive local civic hacker groups could even offer to provide the data for free. Either way, the council would get the data it needed without having to collect it for itself, whilst also ensuring it paid the best price from a number of competing providers.
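To make the mechanics concrete, here is a minimal sketch of what this request-and-delivery exchange might look like over an open-standards API. It is purely illustrative: GDM is a proposal, not a real service, so the endpoint URLs, field names and payment model below are all invented for the example.

```python
# Purely hypothetical sketch: GDM does not exist, so the base URL, routes,
# field names and payment model here are invented for illustration.
import requests

GDM_BASE = "https://gdm.example.gov/api/v1"  # hypothetical endpoint

# 1. The council publishes a machine-readable description of the data it needs.
data_request = {
    "requester": "example-borough-council",
    "title": "Graffiti locations in the borough",
    "schema": {                      # the open standard submissions must follow
        "lat": "float",
        "lon": "float",
        "photo_url": "string",
        "reported_at": "ISO-8601 timestamp",
    },
    "payment": {"model": "per_record", "max_price_gbp": 0.05},
}
resp = requests.post(f"{GDM_BASE}/requests", json=data_request, timeout=10)
request_id = resp.json()["id"]       # assigned by the marketplace

# 2. A winning provider (e.g. the app developer's backend) delivers records
#    through the same common interface, so any council can consume them.
observation = {
    "lat": 51.5412,
    "lon": -0.0034,
    "photo_url": "https://provider.example.com/photos/8841.jpg",
    "reported_at": "2014-08-01T09:30:00Z",
}
requests.post(f"{GDM_BASE}/requests/{request_id}/records",
              json=observation, timeout=10)
```

Because every provider would deliver through the same schema-checked interface, a council could swap suppliers, or serve several councils from one integration, without any bespoke IT work.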
Since GDM would be a public marketplace, other local authorities would be able to see that a particular company had designed a graffiti-reporting solution for one council, and could ask for the same data to be collected in their own boroughs. This would be quick and easy for the developer, as instead of having to create a bespoke solution to work with each council’s IT system, they could connect to all of them using one common interface via GDM. That would be good for the company, as they could sell to a much larger market (the same solution would work for one council or all), and good for the councils, as they would benefit from cheaper prices generated from economies of scale. And since GDM would use open standards, if a council was unhappy with the data provided by one supplier, it could simply look to another company to provide the same information.
What would be the advantages of such a system? Firstly, innovation. GDM would free government from having to worry about what software it needed, and instead allow it to focus on the data it required to provide a service. To be clear: councils themselves do not need a graffiti app – they need data on where graffiti is. By focusing attention on its data needs, the public sector could let the market innovate to find the best solutions for providing it. That might be via an app, perhaps via a website, social media, or Internet of Things sensors, or maybe even using a completely new service that collected information in a radically different way. It will not matter – the right information would be provided in a common format via GDM.
Secondly, the potential cost savings of this approach would be many and considerable. At the very least, by creating a marketplace, the public sector would be able to source data at a competitive price. If several public sector bodies needed the same service via GDM, companies providing that data would be able to offer much cheaper prices for all, as instead of having to deal with hundreds of different organisations (and different interfaces) they could create one solution that worked for all of them. As prices became cheaper for standard solutions, this would in turn encourage more public sector bodies to converge on common ways of working, driving down costs still further. Yet these savings would be dwarfed by those possible if GDM could be used to source data that public sector bodies currently have to collect manually themselves. Imagine if, instead of deploying teams of inspectors to locate instances of X, Y or Z, it could source the same data from citizens via GDM.
There would be no limit to the potential applications to which GDM could be put by central and local government and other public sector bodies: graffiti, traffic levels, environmental issues, education or welfare. It could be used to crowdsource facts, figures, images, map coordinates, text – anything that can be collected as data. Government could request information on areas on which it previously had none, helping it to assign its finite resources and money in a much more targeted way. New York City’s Mayor’s Office of Data Analytics has demonstrated that up to 500% increases in the efficiency of providing some public services can be achieved, if only the right data is available.
For the private sector, GDM would stimulate the growth of innovative new companies offering community data, and make it easier for them to sell data solutions across the whole of the public sector. They could pioneer new data methods, and potentially even take over the provision of entire services which the public sector currently has to provide itself. For citizens, it would offer a means to genuinely get involved in solving issues that matter to their local communities, either by using apps made by businesses, or working to provide the data themselves.
And what about the benefits for policymaking? It is important to acknowledge that the idea of harnessing the wisdom of crowds for policymaking is currently experimental. Some applications, as in the case of Policy Futures Markets, have also been considered highly controversial. So which methods would be most effective? What would they look like? In what policy domains would they provide most value? The simple fact is that we do not know. What is certain, however, is that innovation in open policymaking and crowdsourcing ideas will never be achieved until a platform is available that allows such ideas to be tried and tested. GDM could be that platform.
Public sector bodies could experiment with asking citizens for information or answers to particular, fact-based questions, or even for predictions on future outcomes, to help inform their policymaking activities. The market could then innovate to develop solutions to source that data from citizens, using the many different models for harnessing the wisdom of crowds. The effectiveness of those initiatives could then be judged, and the techniques honed. In the worst-case scenario, if it did not work, money would not have been wasted on building the wrong platform – GDM would continue to have value in providing data for public service needs as described above….”
Crowdsourcing Ideas to Accelerate Economic Growth and Prosperity through a Strategy for American Innovation
White House Blog: “America’s future economic growth and international competitiveness depend crucially on our capacity to innovate. Creating the jobs and industries of the future will require making the right investments to unleash the unmatched creativity and imagination of the American people.
We want to gather bold ideas for how we as a nation can build on and extend into the future our historic strengths in innovation and discovery. Today we are calling on thinkers, doers, and entrepreneurs across the country to submit their proposals for promising new initiatives or pressing needs for renewed investment to be included in next year’s updated Strategy for American Innovation.
What will the next Strategy for American Innovation accomplish? In part, it’s up to you. Your input will help guide the Administration’s efforts to catalyze the transformative innovation in products, processes, and services that is the hallmark of American ingenuity.
Today, we released a set of questions for your comment, which you can access here and on Quora – an online platform that allows us to crowdsource ideas from the American people.
Among the questions we are posing today to innovators across the country are:
- What specific policies or initiatives should the Administration consider prioritizing in the next version of the Strategy for American Innovation?
- What are the biggest challenges to, and opportunities for, innovation in the United States that will generate long-term economic growth and rising standards of living for more Americans?
- What additional opportunities exist to develop high-impact platform technologies that reduce the time and cost associated with the “design, build, test” cycle for important classes of materials, products, and systems?
- What investments, strategies, or technological advancements, across both the public and private sectors, are needed to rebuild the U.S. “industrial commons” (i.e., regional manufacturing capabilities) and ensure the latest technologies can be produced here?
- What partnerships or novel models for collaboration between the Federal Government and regions should the Administration consider in order to promote innovation and the development of regional innovation ecosystems?
In today’s world of rapidly evolving technology, the Administration is adapting its approach to innovation-driven economic growth to reflect the emergence of new and exciting possibilities. Now is the time to gather input from the American people in order to envision and shape the innovations of the future. The full Request for Information can be found here and the 2011 Strategy for American Innovation can be found here. Comments are due by September 23, 2014, and can be sent to [email protected]. We look forward to hearing your ideas!”
Indonesian techies crowdsource election results
“The Indonesian techies, who work for multinational companies, were spurred into action after both presidential candidates claimed victory and accused each other of trying to rig the convoluted counting process, raising fears that the country’s young democracy was under threat.
Mr Najib and two friends took advantage of the decision by the national election commission (KPU) to upload the individual results from Indonesia’s 480,000 polling stations to its website for the first time, in an attempt to counter widespread fears about electoral fraud.
The three Indonesians scraped the voting data from the KPU website on to a database and then recruited 700 friends and acquaintances through Facebook to type in the results and check them. They uploaded the data to a website called kawalpemilu.org, which means “guard the election” in Indonesian.
Throughout the process, Mr Najib said he had to fend off hacking attacks, forcing him to shift data storage to a cloud-based service. The whole exercise cost $10 for a domain name and $0.10 for the data storage….”
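As a rough illustration of the pipeline described above, the sketch below shows the general shape of a scrape-then-double-enter workflow: pull each polling station's published tally, have two volunteers key it in independently, and flag disagreements. Everything in it is an assumption for the example; the URL pattern, data layout and volunteer interface are invented stand-ins, not the actual kawalpemilu.org code.

```python
# Illustrative only: the KPU URL pattern, form layout and volunteer API are
# invented stand-ins, not the real kawalpemilu.org implementation.
import random
import requests

def fetch_tally_form(station_id: int) -> bytes:
    # Hypothetical URL for a polling station's scanned tally form.
    url = f"https://pemilu.example.go.id/c1/{station_id}.jpg"
    return requests.get(url, timeout=30).content

def double_entry(image: bytes, volunteers: list) -> dict | None:
    # Two randomly chosen volunteers independently key in the same form.
    first, second = random.sample(volunteers, 2)
    a = first.transcribe(image)   # e.g. {"candidate_1": 120, "candidate_2": 95}
    b = second.transcribe(image)
    if a == b:
        return a                  # entries agree: accept the tally
    return None                   # mismatch: route the form to a third checker
```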
Brief survey of crowdsourcing for data mining
Paper by Guo Xintong, Wang Hongzhi, Yangqiu Song, and Gao Hong in Expert Systems with Applications: “Crowdsourcing allows large-scale and flexible invocation of human input for data gathering and analysis, which introduces a new paradigm of data mining process. Traditional data mining methods often require the experts in analytic domains to annotate the data. However, it is expensive and usually takes a long time. Crowdsourcing enables the use of heterogeneous background knowledge from volunteers and distributes the annotation process to small portions of efforts from different contributions. This paper reviews the state of the art of crowdsourcing for data mining in recent years. We first review the challenges and opportunities of data mining tasks using crowdsourcing, and summarize their framework. Then we highlight several exemplar works in each component of the framework, including question designing, data mining and quality control. Finally, we discuss the limitations of crowdsourcing for data mining and suggest related areas for future research.”
Incentivizing Peer Review
Jeffrey Marlow in Wired on “The Last Obstacle for Open Access Science”: “The Galapagos Islands’ Charles Darwin Foundation runs on an annual operating budget of about $3.5 million. With this money, the center conducts conservation research, enacts species-saving interventions, and provides educational resources about the fragile island ecosystems. As a science-based enterprise whose work would benefit greatly from the latest research findings on ecological management, evolution, and invasive species, there’s one glaring hole in the Foundation’s budget: the $800,000 it would cost per year for subscriptions to leading academic journals.
According to Richard Price, founder and CEO of Academia.edu, this episode is symptomatic of a larger problem. “A lot of research centers” – NGOs, academic institutions in the developing world – “are just out in the cold as far as access to top journals is concerned,” says Price. “Research is being commoditized, and it’s just another aspect of the digital divide between the haves and have-nots.”
Academia.edu is a key player in the movement toward open access scientific publishing, with over 11 million participants who have uploaded nearly 3 million scientific papers to the site. It’s easy to understand Price’s frustration with the current model, in which academics donate their time to review articles, pay for the right to publish articles, and pay for access to articles. According to Price, journals charge an average of $4000 per article: $1500 for production costs (reformatting, designing), $1500 to orchestrate peer review (labor costs for hiring editors, administrators), and $1000 of profit.
“If there were no legacy in the scientific publishing industry, and we were looking at the best way to disseminate and view scientific results,” proposes Price, “things would look very different. Our vision is to build a complete replacement for scientific publishing,” one that would allow budget-constrained organizations like the CDF full access to information that directly impacts their work.
But getting to a sustainable new world order requires a thorough overhaul of the academic publishing industry. The alternative vision – of “open science” – has two key properties: the uninhibited sharing of research findings, and a new peer review system that incorporates the best of the scientific community’s feedback. Several groups have made progress on the former, but the latter has proven particularly difficult given the current incentive structure. The currency of scientific research is the number of papers you’ve published and their citation counts – the number of times other researchers have referred to your work in their own publications. The emphasis is on creation of new knowledge – a worthy goal, to be sure – but substantial contributions to the quality, packaging, and contextualization of that knowledge in the form of peer review go largely unrecognized. As a result, researchers view their role as reviewers as a chore, a time-consuming task required to sustain the ecosystem of research dissemination.
“Several experiments in this space have tried to incorporate online comment systems,” explains Price, “and the result is that putting a comment box online and expecting high quality comments to flood in is just unrealistic. My preference is to come up with a system where you’re just as motivated to share your feedback on a paper as you are to share your own findings.” In order to make this lofty aim a reality, reviewers’ contributions would need to be recognized. “You need something more nuanced, and more qualitative,” says Price. “For example, maybe you gather reputation points from your community online.” Translating such metrics into tangible benefits up the food chain – hirings, tenure decisions, awards – is a broader community shift that will no doubt take time.
A more iterative peer review process could allow the community to better police faulty methods by crowdsourcing their evaluation. “90% of scientific studies are not reproducible,” claims Price; a problem that is exacerbated by the strong bias toward positive results. Journals may be unlikely to publish methodological refutations, but a flurry of well-supported comments attached to a paper online could convince the researchers to marshal more convincing evidence. Typically, this sort of feedback cycle takes years….”
Crowdsourcing Parking Lot Occupancy using a Mobile Phone Application
Paper by Erfan Davami and Gita Sukthankar, available at ASE@360: “Participatory sensing is a specialized form of crowdsourcing for mobile devices in which the users act as sensors to report on local environmental conditions.
- This poster describes the process of prototyping a mobile phone crowdsourcing app for monitoring parking availability on a large university campus.
- We present a case study of how an agent-based urban model can be used to perform a sensitivity analysis of the comparative susceptibility of different data fusion paradigms to potentially troublesome user behaviors: (1) poor user enrollment, (2) infrequent usage, and (3) a preponderance of untrustworthy users.”
Selected Readings on Crowdsourcing Expertise
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Crowdsourcing enables leaders and citizens to work together to solve public problems in new and innovative ways. New tools and platforms enable citizens with differing levels of knowledge, expertise, experience and abilities to collaborate and solve problems together. Identifying experts, or individuals with specialized skills, knowledge or abilities with regard to a specific topic, and incentivizing their participation in crowdsourcing information, knowledge or experience to achieve a shared goal can enhance the efficiency and effectiveness of problem solving.
Selected Reading List (in alphabetical order)
- Katy Börner, Michael Conlon, Jon Corson-Rikert, and Ying Ding — VIVO: A Semantic Approach to Scholarly Networking and Discovery — an introduction to VIVO, a tool for representing information about researchers’ expertise and organizational relationships.
- Alessandro Bozzon, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci — Choosing the Right Crowd: Expert Finding in Social Networks — a paper exploring the challenge of identifying the expertise needed for a given problem through the use of social networks.
- Daren C. Brabham — The Myth of Amateur Crowds — a paper arguing that, contrary to popular belief, experts are more prevalent in crowdsourcing projects than hobbyists and amateurs.
- William H. Dutton — Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government — a paper arguing for more structured and well-managed crowdsourcing efforts within government to help harness the distributed expertise of citizens.
- Gagan Goel, Afshin Nikzad and Adish Singla — Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets — a paper exploring the intelligent tasking of Mechanical Turk workers based on varying levels of expertise.
- D. Gubanov, N. Korgin, D. Novikov and A. Kalkov — E-Expertise: Modern Collective Intelligence — an ebook focusing on the organizations and mechanisms of expert decision-making.
- Cathrine Holst — Expertise and Democracy — a collection of papers on the role of knowledge and expertise in modern democracies.
- Andrew King and Karim R. Lakhani — Using Open Innovation to Identify the Best Ideas — a paper examining different methods for opening innovation and tapping the “ideas cloud” of external expertise.
- Chengjiang Long, Gang Hua and Ashish Kapoor — Active Visual Recognition with Expertise Estimation in Crowdsourcing — a paper proposing a mechanism for identifying experts in a Mechanical Turk project.
- Beth Simone Noveck — “Peer to Patent”: Collective Intelligence, Open Review, and Patent Reform — a law review article introducing the idea of crowdsourcing expertise to mitigate the challenge of patent processing.
- Josiah Ober — Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment — a paper discussing the Relevant Expertise Aggregation (REA) model for improving democratic decision-making.
- Max H. Sims, Jeffrey Bigham, Henry Kautz and Marc W. Halterman — Crowdsourcing medical expertise in near real time — a paper describing the development of a mobile application to give healthcare providers better access to expertise.
- Alessandro Spina — Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies — a paper proposing increased crowdsourcing of expertise within the European Food Safety Authority.
Annotated Selected Reading List (in alphabetical order)
Börner, Katy, Michael Conlon, Jon Corson-Rikert, and Ying Ding. “VIVO: A Semantic Approach to Scholarly Networking and Discovery.” Synthesis Lectures on the Semantic Web: Theory and Technology 2, no. 1 (October 17, 2012): 1–178. http://bit.ly/17huggT.
- This e-book “provides an introduction to VIVO…a tool for representing information about research and researchers — their scholarly works, research interests, and organizational relationships.”
- VIVO is a response to the fact that, “Information for scholars — and about scholarly activity — has not kept pace with the increasing demands and expectations. Information remains siloed in legacy systems and behind various access controls that must be licensed or otherwise negotiated before access. Information representation is in its infancy. The raw material of scholarship — the data and information regarding previous work — is not available in common formats with common semantics.”
- Providing access to structured information on the work and experience of a diversity of scholars enables improved expert finding — “identifying and engaging experts whose scholarly works is of value to one’s own.” To find experts, one needs rich data regarding one’s own work and the work of potential related experts. The authors argue that expert finding is of increasing importance since, “[m]ulti-disciplinary and inter-disciplinary investigation is increasingly required to address complex problems.”
Bozzon, Alessandro, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci. “Choosing the Right Crowd: Expert Finding in Social Networks.” In Proceedings of the 16th International Conference on Extending Database Technology, 637–648. EDBT ’13. New York, NY, USA: ACM, 2013. http://bit.ly/18QbtY5.
- This paper explores the challenge of selecting experts within the population of social networks by considering the following problem: “given an expertise need (expressed for instance as a natural language query) and a set of social network members, who are the most knowledgeable people for addressing that need?”
- The authors come to the following conclusions:
- “profile information is generally less effective than information about resources that they directly create, own or annotate;
- resources which are produced by others (resources appearing on the person’s Facebook wall or produced by people that she follows on Twitter) help increasing the assessment precision;
- Twitter appears the most effective social network for expertise matching, as it very frequently outperforms all other social networks (either combined or alone);
- Twitter appears as well very effective for matching expertise in domains such as computer engineering, science, sport, and technology & games, but Facebook is also very effective in fields such as locations, music, sport, and movies & tv;
- surprisingly, LinkedIn appears less effective than other social networks in all domains (including computer science) and overall.”
Brabham, Daren C. “The Myth of Amateur Crowds.” Information, Communication & Society 15, no. 3 (2012): 394–410. http://bit.ly/1hdnGJV.
- Unlike most of the related literature, this paper focuses on bringing attention to the expertise already being tapped by crowdsourcing efforts rather than determining ways to identify more dormant expertise to improve the results of crowdsourcing.
- Brabham comes to two central conclusions: “(1) crowdsourcing is discussed in the popular press as a process driven by amateurs and hobbyists, yet empirical research on crowdsourcing indicates that crowds are largely self-selected professionals and experts who opt-in to crowdsourcing arrangements; and (2) the myth of the amateur in crowdsourcing ventures works to label crowds as mere hobbyists who see crowdsourcing ventures as opportunities for creative expression, as entertainment, or as opportunities to pass the time when bored. This amateur/hobbyist label then undermines the fact that large amounts of real work and expert knowledge are exerted by crowds for relatively little reward and to serve the profit motives of companies.”
Dutton, William H. Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government. One of a Series of Occasional Papers in Science and Technology Policy, Science and Technology Policy Institute, Institute for Defense Analyses, February 23, 2011. http://bit.ly/1c1bpEB.
- In this paper, a case is made for more structured and well-managed crowdsourcing efforts within government. Specifically, the paper “explains how collaborative networking can be used to harness the distributed expertise of citizens, as distinguished from citizen consultation, which seeks to engage citizens — each on an equal footing.” Instead of looking for answers from an undefined crowd, Dutton proposes “networking the public as advisors” by seeking to “involve experts on particular public issues and problems distributed anywhere in the world.”
- Dutton argues that expert-based crowdsourcing can be used successfully by government for a number of reasons:
- Direct communication with a diversity of independent experts
- The convening power of government
- Compatibility with open government and open innovation
- Synergy with citizen consultation
- Building on experience with paid consultants
- Speed and urgency
- Centrality of documents to policy and practice.
- He also proposes a nine-step process for government to foster bottom-up collaboration networks:
- Do not reinvent the technology
- Focus on activities, not the tools
- Start small, but capable of scaling up
- Modularize
- Be open and flexible in finding and going to communities of experts
- Do not concentrate on one approach to all problems
- Cultivate the bottom-up development of multiple projects
- Experience networking and collaborating — be a networked individual
- Capture, reward, and publicize success.
Goel, Gagan, Afshin Nikzad and Adish Singla. “Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.” Under review by the International World Wide Web Conference (WWW). 2014. http://bit.ly/1qHBkdf
- Combining the notions of crowdsourcing expertise and crowdsourcing tasks, this paper focuses on the challenge within platforms like Mechanical Turk related to intelligently matching tasks to workers.
- The authors’ call for more strategic assignment of tasks in crowdsourcing markets is based on the understanding that “each worker has certain expertise and interests which define the set of tasks she can and is willing to do.”
- Focusing on developing meaningful incentives based on varying levels of expertise, the authors sought to create a mechanism that, “i) is incentive compatible in the sense that it is truthful for agents to report their true cost, ii) picks a set of workers and assigns them to the tasks they are eligible for in order to maximize the utility of the requester, iii) makes sure total payments made to the workers doesn’t exceed the budget of the requester.”
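For intuition about constraint (iii), here is a toy greedy selection rule. To be clear, this is not the authors' mechanism (a naive greedy rule like this is generally not truthful); it only sketches the underlying budgeted optimization of picking workers without overspending.

```python
# Toy illustration of budget-constrained worker selection (not the paper's
# incentive-compatible mechanism): hire workers in order of utility per unit
# of reported cost until the requester's budget is exhausted.
def select_workers(workers, budget):
    """workers: list of (name, cost, utility); returns the hired names."""
    hired, spent = [], 0.0
    for name, cost, utility in sorted(workers,
                                      key=lambda w: w[2] / w[1], reverse=True):
        if spent + cost <= budget:   # skip anyone who would break the budget
            hired.append(name)
            spent += cost
    return hired

print(select_workers([("ann", 4, 10), ("bo", 5, 9), ("cy", 2, 5)], budget=7))
# -> ['ann', 'cy']: ratios are 2.5, 1.8 and 2.5, and hiring bo would overspend
```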
Gubanov, D., N. Korgin, D. Novikov and A. Kalkov. E-Expertise: Modern Collective Intelligence. Springer, Studies in Computational Intelligence 558, 2014. http://bit.ly/U1sxX7
- In this book, the authors focus on “organization and mechanisms of expert decision-making support using modern information and communication technologies, as well as information analysis and collective intelligence technologies (electronic expertise or simply e-expertise).”
- The book, which “addresses a wide range of readers interested in management, decision-making and expert activity in political, economic, social and industrial spheres,” is broken into five chapters:
- Chapter 1 (E-Expertise) discusses the role of e-expertise in decision-making processes. The procedures of e-expertise are classified, their benefits and shortcomings are identified, and the efficiency conditions are considered.
- Chapter 2 (Expert Technologies and Principles) provides a comprehensive overview of modern expert technologies. A special emphasis is placed on the specifics of e-expertise. Moreover, the authors study the feasibility and reasonability of employing well-known methods and approaches in e-expertise.
- Chapter 3 (E-Expertise: Organization and Technologies) describes some examples of up-to-date technologies to perform e-expertise.
- Chapter 4 (Trust Networks and Competence Networks) deals with the problems of expert finding and grouping by information and communication technologies.
- Chapter 5 (Active Expertise) treats the problem of expertise stability against any strategic manipulation by experts or coordinators pursuing individual goals.
Holst, Cathrine. “Expertise and Democracy.” ARENA Report No 1/14, Center for European Studies, University of Oslo. http://bit.ly/1nm3rh4
- This report contains a set of 16 papers focused on the concept of “epistocracy,” meaning the “rule of knowers.” The papers inquire into the role of knowledge and expertise in modern democracies and especially in the European Union (EU). Major themes are: expert-rule and democratic legitimacy; the role of knowledge and expertise in EU governance; and the European Commission’s use of expertise.
- Expert-rule and democratic legitimacy
- Papers within this theme concentrate on issues such as the “implications of modern democracies’ knowledge and expertise dependence for political and democratic theory.” Topics include the accountability of experts, the legitimacy of expert arrangements within democracies, the role of evidence in policy-making, how expertise can be problematic in democratic contexts, and “ethical expertise” and its place in epistemic democracies.
- The role of knowledge and expertise in EU governance
- Papers within this theme concentrate on “general trends and developments in the EU with regard to the role of expertise and experts in political decision-making, the implications for the EU’s democratic legitimacy, and analytical strategies for studying expertise and democratic legitimacy in an EU context.”
- The European Commission’s use of expertise
- Papers within this theme concentrate on how the European Commission uses expertise and in particular the European Commission’s “expert group system.” Topics include the European Citizen’s Initiative, analytic-deliberative processes in EU food safety, the operation of EU environmental agencies, and the autonomy of various EU agencies.
King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. http://bit.ly/HjVOpi.
- In this paper, King and Lakhani examine different methods for opening innovation, where, “[i]nstead of doing everything in-house, companies can tap into the ideas cloud of external expertise to develop new products and services.”
- The three types of open innovation discussed are: opening the idea-creation process (competitions where prizes are offered and designers bid with possible solutions); opening the idea-selection process (“approval contests” in which outsiders vote to determine which entries should be pursued); and opening both idea generation and selection, an option used especially by organizations focused on quickly changing needs.
Long, Chengjiang, Gang Hua and Ashish Kapoor. “Active Visual Recognition with Expertise Estimation in Crowdsourcing.” 2013 IEEE International Conference on Computer Vision. December 2013. http://bit.ly/1lRWFur.
- This paper is focused on improving the crowdsourced labeling of visual datasets from platforms like Mechanical Turk. The authors note that, “Although it is cheap to obtain large quantity of labels through crowdsourcing, it has been well known that the collected labels could be very noisy. So it is desirable to model the expertise level of the labelers to ensure the quality of the labels. The higher the expertise level a labeler is at, the lower the label noises he/she will produce.”
- Based on the need for identifying expert labelers upfront, the authors developed an “active classifier learning system which determines which users to label which unlabeled examples” from collected visual datasets.
- The researchers’ experiments in identifying expert visual dataset labelers led to findings demonstrating that the “active selection” of expert labelers is beneficial in cutting through the noise of crowdsourcing platforms.
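As a generic illustration of modeling labeler expertise (not the authors' method, which couples an active classifier with per-labeler expertise estimation), one common pattern is to alternate between a reliability-weighted vote over labels and re-estimating each labeler's reliability from agreement with the consensus. The sketch below is an assumption-laden simplification for binary labels:

```python
# Generic sketch of expertise-weighted label aggregation, shown only to
# illustrate the idea of modeling labeler reliability; it is not the
# Gaussian-process-based approach used in the Long, Hua and Kapoor paper.
from collections import defaultdict

def aggregate(labels, n_rounds=10):
    """labels: list of (labeler, item, vote) tuples with vote in {0, 1}."""
    reliability = defaultdict(lambda: 0.7)   # optimistic prior for everyone
    consensus = {}
    for _ in range(n_rounds):
        # (a) reliability-weighted vote per item
        score = defaultdict(float)
        for labeler, item, vote in labels:
            w = reliability[labeler]
            score[item] += w if vote == 1 else -w
        consensus = {item: int(s > 0) for item, s in score.items()}
        # (b) reliability = fraction of a labeler's votes matching consensus
        hits, counts = defaultdict(int), defaultdict(int)
        for labeler, item, vote in labels:
            counts[labeler] += 1
            hits[labeler] += int(vote == consensus[item])
        for labeler in counts:
            reliability[labeler] = hits[labeler] / counts[labeler]
    return consensus, dict(reliability)
```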
Noveck, Beth Simone. “’Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform.” Harvard Journal of Law & Technology 20, no. 1 (Fall 2006): 123–162. http://bit.ly/HegzTT.
- This law review article introduces the idea of crowdsourcing expertise to mitigate the challenge of patent processing. Noveck argues that, “access to information is the crux of the patent quality problem. Patent examiners currently make decisions about the grant of a patent that will shape an industry for a twenty-year period on the basis of a limited subset of available information. Examiners may neither consult the public, talk to experts, nor, in many cases, even use the Internet.”
- Peer-to-Patent, which launched three years after this article, is based on the idea that, “The new generation of social software might not only make it easier to find friends but also to find expertise that can be applied to legal and policy decision-making. This way, we can improve upon the Constitutional promise to promote the progress of science and the useful arts in our democracy by ensuring that only worthy ideas receive that ‘odious monopoly’ of which Thomas Jefferson complained.”
Ober, Josiah. “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment.” American Political Science Review 107, no. 01 (2013): 104–122. http://bit.ly/1cgf857.
- In this paper, Ober argues that, “A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world.”
- Ober describes an approach to decision-making that aggregates expertise across multiple domains. This “Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest.”
Sims, Max H., Jeffrey Bigham, Henry Kautz and Marc W. Halterman. “Crowdsourcing medical expertise in near real time.” Journal of Hospital Medicine 9, no. 7, July 2014. http://bit.ly/1kAKvq7.
- In this article, the authors discuss the development of a mobile application called DocCHIRP, created because “although the Internet creates unprecedented access to information, gaps in the medical literature and inefficient searches often leave healthcare providers’ questions unanswered.”
- The DocCHIRP pilot project used a “system of point-to-multipoint push notifications designed to help providers problem solve by crowdsourcing from their peers.”
- Healthcare providers (HCPs) sought to gain intelligence from the crowd, which included 85 registered users, on questions related to medication, complex medical decision making, standard of care, administrative issues, testing and referrals.
- The authors believe that, “if future iterations of the mobile crowdsourcing applications can address…adoption barriers and support the organic growth of the crowd of HCPs,” then “the approach could have a positive and transformative effect on how providers acquire relevant knowledge and care for patients.”
Spina, Alessandro. “Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies.” in Foundations of EU Food Law and Policy, eds. A. Alemmano and S. Gabbi. Ashgate, 2014. http://bit.ly/1k2EwdD.
- In this paper, Spina “presents some reflections on how the collaborative and crowdsourcing practices of Open Government could be integrated in the activities of EFSA [European Food Safety Authority] and other EU agencies,” with a particular focus on “highlighting the benefits of the Open Government paradigm for expert regulatory bodies in the EU.”
- Spina argues that the “crowdsourcing of expertise and the reconfiguration of the information flows between European agencies and the public could represent a concrete possibility of modernising the role of agencies with a new model that has a low financial burden and an almost immediate effect on the legal governance of agencies.”
- He concludes that, “It is becoming evident that in order to guarantee that the best scientific expertise is provided to EU institutions and citizens, EFSA should strive to use the best organisational models to source science and expertise.”
Is Crowdsourcing the Future for Legislation?
Brian Heaton in GovTech: “…While drafting legislation is traditionally the job of elected officials, an increasing number of lawmakers are using digital platforms such as Wikispaces and GitHub to give constituents a bigger hand in molding the laws they’ll be governed by. The practice has been used this year in both California and New York City, and shows no signs of slowing down anytime soon, experts say.
Trond Undheim, crowdsourcing expert and founder of Yegii Inc., a startup company that provides and ranks advanced knowledge assets in the areas of health care, technology, energy and finance, said crowdsourcing was “certainly viable” as a tool to help legislators understand what constituents are most passionate about.
“I’m a big believer in asking a wide variety of people the same question and crowdsourcing has become known as the long-tail of answers,” Undheim said. “People you wouldn’t necessarily think of have something useful to say.”
California Assemblyman Mike Gatto, D-Los Angeles, agreed. He’s spearheaded an effort this year to let residents craft legislation regarding probate law — a measure designed to allow a court to assign a guardian to a deceased person’s pet. Gatto used the online Wikispaces platform — which allows for Wikipedia-style editing and content contribution — to let anyone with an Internet connection collaborate on the legislation over a period of several months.
The topic of the bill may not have been headline news, but Gatto was encouraged by the media attention his experiment received. As a result, he’s committed to running another crowdsourced bill next year — just on a bigger, more mainstream public issue.
New York City Council Member Ben Kallos has a plethora of technology-related legislation being considered in the Big Apple. Many of the bills are open for public comment and editing on GitHub. In an interview with Government Technology last month, Kallos said he believes using crowdsourcing to comment on and edit legislation is empowering and creates a different sense of democracy where people can put forward their ideas.
County governments also are joining the crowdsourcing trend. The Catawba Regional Council of Governments in South Carolina and the Centralina Council of Governments in North Carolina are gathering opinions on how county leaders should plan for future growth in the region.
At a public forum earlier this year, attendees were given iPads to go online and review four growth options and record their views on which they preferred. The priorities outlined by citizens will be taken back to decision-makers in each of the counties to see how well existing plans match up with what the public wants.
Gatto said he’s encouraged by how quickly the crowdsourcing of policy has spread throughout the U.S. He said there’s a disconnect between governments and their constituencies who believe elected officials don’t listen. But that could change as crowdsourcing continues to make its impact on lawmakers.
“When you put out a call like I did and others have done and say ‘I’m going to let the public draft a law and whatever you draft, I’m committed to introducing it,’ … I think that’s a powerful message,” Gatto said. “I think the public appreciates it because it makes them understand that the government still belongs to them.”
Protecting the Process
Despite the benefits crowdsourcing brings to the legislative process, there remain some question marks about whether it truly provides insight into the public’s feelings on an issue. For example, because many political issues are driven by the influence of special interest groups, what’s preventing those groups from manipulating the bill-drafting process?
Not much, according to Undheim. He cautioned policymakers to be aware of the motivations from people taking part in crowdsourcing efforts to write and edit laws. Gatto shared Undheim’s concerns, but noted that the platform he used for developing his probate law – Wikispaces – has safeguards in place so that a member of his staff can revert language of a crowdsourced bill back to a previous version if it’s determined that someone was trying to unduly influence the drafting process….”