TEDMED: “Mathematician Max Little is launching a project that aims to literally give Parkinson’s disease (PD) patients a voice in their own diagnosis and help them monitor their disease progression.
Patients Voice Analysis (PVA) is an open science project that uses phone-based voice recordings and self-reported symptoms, along with software Little designed, to track disease progression. Little, a TEDMED 2013 speaker and TED Fellow, is partnering with the online community PatientsLikeMe, co-founded by TEDMED 2009 speaker James Heywood, and Sage Bionetworks, a non-profit research organization, to conduct the research.
The new project is an extension of Little’s Parkinson’s Voice Initiative, which used speech analysis algorithms to diagnose Parkinson’s from voice recordings with the help of 17,000 volunteers. This time, he seeks not only to detect markers of PD, but also to add information reported by patients using PatientsLikeMe’s Parkinson’s Disease Rating Scale (PDRS), a tool that documents patients’ answers to questions that measure treatment effectiveness and disease progression….
As openly shared information, the collected data has potential to help vast numbers of individuals by tapping into collective ingenuity. Little has long argued that for science to progress, researchers need to democratize research and move past jostling for credit. Sage Bionetworks has designed a platform called Synapse to allow data sharing with collaborative version control, an effort led by open data advocate John Wilbanks.
“If you can’t share your data, how can you reproduce your science? One of the big problems we’re facing with this kind of medical research is the data is not open and getting access to it is a nightmare,” Little says.
With the PVA project, “Basically anyone can log on, download the anonymized data, and play around with data mining techniques. We don’t really care what people are able to come up with. We just want the most accurate prediction we can get.
“In research, you’re almost always constrained by what you think is the best way to do things. Unless you open it to the community at large, you’ll never know,” he says.”
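For readers tempted to take Little up on that invitation, a minimal sketch of the kind of data-mining exercise he describes might look like the following. Everything here is an assumption for illustration: the file name, feature columns, and model choice are hypothetical, not the PVA project’s actual pipeline.

```python
# Hypothetical sketch: predict a self-reported PDRS score from acoustic
# voice features in an anonymized export. File and column names are invented.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("pva_anonymized.csv")   # hypothetical anonymized export
X = df.drop(columns=["pdrs_score"])      # acoustic features (jitter, shimmer, ...)
y = df["pdrs_score"]                     # self-reported symptom severity

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"Mean absolute error across folds: {-scores.mean():.2f}")
```

The point of the open-data framing is exactly that the model above is replaceable: anyone can swap in a different learner and publish a better cross-validated error.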
A city’s intelligence: its citizens
Michel Dumais: “Tick tock, we said. The hundredth is coming soon. And with the hundred-and-first, new challenges. A smart city, you say? I sense the traditional appeal to the three letters and to an archaic administrative logic. What if we instead called on the intelligence of those who know their city best, its citizens?
To solve a problem (and occasionally even a non-problem), administrations look to those mammoth software systems that, on paper, are supposed to do everything, that swallow hundreds of millions of dollars, but that end up in the headlines because even more money has to be poured into them. And that let IT departments tighten their grip on an administration even further.
In short, when people talk about a smart city, many see a jackpot. Ah! The fact remains that what was “acceptable” yesterday no longer is today. And that building a smart city is, above all, not a technological challenge, far from it.
THE WIRELESS QUESTION
Years ago, simple logic would have had the City stop thinking “big telcos” and quickly strike an alliance with the community organization “Île sans fil,” thereby encouraging the rapid deployment of wireless technology across the island.
Such an alliance, a model of its kind, exists.
But not in Montreal. Rather in Quebec City, where the municipality and the community organization “Zap Québec” work hand in hand for the greater benefit of Quebec City’s residents and tourists. And in Montreal? Talk, and more talk.
So, a smart city. It is a city that knows how to use technology to harness its infrastructure and put it at the service of its citizens, all while saving money and fostering sustainable development.
It is also a city that knows how to listen to and mobilize its citizens, activists, and entrepreneurs, while giving them tools (such as usable data) so that they too can create services for their own organizations and for all of the city’s residents. Not to mention that all these tools make decision-making easier for borough mayors and the executive committee.
In short, a smart city, according to Professor Rudolf Giffinger, is this: “a smart economy, smart mobility, a smart environment, smart people, smart living and, finally, smart governance.”…
I invite readers to watch LifeApps, an extraordinary TV series streamed on the AlJazeera website. Its subject: activists and tinkerers, young and not so young, who get involved and create services for their communities.”
Are bots taking over Wikipedia?
Kurzweil News: “As crowdsourced Wikipedia has grown too large — with more than 30 million articles in 287 languages — to be entirely edited and managed by volunteers, 12 Wikipedia bots have emerged to pick up the slack.
The bots use Wikidata — a free knowledge base that can be read and edited by both humans and bots — to exchange information between entries and between the 287 languages.
Which raises an interesting question: what portion of Wikipedia edits are generated by humans versus bots?
To find out (and keep track of other bot activity), Thomas Steiner of Google Germany has created an open-source application (and API): Wikipedia and Wikidata Realtime Edit Stats, described in an arXiv paper.
The percentages of bot vs. human edits shown in the application are constantly changing. A KurzweilAI snapshot on Feb. 20 at 5:19 AM EST showed an astonishing 42% of Wikipedia edits being made by bots. (The application lists the 12 bots.)
Anonymous vs. logged-in humans (credit: Thomas Steiner)
The percentages also vary by language. Only 5% of English edits were by bots, but for Serbian pages, where few Wikipedians apparently participate, 96% of edits were by bots.
The application also tracks what percentage of edits are by anonymous users. Globally, it was 25 percent in our snapshot and a surprising 34 percent for English — raising interesting questions about corporate and other interests covertly manipulating Wikipedia information.
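The underlying ratios are easy to approximate independently. The sketch below listens to Wikimedia’s public recent-changes stream and tallies the `bot` flag on each edit event; it uses the current EventStreams endpoint rather than the feeds Steiner’s application consumed, so treat it as an illustration, not a reconstruction of his tool.

```python
# Tally bot vs. human edits from Wikimedia's public recent-changes stream
# (Server-Sent Events). Illustrative only; not Steiner's implementation.
import json
import requests

URL = "https://stream.wikimedia.org/v2/stream/recentchanges"
counts = {"bot": 0, "human": 0}

with requests.get(URL, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue                      # skip SSE id/event lines and keepalives
        event = json.loads(line[len("data: "):])
        if event.get("type") not in ("edit", "new"):
            continue                      # ignore log entries, categorization, etc.
        counts["bot" if event.get("bot") else "human"] += 1
        total = counts["bot"] + counts["human"]
        if total % 100 == 0:
            print(f"bot share: {counts['bot'] / total:.0%} of {total} edits")
```

Filtering on the event’s `wiki` field (e.g. `enwiki` vs. `srwiki`) reproduces the per-language breakdown described above.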
NatureNet: a model for crowdsourcing the design of citizen science systems
Paper in CSCW Companion ’14, the companion publication of the 17th ACM conference on Computer supported cooperative work & social computing: “NatureNet is a citizen science system designed for collecting biodiversity data in nature park settings. Park visitors are encouraged to participate in the design of the system in addition to collecting biodiversity data. Our goal is to increase the motivation to participate in citizen science via crowdsourcing: the hypothesis is that when the crowd plays a role in the design and development of the system, they become stakeholders in the project and work to ensure its success. This paper presents a model for crowdsourcing design and citizen science data collection, and the results from early trials with users that illustrate the potential of this approach.”
Crowdsourcing and regulatory reviews: A new way of challenging red tape in British government?
New paper by Martin Lodge and Kai Wegrich in Regulation and Governance: “Much has been said about the appeal of digital government devices to enhance consultation on rulemaking. This paper explores the most ambitious attempt by the UK central government so far to draw on “crowdsourcing” to consult and act on regulatory reform, the “Red Tape Challenge.” We find that the results of this exercise do not represent any major change to traditional challenges to consultation processes. Instead, we suggest that the extensive institutional arrangements for crowdsourcing were hardly significant in informing actual policy responses: neither the tone of the crowdsourced comments, the direction of the majority views, nor specific comments were seen to matter. Instead, it was processes within the executive that shaped the overall governmental responses to this initiative. The findings, therefore, provoke wider debates about the use of social media in rulemaking and consultation exercises.”
Open Data (Updated and Expanded)
As part of an ongoing effort to build a knowledge base for the field of opening governance by organizing and disseminating its learnings, the GovLab Selected Readings series provides an annotated and curated collection of recommended works on key opening governance topics. We start our series with a focus on Open Data. To suggest additional readings on this or any other topic, please email biblio@thegovlab.org.
Open data refers to data that is publicly available for anyone to use and that is licensed in a way that allows for its re-use. The common requirement that open data be machine-readable means not only that the data is distributed via the Internet in digitized form, but also that it can be processed by computers through automation, ensuring both wide dissemination and ease of re-use. Much of the focus of the open data advocacy community is on government data and government-supported research data. For example, in May 2013, the US Open Data Policy defined open data as publicly available data structured in a way that enables the data to be fully discoverable and usable by end users, and consistent with a number of principles focused on availability, accessibility and reusability.
Selected Reading List (in alphabetical order)
- Mark S. Fox – City Data: Big, Open and Linked – a paper exploring the concepts underlying Big City Data and the potential for wider impact as more people are given the opportunity to analyze big and small data.
- Muriel Foulonneau, Sébastien Martin, and Slim Turki – How Open Data Are Turned into Services? – a book chapter proposing a means for evaluating the impact of open data initiatives, especially in regard to improving service delivery.
- Brett Goldstein and Lauren Dyson – Beyond Transparency: Open Data and the Future of Civic Innovation – a multi-authored book exploring the broad open data landscape from a variety of disciplinary perspectives.
- Karolis Granickas – Understanding the Impact of Releasing and Re-using Open Government Data – a paper exploring the open data research field with an eye toward enabling an environment that maximizes the benefits of open data.
- Joel Gurin – Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation – a book describing the realized and potential benefits of open data, especially its power to transform business, government, and society.
- Thorhildur Jetzek, Michel Avital, and Niels Bjørn-Andersen – Generating Value from Open Government Data – a paper proposing a new conceptual model for creating value (broadly understood) from open government data.
- Maxat Kassen – A promising phenomenon of open data: A case study of the Chicago open data project – a case study demonstrating the empowering potential of open data through the examination of Chicago’s open data efforts.
- Justin Keen, Radu Calinescu, Richard Paige and John Rooksby – Big data + politics = open data: The case of health care data in England – a paper exploring challenges and assumptions related to open datasets, technological infrastructure and levels of access through the study of the U.K.’s National Health Service.
- Stefan Kulk and Bastiaan Van Loenen – Brave New Open Data World? – a paper examining the tensions between open data initiatives and European privacy regulations.
- Vivek Kundra – Digital Fuel of the 21st Century: Innovation through Open Data and the Network Effect – a paper describing the impacts to date of open data as well as recommendations for maximizing future impact.
- David G. Robinson, Harlan Yu, William P. Zeller, and Edward W. Felten – Government Data and the Invisible Hand – a paper focusing on the open data movement’s evolving impact on entrepreneurs.
- Barbara Ubaldi – Open Government Data: Towards Empirical Analysis of Open Government Data Initiatives – a report offering a framework for empirically evaluating open government data initiatives.
- Ben Worthy – David Cameron’s Transparency Revolution? The Impact of Open Data in the UK – a paper evaluating the U.K.’s open data efforts in relation to their effects on accountability, participation and better informing citizens.
- Anneke Zuiderwijk, Marijn Janssen, Sunil Choenni, Ronald Meijer and Roexsana Sheikh Alibaks – Socio-technical Impediments of Open Data – a paper describing the socio-technical challenges of opening data based on a review of the literature, workshops and interviews.
- Anneke Zuiderwijk and Marijn Janssen – Open Data Policies, Their Implementation and Impact: A Framework for Comparison – a paper proposing a comparison and evaluation framework for open government initiatives across government levels.
Annotated Selected Reading List (in alphabetical order)
Fox, Mark S. “City Data: Big, Open and Linked.” Working Paper, Enterprise Integration Laboratory (2013). http://bit.ly/1bFr7oL.
- This paper examines concepts that underlie Big City Data, using data from multiple cities as examples. It begins by explaining the concepts of Open, Unified, Linked, and Grounded data, which are central to the Semantic Web. Fox then explores Big Data as an extension of Data Analytics, and provides case examples of good data analytics in cities.
- Fox concludes that we can develop the tools that will enable anyone to analyze data, both big and small, by adopting the principles of the Semantic Web (illustrated in the sketch after this list):
- Data being openly available over the internet,
- Data being unifiable using common vocabularies,
- Data being linkable using Internationalized Resource Identifiers (IRIs),
- Data being accessible using a common data structure, namely triples,
- Data being semantically grounded using Ontologies.
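As a concrete illustration of those five principles, a few lines of Python with the rdflib library can express a city measurement as openly publishable, vocabulary-grounded triples. The vocabulary and URIs below are invented for the example, not taken from Fox’s paper.

```python
# Illustrative sketch of Semantic Web-style city data: IRIs for linking,
# a shared vocabulary for unification, triples as the common structure.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

CITY = Namespace("http://example.org/city/")           # hypothetical common vocabulary
g = Graph()

sensor = URIRef("http://example.org/city/sensor/42")   # linkable identifier (IRI)
g.add((sensor, RDF.type, CITY.AirQualitySensor))       # grounded in an ontology class
g.add((sensor, CITY.pm25, Literal(12.4, datatype=XSD.decimal)))  # one triple

print(g.serialize(format="turtle"))  # a form that can be published openly
```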
Foulonneau, Muriel, Sébastien Martin, and Slim Turki. “How Open Data Are Turned into Services?” In Exploring Services Science, edited by Mehdi Snene and Michel Leonard, 31–39. Lecture Notes in Business Information Processing 169. Springer International Publishing, 2014. http://bit.ly/1fltUmR.
- In this chapter, the authors argue that, considering the important role the development of new services plays as a motivation for open data policies, the impact of new services created through open data should play a more central role in evaluating the success of open data initiatives.
- Foulonneau, Martin and Turki argue that the following metrics should be considered when evaluating the success of open data initiatives: “the usage, audience, and uniqueness of the services, according to the changes it has entailed in the public institutions that have open their data…the business opportunity it has created, the citizen perception of the city…the modification to particular markets it has entailed…the sustainability of the services created, or even the new dialog created with citizens.”
Goldstein, Brett, and Lauren Dyson. Beyond Transparency: Open Data and the Future of Civic Innovation. 1st edition (Code for America Press, 2013). http://bit.ly/15OAxgF.
- This “cross-disciplinary survey of the open data landscape” features stories from practitioners in the open data space — including Michael Flowers, Brett Goldstein, Emer Coleman and many others — discussing what they’ve accomplished with open civic data. The book “seeks to move beyond the rhetoric of transparency for transparency’s sake and towards action and problem solving.”
- The book’s editors seek to accomplish the following objectives:
- Help local governments learn how to start an open data program
- Spark discussion on where open data will go next
- Help community members outside of government better engage with the process of governance
- Lend a voice to many aspects of the open data community.
- The book is broken into five sections: Opening Government Data, Building on Open Data, Understanding Open Data, Driving Decisions with Data and Looking Ahead.
Granickas, Karolis. “Understanding the Impact of Releasing and Re-using Open Government Data.” European Public Sector Information Platform, ePSIplatform Topic Report No. 2013/08, (2013). http://bit.ly/GU0Nx4.
- This paper examines the impact of open government data by exploring the latest research in the field, with an eye toward enabling an environment for open data, as well as identifying the benefits of open government data and its political, social, and economic impacts.
- Granickas concludes that to maximize the benefits of open government data: a) further research is required to structure and measure the potential benefits of open government data; b) “government should pay more attention to creating feedback mechanisms between policy implementers, data providers and data-re-users”; c) “finding a balance between demand and supply requires mechanisms of shaping demand from data re-users and also demonstration of data inventory that governments possess”; and lastly, d) “open data policies require regular monitoring.”
Gurin, Joel. Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, (New York: McGraw-Hill, 2014). http://amzn.to/1flubWR.
- In this book, GovLab Senior Advisor and Open Data 500 director Joel Gurin explores the broad realized and potential benefit of Open Data, and how, “unlike Big Data, Open Data is transparent, accessible, and reusable in ways that give it the power to transform business, government, and society.”
- The book provides “an essential guide to understanding all kinds of open databases – business, government, science, technology, retail, social media, and more – and using those resources to your best advantage.”
- In particular, Gurin discusses a number of applications of Open Data with very real potential benefits:
- “Hot Startups: turn government data into profitable ventures;
- Savvy Marketing: understanding how reputational data drives your brand;
- Data-Driven Investing: apply new tools for business analysis;
- Consumer Information: connect with your customers using smart disclosure;
- Green Business: use data to bet on sustainable companies;
- Fast R&D: turn the online world into your research lab;
- New Opportunities: explore open fields for new businesses.”
Jetzek, Thorhildur, Michel Avital, and Niels Bjørn-Andersen. “Generating Value from Open Government Data.” Thirty Fourth International Conference on Information Systems, 5. General IS Topics 2013. http://bit.ly/1gCbQqL.
- In this paper, the authors “developed a conceptual model portraying how data as a resource can be transformed to value.”
- Jetzek, Avital and Bjørn-Andersen propose a conceptual model featuring four Enabling Factors (openness, resource governance, capabilities and technical connectivity) acting on four Value Generating Mechanisms (efficiency, innovation, transparency and participation) leading to the impacts of Economic and Social Value.
- The authors argue that their research supports that “all four of the identified mechanisms positively influence value, reflected in the level of education, health and wellbeing, as well as the monetary value of GDP and environmental factors.”
Kassen, Maxat. “A promising phenomenon of open data: A case study of the Chicago open data project.” Government Information Quarterly (2013). http://bit.ly/1ewIZnk.
- This paper uses the Chicago open data project to explore the “empowering potential of an open data phenomenon at the local level as a platform useful for promotion of civic engagement projects and provide a framework for future research and hypothesis testing.”
- Kassen argues that “open data-driven projects offer a new platform for proactive civic engagement”: by harnessing “the collective wisdom of the local communities, their knowledge and visions of the local challenges,” governments “could react and meet citizens’ needs in a more productive and cost-efficient manner.”
- The paper highlights the need for independent IT developers to network in order for this trend to continue, as well as the importance of the private sector in “overall diffusion of the open data concept.”
Keen, Justin, Radu Calinescu, Richard Paige, and John Rooksby. “Big data + politics = open data: The case of health care data in England.” Policy and Internet 5 (2), (2013): 228–243. http://bit.ly/1i231WS.
- This paper examines the assumptions regarding open datasets, technological infrastructure and access, using healthcare systems as a case study.
- The authors specifically address two assumptions surrounding enthusiasm about Big Data in healthcare: the assumption that healthcare datasets and technological infrastructure are up to task, and the assumption of access to this data from outside the healthcare system.
- By using the National Health Service in England as an example, the authors identify data, technology, and information governance challenges. They argue that “public acceptability of third party access to detailed health care datasets is, at best, unclear,” and that the prospects of Open Data depend on Open Data policies, which are inherently political, and the government’s assertion of property rights over large datasets. Thus, they argue that the “success or failure of Open Data in the NHS may turn on the question of trust in institutions.”
Kulk, Stefan and Bastiaan Van Loenen. “Brave New Open Data World?” International Journal of Spatial Data Infrastructures Research, May 14, 2012. http://bit.ly/15OAUYR.
- This paper examines the evolving tension between the open data movement and the European Union’s privacy regulations, especially the Data Protection Directive.
- The authors argue, “Technological developments and the increasing amount of publicly available data are…blurring the lines between non-personal and personal data. Open data may not seem to be personal data on first glance especially when it is anonymised or aggregated. However, it may become personal by combining it with other publicly available data or when it is de-anonymised.”
Kundra, Vivek. “Digital Fuel of the 21st Century: Innovation through Open Data and the Network Effect.” Joan Shorenstein Center on the Press, Politics and Public Policy, Harvard College: Discussion Paper Series, January 2012, http://hvrd.me/1fIwsjR.
- In this paper, Vivek Kundra, the first Chief Information Officer of the United States, explores the growing impact of open data, and argues that, “In the information economy, data is power and we face a choice between democratizing it and holding on to it for an asymmetrical advantage.”
- Kundra offers four specific recommendations to maximize the impact of open data: Citizens and NGOs must demand open data in order to fight government corruption, improve accountability and government services; Governments must enact legislation to change the default setting of government to open, transparent and participatory; The press must harness the power of the network effect through strategic partnerships and crowdsourcing to cut costs and provide better insights; and Venture capitalists should invest in startups focused on building companies based on public sector data.
Noveck, Beth Simone and Daniel L. Goroff. “Information for Impact: Liberating Nonprofit Sector Data.” The Aspen Institute Philanthropy & Social Innovation Publication Number 13-004. 2013. http://bit.ly/WDxd7p.
- This report is focused on “obtaining better, more usable data about the nonprofit sector,” which encompasses, as of 2010, “1.5 million tax-exempt organizations in the United States with $1.51 trillion in revenues.”
- Toward that goal, the authors propose liberating data from the Form 990, an Internal Revenue Service form that “gathers and publishes a large amount of information about tax-exempt organizations,” including information related to “governance, investments, and other factors not directly related to an organization’s tax calculations or qualifications for tax exemption.”
- The authors recommend a two-track strategy: “Pursuing the longer-term goal of legislation that would mandate electronic filing to create open 990 data, and pursuing a shorter-term strategy of developing a third party platform that can demonstrate benefits more immediately.”
Robinson, David G., Harlan Yu, William P. Zeller, and Edward W. Felten, “Government Data and the Invisible Hand.” Yale Journal of Law & Technology 11 (2009), http://bit.ly/1c2aDLr.
- This paper proposes a new approach to online government data that “leverages both the American tradition of entrepreneurial self-reliance and the remarkable low-cost flexibility of contemporary digital technology.”
- “In order for public data to benefit from the same innovation and dynamism that characterize private parties’ use of the Internet, the federal government must reimagine its role as an information provider. Rather than struggling, as it currently does, to design sites that meet each end-user need, it should focus on creating a simple, reliable and publicly accessible infrastructure that ‘exposes’ the underlying data.”
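A minimal sketch of the division of labor the authors advocate might look like the following: government publishes the raw records through a plain machine-readable endpoint and builds no user interface, leaving presentation to private innovators. The dataset, fields, and framework choice are illustrative assumptions, not anything the paper prescribes.

```python
# Hypothetical "government as data infrastructure" endpoint: expose raw,
# machine-readable records and leave the user-facing sites to third parties.
from flask import Flask, jsonify

app = Flask(__name__)

SPENDING = [  # stand-in for the agency's canonical records
    {"payee": "Acme Paving Ltd", "amount": 125000, "date": "2013-11-02"},
    {"payee": "Metro Transit", "amount": 98000, "date": "2013-11-09"},
]

@app.route("/data/spending")
def spending():
    return jsonify(SPENDING)  # raw data out; no government-built presentation layer

if __name__ == "__main__":
    app.run()
```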
Ubaldi, Barbara. “Open Government Data: Towards Empirical Analysis of Open Government Data Initiatives.” OECD Working Papers on Public Governance, No. 22, OECD Publishing (2013).
- This working paper from the OECD seeks to provide an all-encompassing look at the principles, concepts and criteria framing open government data (OGD) initiatives.
- Ubaldi also analyzes a variety of challenges to implementing OGD initiatives, including policy, technical, economic and financial, organizational, cultural and legal impediments.
- The paper also proposes a methodological framework for evaluating OGD Initiatives in OECD countries, with the intention of eventually “developing a common set of metrics to consistently assess impact and value creation within and across countries.”
Worthy, Ben. “David Cameron’s Transparency Revolution? The Impact of Open Data in the UK.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, November 29, 2013. http://bit.ly/NIrN6y.
- In this article, Worthy “examines the impact of the UK Government’s Transparency agenda, focusing on the publication of spending data at local government level. It measures the democratic impact in terms of creating transparency and accountability, public participation and everyday information.”
- Worthy’s findings, based on surveys of local authorities, interviews and FOI requests, are disappointing. He finds that:
- Open spending data has led to some government accountability, but largely from those already monitoring government, not regular citizens.
- Open Data has not led to increased participation, “as it lacks the narrative or accountability instruments to fully bring such effects.”
- It has also not “created a new stream of information to underpin citizen choice, though new innovations offer this possibility. The evidence points to third party innovations as the key.”
- Despite these initial findings, “Interviewees pointed out that Open Data holds tremendous opportunities for policy-making. Joined up data could significantly alter how policy is made and resources targeted. From small scale issues e.g. saving money through prescriptions to targeting homelessness or health resources, it can have a transformative impact.”
Zuiderwijk, Anneke, Marijn Janssen, Sunil Choenni, Ronald Meijer and Roexsana Sheikh Alibaks. “Socio-technical Impediments of Open Data.” Electronic Journal of e-Government 10, no. 2 (2012). http://bit.ly/17yf4pM.
- This paper seeks to identify the socio-technical impediments to open data impact, based on a review of the open data literature as well as workshops and interviews.
- The authors discovered 118 impediments across ten categories: 1) availability and access; 2) find-ability; 3) usability; 4) understandability; 5) quality; 6) linking and combining data; 7) comparability and compatibility; 8) metadata; 9) interaction with the data provider; and 10) opening and uploading.
Zuiderwijk, Anneke and Marijn Janssen. “Open Data Policies, Their Implementation and Impact: A Framework for Comparison.” Government Information Quarterly 31, no. 1 (January 2014): 17–29. http://bit.ly/1bQVmYT.
- In this article, Zuiderwijk and Janssen argue that “currently there is a multiplicity of open data policies at various levels of government, whereas very little systematic and structured research [is being] done on the issues that are covered by open data policies, their intent and actual impact.”
- With this evaluation deficit in mind, the authors propose a new framework for comparing open data policies at different government levels using the following elements for comparison:
- Policy environment and context, such as level of government organization and policy objectives;
- Policy content (input), such as types of data not publicized and technical standards;
- Performance indicators (output), such as benefits and risks of publicized data; and
- Public values (impact).
To stay current on recent writings and developments on Open Data, please subscribe to the GovLab Digest.
Did we miss anything? Please submit reading recommendations to biblio@thegovlab.org or in the comments below.
House Bill Raises Questions about Crowdsourcing
Anne Bowser for Commons Lab (Wilson Center): “A new bill in the House is raising some key questions about how crowdsourcing is understood by scientists, government agencies, policymakers and the public at large.
Robin Bravender’s recent article in Environment & Energy Daily, “House Republicans Push Crowdsourcing on Agency Science,” (subscription required) neatly summarizes the debate around H.R. 4012, a bill introduced to the House of Representatives earlier this month. The House Science, Space and Technology Committee earlier this week held a hearing on the bill, which could see a committee vote as early as next month.
Dubbed the “Secret Science Reform Act of 2014,” the bill prohibits the Environmental Protection Agency (EPA) from “proposing, finalizing, or disseminating regulations or assessments based upon science that is not transparent or reproducible.” If the bill is passed, EPA would be unable to base assessments or regulations on any information not “publicly available in a manner that is sufficient for independent analysis.” This would include all information published in scholarly journals based on data that is not available as open source.
The bill is based on the premise that forcing EPA to use public data will inspire greater transparency by allowing “the crowd” to conduct independent analysis and interpretation. While the premise of involving the public in scientific research is sound, this characterization of crowdsourcing as a process separate from traditional scientific research is deeply problematic.
This division contrasts with the current practices of many researchers, who use crowdsourcing to directly involve the public in scientific processes. Galaxy Zoo, for example, enlists digital volunteers (called “citizen scientists”) to help classify more than 40 million photographs of galaxies taken by the Hubble Telescope. These crowdsourced morphological classifications are a powerful form of data analysis, a key aspect of the scientific process. Galaxy Zoo then publishes a catalogue of these classifications as an open-source data set. And the data reduction techniques and measures of confidence and bias for the data catalogue are documented in MNRAS, a peer-reviewed journal. A recent Google Scholar search shows that the data set published in MNRAS has been cited a remarkable 121 times.
As this example illustrates, crowdsourcing is often embedded in the process of formal scientific research. But prior to being published in a scientific journal, the crowdsourced contributions of non-professional volunteers are subject to the scrutiny of professional scientists through the rigorous process of peer review. Because peer review was designed as an institution to ensure objective and unbiased research, peer-reviewed scientific work is widely accepted as the best source of information for any science-based decision.
Separating crowdsourcing from the peer review process, as this legislation intends, means that there will be no formal filters in place to ensure that open data will not be abused by special interests. Ellen Silbergeld, a professor at Johns Hopkins University who testified at the hearing this week, made exactly this point when she pointed to data manipulation commonly practiced by tobacco lobbyists in the United States.
Contributing to scientific research is one goal of crowdsourcing for science. Involving the public in scientific research also increases volunteer understanding of research topics and the scientific process and inspires heightened community engagement. These goals are supported by President Obama’s Second Open Government National Action Plan, which calls for “increased crowdsourcing and citizen science programs” to support “an informed and active citizenry.” But H.R. 4012 does not support these goals. Rather, this legislation could further degrade the public’s understanding of science by encouraging the public to distrust professional scientists rather than collaborate with them.
Crowdsourcing benefits organizations by bringing in the unique expertise held by external volunteers, which can augment and enhance the traditional scientific process. In return, these volunteers benefit from exposure to new and exciting processes, such as scientific research. This mutually beneficial relationship depends on collaboration, not opposition. Supporting an antagonistic relationship between science-based organizations like the EPA and members of “the crowd” will benefit neither institutions, nor volunteers, nor the country as a whole.”
what is impossible
impossible.com is a new website and app that encourages people to do things for others for free. People can post wishes for things they want or need help with, and offer what they can give, whether things or skills. Impossible shows these wishes and offers so that people can connect with one another. You can also create thank-you posts to send to people.
Managing Innovation in a Crowd
New paper by Daron Acemoglu, Mohamed Mostagir, and Asuman E. Ozdaglar: “Crowdsourcing is an emerging technology where innovation and production are sourced out to the public through an open call. At the center of crowdsourcing is a resource allocation problem: there is an abundance of workers but a scarcity of high skills, and an easy task assigned to a high-skill worker is a waste of resources. This problem is complicated by the fact that the exact difficulties of innovation tasks may not be known in advance, so tasks that require high-skill labor cannot be identified and allocated ahead of time. We show that the solution to this problem takes the form of a skill hierarchy, where tasks are first attempted by low-skill labor, and high-skill workers only engage with a task if less skilled workers are unable to finish it. This hierarchy can be constructed and implemented in a decentralized manner even though neither the difficulties of the tasks nor the skills of the candidate workers are known. We provide a dynamic pricing mechanism that achieves this implementation by inducing workers to self-select into different layers. The mechanism is simple: each time a task is attempted and not finished, its price (reward upon completion) goes up.”
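The escalating-price mechanism is simple enough to sketch. The toy simulation below shows only the self-selection logic the abstract describes: the platform never needs to know a task’s difficulty, because an unfinished task’s rising reward eventually attracts higher-skill workers. The skill levels, threshold, and success rule are invented for illustration and are not the paper’s formal model.

```python
# Toy illustration of dynamic pricing inducing a skill hierarchy:
# each failed attempt raises the task's reward until a worker whose
# skill covers the (hidden) difficulty self-selects in and finishes it.

def attempt_succeeds(skill, difficulty):
    # Simplified rule: a worker finishes a task iff their skill covers it.
    return skill >= difficulty

def run_task(difficulty, base_price=1.0, increment=0.5, threshold=2.0):
    """Re-post the task at a higher price each time it goes unfinished."""
    price, attempts = base_price, 0
    while True:
        attempts += 1
        # Self-selection: low-skill workers (0.3) take cheap tasks; only once
        # the price clears the threshold does a high-skill worker (0.9) engage.
        skill = 0.9 if price >= threshold else 0.3
        if attempt_succeeds(skill, difficulty):
            return price, attempts
        price += increment

for difficulty in (0.2, 0.5, 0.8):
    price, attempts = run_task(difficulty)
    print(f"difficulty {difficulty}: done after {attempts} attempt(s) at price {price:.1f}")
```

Easy tasks clear immediately at the base price, while harder tasks climb the price ladder until high-skill labor steps in, mirroring the decentralized hierarchy the authors derive.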