New workshop paper by C. A. Le Dantec in HCOMP 2014/Citizen + X: Workshop on Volunteer-based Crowdsourcing in Science, Public Health and Government, Pittsburgh, PA. November 2, 2014: “Within the past five years, a new form of technology-mediated public participation that experiments with crowdsourced data production in place of community discourse has emerged. Examples of this class of system include SeeClickFix, PublicStuff, and Street Bump, each of which mediates feedback about local neighborhood issues and helps communities mobilize resources to address those issues. The experiments being played out by this new class of services are derived from a form of public participation built on the ideas of smart cities where residents and physical environments are instrumented to provide data to improve operational efficiency and sustainability (Caragliu, Del Bo, and Nijkamp 2011). Ultimately, smart cities is the application to local government of all the efficiencies that computing has always promised—efficiencies of scale, of productivity, of data—minus the messiness and contention of citizenship that play out through more traditional modes of public engagement and political discourse.
The question, then, is what might it look like to incorporate more active forms of civic participation and issue advocacy in an app- and data-driven world? To begin to explore this question, my students and I have developed a smartphone app as part of a larger regional planning partnership with the City of Atlanta and the Atlanta Regional Commission. The app, called Cycle Atlanta, enables cyclists to record their ride data—where they have gone, why they went there, what kind of cyclist they are—in an effort to both generate data for planners developing new bicycling infrastructure and to broaden public participation and input in the creation of those plans…”
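As a rough illustration of the kind of data such an app collects, here is a minimal sketch of a ride record with a rider type, trip purpose, and GPS trace. The field names and the distance helper are illustrative assumptions, not Cycle Atlanta's actual schema.

```python
# Hypothetical sketch of a ride record like those a cycling app might collect;
# field names are illustrative assumptions, not the Cycle Atlanta schema.
import math
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackPoint:
    lat: float
    lon: float
    timestamp: float  # seconds since epoch

@dataclass
class RideRecord:
    rider_type: str    # e.g. "commuter", "recreational"
    trip_purpose: str  # e.g. "work", "errand", "exercise"
    points: List[TrackPoint] = field(default_factory=list)

    def distance_km(self) -> float:
        """Rough ride length using an equirectangular approximation."""
        total = 0.0
        for a, b in zip(self.points, self.points[1:]):
            x = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
            y = math.radians(b.lat - a.lat)
            total += math.hypot(x, y) * 6371.0  # Earth radius in km
        return total

ride = RideRecord("commuter", "work", [TrackPoint(33.749, -84.388, 0.0),
                                       TrackPoint(33.757, -84.394, 180.0)])
print(f"{ride.distance_km():.2f} km")
```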
Proof: How Crowdsourced Election Monitoring Makes a Difference
Patrick Meier at iRevolution: “My colleagues Catie Bailard & Steven Livingston have just published the results of their empirical study on the impact of citizen-based crowdsourced election monitoring. Readers of iRevolution may recall that my doctoral dissertation analyzed the use of crowdsourcing in repressive environments and specifically during contested elections. This explains my keen interest in the results of my colleagues’ new data-driven study, which suggests that crowdsourcing does have a measurable and positive impact on voter turnout.
Catie and Steven are “interested in digitally enabled collective action initiatives” spearheaded by “nonstate actors, especially in places where the state is incapable of meeting the expectations of democratic governance.” They are particularly interested in measuring the impact of said initiatives. “By leveraging the efficiencies found in small, incremental, digitally enabled contributions (an SMS text, phone call, email or tweet) to a public good (a more transparent election process), crowdsourced elections monitoring constitutes [an] important example of digitally-enabled collective action.” To be sure, “the successful deployment of a crowdsourced elections monitoring initiative can generate information about a specific political process—information that would otherwise be impossible to generate in nations and geographic spaces with limited organizational and administrative capacity.”
To this end, their new study tests for the effects of citizen-based crowdsourced election monitoring efforts on the 2011 Nigerian presidential elections. More specifically, they analyzed close to 30,000 citizen-generated reports of failures, abuses and successes which were publicly crowdsourced and mapped as part of the Reclaim Naija project. Controlling for a number of factors, Catie and Steven find that the number and nature of crowdsourced reports is “significantly correlated with increased voter turnout.”
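To make the analytical approach concrete, here is a minimal sketch of the kind of regression described above, using synthetic data: turnout regressed on the count of crowdsourced reports while controlling for other covariates. The variables and coefficients are invented for illustration and are not the study's data or model specification.

```python
# Synthetic example: regress turnout on crowdsourced report counts with controls.
import numpy as np

rng = np.random.default_rng(0)
n = 500
reports = rng.poisson(5, n)                    # crowdsourced reports per area
population = rng.normal(10_000, 2_000, n)      # control: registered voters
urban = rng.integers(0, 2, n)                  # control: urban/rural indicator
turnout = 0.4 + 0.01 * reports + 0.02 * urban + rng.normal(0, 0.05, n)

# Ordinary least squares with an intercept via numpy's least-squares solver.
X = np.column_stack([np.ones(n), reports, population, urban])
coef, *_ = np.linalg.lstsq(X, turnout, rcond=None)
print(dict(zip(["intercept", "reports", "population", "urban"], coef.round(4))))
```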
In conclusion, the authors argue that “digital technologies fundamentally change information environments and, by doing so, alter the opportunities and constraints that the political actors face.” This new study is an important contribution to the literature and should be required reading for anyone interested in digitally-enabled, crowdsourced collective action. Of course, the analysis focuses on “just” one case study, which means that the effects identified in Nigeria may not occur in other crowdsourced, election monitoring efforts. But that’s another reason why this study is important—it will no doubt catalyze future research to determine just how generalizable these initial findings are.”
Experiments on Crowdsourcing Policy Assessment
Paper by John Prpić, Araz Taeihagh, and James Melton Jr for the Oxford Internet Institute IPP2014: Crowdsourcing for Politics and Policy: “Can Crowds serve as useful allies in policy design? How do non-expert Crowds perform relative to experts in the assessment of policy measures? Does the geographic location of non-expert Crowds, with relevance to the policy context, alter the performance of non-expert Crowds in the assessment of policy measures? In this work, we investigate these questions by undertaking experiments designed to replicate expert policy assessments with non-expert Crowds recruited from Virtual Labor Markets. We use a set of ninety-six climate change adaptation policy measures previously evaluated by experts in the Netherlands as our control condition to conduct experiments using two discrete sets of non-expert Crowds recruited from Virtual Labor Markets. We vary the composition of our non-expert Crowds along two conditions: participants recruited from a geographical location directly relevant to the policy context and participants recruited at-large. We discuss our research methods in detail and provide the findings of our experiments.”
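A simple way to picture the comparison the authors describe is to correlate aggregated crowd ratings with expert baselines. The sketch below uses synthetic scores for 96 measures; the paper's actual rating scales, aggregation rules, and statistics may differ.

```python
# Synthetic comparison of aggregated crowd ratings against expert baselines.
import numpy as np

rng = np.random.default_rng(1)
n_measures, n_raters = 96, 30                       # 96 policy measures, a crowd of 30 raters
expert = rng.uniform(1, 5, n_measures)              # expert score per measure
crowd = expert + rng.normal(0, 1.0, (n_raters, n_measures))  # noisy crowd ratings

crowd_mean = crowd.mean(axis=0)                     # aggregate the crowd by simple averaging
r = np.corrcoef(expert, crowd_mean)[0, 1]
print(f"Pearson correlation between expert and aggregated crowd scores: {r:.2f}")
```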
Full program of the Oxford Internet Institute IPP2014: Crowdsourcing for Politics and Policy can be found here.
Welcoming the Third Class of Presidential Innovation Fellows
You can learn more about this inspiring group of Fellows here.
Over the next 12 months, these innovators will collaborate and work with change agents inside government on three high-impact initiatives aimed at saving lives, saving taxpayer money, and fueling our economy. These initiatives include:
- Building a 21st Century Veterans Experience
- Unleashing the Power of Data Resources to Improve Americans’ Lives
- Crowdsourcing to Improve Government
Read more about the projects that make up these initiatives, and the previous successes the program has helped shape.
The fellows will be supported by 18F, an innovative group focused on the delivery of digital services across the federal government, and will work alongside the U.S. Digital Service and agency innovators in continuing to build a culture and best practices within government….”
Journey tracking app will use cyclist data to make cities safer for bikes
Springwise: “Most cities were never designed to cater for the huge numbers of bikes seen on their roads every day, and as the number of cyclists grows, so do the fatality statistics thanks to limited investment in safe cycle paths. While Berlin already crowdsources bikers’ favorite cycle routes and maps them through the Dynamic Connections platform, a new app called WeCycle lets cyclists track their journeys, pooling their data to create heat maps for city planners.
Created by the UK’s TravelAI transport startup, WeCycle taps into the current consumer trend for quantifying every aspect of life, including journey times. By downloading the free iOS app, London cyclists can seamlessly create stats each time they get on their bike. The app runs in the background and uses the device’s accelerometer to smartly distinguish walking or running from cycling. They can then see how far they’ve traveled, how fast they cycle and every route they’ve taken. The app also tracks bus and car travel.
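As a toy illustration of the general idea of telling travel modes apart from accelerometer readings, the sketch below applies a simple variance heuristic to a short window of data. It is an assumption-laden stand-in, not TravelAI's actual (and certainly more sophisticated) classifier.

```python
# Toy mode detection from accelerometer magnitudes using a variance heuristic.
import numpy as np

def classify_window(accel_magnitude: np.ndarray) -> str:
    """Label a short window of accelerometer magnitudes (in g)."""
    var = accel_magnitude.var()
    if var < 0.01:
        return "stationary"
    elif var < 0.15:
        return "cycling"            # smoother, lower-amplitude oscillation
    else:
        return "walking/running"    # pronounced per-step impacts

rng = np.random.default_rng(2)
t = np.linspace(0, 20, 200)
cycling_like = 1.0 + 0.3 * np.sin(t) + rng.normal(0, 0.05, 200)
print(classify_window(cycling_like))  # prints "cycling" for this synthetic window
```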
Anyone who downloads the app agrees that their data can be anonymously sent to TravelAI, creating an accurate, real-time information resource. The aim is to give cities and local authorities tools such as heat maps and behavior monitoring, so they can learn more about how citizens are using roads and better inform their transport policies.
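The heat-map side of this can be sketched just as simply: bin pooled, anonymized journey points into a grid and count rides per cell. The grid size and coordinates below are arbitrary assumptions.

```python
# Binning anonymized (lat, lon) points into a grid to produce a simple heat map.
import numpy as np

rng = np.random.default_rng(3)
# Pretend these are anonymized points from many London rides.
lats = rng.normal(51.507, 0.02, 5_000)
lons = rng.normal(-0.128, 0.03, 5_000)

heat, lat_edges, lon_edges = np.histogram2d(lats, lons, bins=50)
hot = np.unravel_index(heat.argmax(), heat.shape)
print("busiest cell centre:",
      (lat_edges[hot[0]] + lat_edges[hot[0] + 1]) / 2,
      (lon_edges[hot[1]] + lon_edges[hot[1] + 1]) / 2)
```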
WeCycle follows in the footsteps of similar apps such as Germany’s Radwende and the Toronto Cycling App — both released this year — in taking a popular trend and turning it into data that could help make cities a safer place to cycle….Website: www.travelai.info”
Crowdteaching: Supporting Teaching as Designing in Collective Intelligence Communities
Paper by Mimi Recker, Min Yuan, and Lei Ye in the International Review of Research in Open and Distance Learning: “The widespread availability of high-quality Web-based content offers new potential for supporting teachers as designers of curricula and classroom activities. When coupled with a participatory Web culture and infrastructure, teachers can share their creations as well as leverage from the best that their peers have to offer to support a collective intelligence or crowdsourcing community, which we dub crowdteaching. We applied a collective intelligence framework to characterize crowdteaching in the context of a Web-based tool for teachers called the Instructional Architect (IA). The IA enables teachers to find, create, and share instructional activities (called IA projects) for their students using online learning resources. These IA projects can further be viewed, copied, or adapted by other IA users. This study examines the usage activities of two samples of teachers, and also analyzes the characteristics of a subset of their IA projects. Analyses of teacher activities suggest that they are engaging in crowdteaching processes. Teachers, on average, chose to share over half of their IA projects, and copied some directly from other IA projects. Thus, these teachers can be seen as both contributors to and consumers of crowdteaching processes. In addition, IA users preferred to view IA projects rather than to completely copy them. Finally, correlational results based on an analysis of the characteristics of IA projects suggest that several easily computed metrics (number of views, number of copies, and number of words in IA projects) can act as an indirect proxy of instructionally relevant indicators of the content of IA projects.”
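A rough sketch of the correlational check described above might look like the following, where synthetic counts of views, copies, and words are correlated with a hypothetical content-quality indicator. The numbers are invented purely to illustrate the computation, not drawn from the study.

```python
# Synthetic correlation check: do simple project metrics track a content indicator?
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 200
quality = rng.normal(0, 1, n)  # hypothetical content-quality indicator
df = pd.DataFrame({
    "views":   rng.poisson(20 + 5 * np.clip(quality, 0, None)),
    "copies":  rng.poisson(2 + 1 * np.clip(quality, 0, None)),
    "words":   rng.poisson(150 + 30 * np.clip(quality, 0, None)),
    "quality": quality,
})
print(df.corr().round(2))  # correlation matrix across the metrics and the indicator
```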
DrivenData
DrivenData Blog: “As we begin launching our first competitions, we thought it would be a good idea to lay out what exactly we’re trying to do and why….
At DrivenData, we want to bring cutting-edge practices in data science and crowdsourcing to some of the world’s biggest social challenges and the organizations taking them on. We host online challenges, usually lasting 2-3 months, where a global community of data scientists competes to come up with the best statistical model for difficult predictive problems that make a difference.
Just like every major corporation today, nonprofits and NGOs have more data than ever before. And just like those corporations, they are trying to figure out how to make the best use of their data. We work with mission-driven organizations to identify specific predictive questions that they care about answering and can use their data to tackle.
Then we host the online competitions, where experts from around the world vie to come up with the best solution. Some competitors are experienced data scientists in the private sector, analyzing corporate data by day, saving the world by night, and testing their mettle on complex questions of impact. Others are smart, sophisticated students and researchers looking to hone their skills on real-world datasets and real-world problems. Still more have extensive experience with social sector data and want to bring their expertise to bear on new, meaningful challenges – with immediate feedback on how well their solution performs.
Like any data competition platform, we want to harness the power of crowds combined with the increasing prevalence of large, relevant datasets. Unlike other data competition platforms, our primary goal is to create actual, measurable, lasting positive change in the world with our competitions. At the end of each challenge, we work with the sponsoring organization to integrate the winning solutions, giving them the tools to drive real improvements in their impact….
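To give a flavor of how such a competition is scored, here is a minimal sketch of evaluating submitted predictions against held-out labels. The metric (log loss) and the toy data are assumptions; DrivenData specifies its own metric for each challenge.

```python
# Scoring a set of submitted probabilities against held-out binary labels.
import numpy as np

def log_loss(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-15) -> float:
    """Binary cross-entropy; lower is better."""
    p = np.clip(y_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y_true = np.array([1, 0, 1, 1, 0])                 # hidden test labels
submission = np.array([0.9, 0.2, 0.7, 0.6, 0.1])   # a competitor's predictions
print(f"leaderboard score (lower is better): {log_loss(y_true, submission):.4f}")
```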
We are launching soon and we want you to join us!
If you want to get updates about our launch this fall with exciting, real competitions, please sign up for our mailing list here and follow us on Twitter: @drivendataorg.
If you are a data scientist, feel free to create an account and start playing with our first sandbox competitions.
If you are a nonprofit or public sector organization, and want to squeeze every drop of mission effectiveness out of your data, check out the info on our site and let us know! “
Bridging the Knowledge Gap: In Search of Expertise
New paper by Beth Simone Noveck, The GovLab, for Democracy: “In the early 2000s, the Air Force struggled with a problem: Pilots and civilians were dying because of unusual soil and dirt conditions in Afghanistan. The soil was getting into the rotors of the Sikorsky UH-60 helicopters and obscuring the view of its pilots—what the military calls a “brownout.” According to the Air Force’s senior design scientist, the manager tasked with solving the problem didn’t know where to turn quickly to get help. As it turns out, the man practically sitting across from him had nine years of experience flying these Black Hawk helicopters in the field, but the manager had no way of knowing that. Civil service titles such as director and assistant director reveal little about skills or experience.
In the fall of 2008, the Air Force sought to fill in these kinds of knowledge gaps. The Air Force Research Laboratory unveiled Aristotle, a searchable internal directory that integrated people’s credentials and experience from existing personnel systems, public databases, and users themselves, thus making it easy to discover quickly who knew and had done what. Near-term budgetary constraints killed Aristotle in 2013, but the project underscored a glaring need in the bureaucracy.
Aristotle was an attempt to solve a challenge faced by every agency and organization: quickly locating expertise to solve a problem. Prior to Aristotle, the DOD had no coordinated mechanism for identifying expertise across 200,000 of its employees. Dr. Alok Das, the senior scientist for design innovation tasked with implementing the system, explained, “We don’t know what we know.”
This is a common situation. The government currently has no systematic way of getting help from all those with relevant expertise, experience, and passion. For every success on Challenge.gov—the federal government’s platform where agencies post open calls to solve problems for a prize—there are a dozen open-call projects that never get seen by those who might have the insight or experience to help. This kind of crowdsourcing is still too ad hoc, infrequent, and unpredictable—in short, too unreliable—for the purposes of policy-making.
Which is why technologies like Aristotle are so exciting. Smart, searchable expert networks offer the potential to lower the costs and speed up the process of finding relevant expertise. Aristotle never reached this stage, but an ideal expert network is a directory capable of including not just experts within the government, but also outside citizens with specialized knowledge. This leads to a dual benefit: accelerating the path to innovative and effective solutions to hard problems while at the same time fostering greater citizen engagement.
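A toy sketch of the expert-directory idea behind Aristotle: index people by skills and experience drawn from several records, then search by keyword. The profiles and fields below are invented for illustration and are not Aristotle's actual data model.

```python
# Minimal expertise search over invented personnel profiles.
from dataclasses import dataclass
from typing import List

@dataclass
class Profile:
    name: str
    title: str
    skills: List[str]
    experience: List[str]

directory = [
    Profile("A. Pilot", "Assistant Director",
            ["rotorcraft", "flight operations"],
            ["9 years flying UH-60 Black Hawk helicopters"]),
    Profile("B. Analyst", "Director",
            ["budgeting", "acquisition"],
            ["Pentagon program office"]),
]

def find_experts(query: str, people: List[Profile]) -> List[str]:
    """Return names whose skills or experience mention the query term."""
    q = query.lower()
    return [p.name for p in people
            if any(q in s.lower() for s in p.skills + p.experience)]

print(find_experts("black hawk", directory))  # -> ['A. Pilot']
```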
Could such an expert-network platform revitalize the regulatory-review process? We might find out soon enough, thanks to the Food and Drug Administration…”
With Wikistrat, crowdsourcing gets geopolitical
Much to the surprise of Western intelligence, in a matter of weeks Vladimir Putin’s troops would occupy the disputed Crimean peninsula and a referendum would be passed authorising secession from Ukraine.
That a dispersed team of thinkers – assembled by a consultancy known as Wikistrat – could out-forecast the world’s leading intelligence agencies seems almost farcical. But it is an eye-opening example of yet another way that crowdsourcing is upending conventional wisdom.
Crowdsourcing has long been heralded as a means to shake up stale thinking in corporate spheres by providing cheaper, faster means of processing information and problem solving. But now even traditionally enigmatic defence and intelligence organisations and other geopolitical soothsayers are getting in on the act by using the “wisdom of the crowd” to predict how the chips of world events might fall.
Meanwhile, companies with crucial geopolitical interests, such as energy and financial services firms, have begun commissioning crowdsourced simulations of their own from Wikistrat to better gauge investment risk.
While some intelligence agencies have experimented with crowdsourcing to gain insights from the general public, Wikistrat uses a “closed crowd” of subject experts and bills itself as the world’s first crowdsourced analytical services consultancy.
A typical simulation, run on its interactive web platform, has roughly 70 participants. The crowd’s expertise and diversity are combined with Wikistrat’s patented model of “collaborative competition” that rewards participants for the breadth and quality of their contributions. The process is designed to provide a fresh view and shatter the traditional confines of groupthink….”
Using Crowds for Evaluation Tasks: Validity by Numbers vs. Validity by Expertise
Paper by Christoph Hienerth and Frederik Riar: “Developing and commercializing novel ideas is central to innovation processes. As the outcome of such ideas cannot fully be foreseen, evaluating them is crucial. With the rise of the internet and ICT, more and new kinds of evaluations are done by crowds. This raises the question of whether individuals in crowds possess the necessary capabilities to evaluate and whether their outcomes are valid. As empirical insights are not yet available, this paper deals with the examination of evaluation processes and general evaluation components, the discussion of underlying characteristics and mechanisms of these components affecting evaluation outcomes (i.e. evaluation validity). We further investigate differences between firm- and crowd-based evaluation using different cases of applications, and develop a theoretical framework towards evaluation validity, i.e. validity by numbers vs. validity by expertise. The identified factors that influence the validity of evaluations are: (1) the number of evaluation tasks, (2) complexity, (3) expertise, (4) costs, and (5) time to outcome. For each of these factors, hypotheses are developed based on theoretical arguments. We conclude with implications, proposing a model of evaluation validity.”
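One way to picture the contrast between validity by numbers and validity by expertise is a small simulation comparing the average of many noisy non-expert judgments with the average of a few accurate expert judgments. The noise levels and group sizes below are arbitrary assumptions, not the paper's model.

```python
# Simulation: many noisy judgments averaged vs. a few accurate expert judgments.
import numpy as np

rng = np.random.default_rng(5)
true_value = 7.0                                       # the "true" quality of an idea
crowd = true_value + rng.normal(0, 2.0, size=200)      # many high-noise judgments
experts = true_value + rng.normal(0, 0.5, size=3)      # few low-noise judgments

print(f"crowd mean  : {crowd.mean():.2f}  (error {abs(crowd.mean() - true_value):.2f})")
print(f"expert mean : {experts.mean():.2f}  (error {abs(experts.mean() - true_value):.2f})")
```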