Springwise: “Most cities were never designed to cater for the huge numbers of bikes seen on their roads every day, and as the number of cyclists grows, so do the fatality statistics thanks to limited investment in safe cycle paths. While Berlin already crowdsources bikers’ favorite cycle routes and maps them through the Dynamic Connections platform, a new app called WeCycle lets cyclists track their journeys, pooling their data to create heat maps for city planners.
Created by the UK’s TravelAI transport startup, WeCycle taps into the current consumer trend for quantifying every aspect of life, including journey times. By downloading the free iOS app, London cyclists can seamlessly create stats each time they get on their bike. The app runs in the background and uses the device’s accelerometer to distinguish walking or running from cycling. Cyclists can then see how far they’ve traveled, how fast they cycle and every route they’ve taken. The app also tracks bus and car travel.
Anyone who downloads the app agrees that their data can be anonymously sent to TravelAI, creating an accurate, real-time information resource. TravelAI aims to build tools such as heat maps and behavior monitoring so that cities and local authorities can learn more about how citizens use roads and better inform their transport policies.
WeCycle follows in the footsteps of similar apps such as Germany’s Radwende and the Toronto Cycling App — both released this year — in taking a popular trend and turning it into data that could help make cities safer places to cycle….Website: www.travelai.info“
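The pooling step Springwise describes, turning individual journeys into heat maps for planners, reduces to binning ride coordinates into grid cells and counting. Here is a minimal sketch in Python; the cell size, sample coordinates, and function names are illustrative assumptions, not TravelAI's implementation:

```python
from collections import Counter

# Hypothetical anonymized GPS samples; a ~0.001-degree cell (~100 m)
# is an assumed resolution for illustration only.
CELL_SIZE = 0.001

def heat_map(points):
    """Pool ride coordinates into grid-cell counts for a planning heat map."""
    cells = Counter()
    for lat, lon in points:
        cells[(int(lat // CELL_SIZE), int(lon // CELL_SIZE))] += 1
    return cells

rides = [(51.5074, -0.1278), (51.5075, -0.1279), (51.5200, -0.1000)]
for cell, count in heat_map(rides).most_common(2):
    print(cell, count)
```

A pipeline like this would sit downstream of the accelerometer-based mode detection, so that only segments classified as cycling are pooled.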
Crowdteaching: Supporting Teaching as Designing in Collective Intelligence Communities
Paper by Mimi Recker, Min Yuan, and Lei Ye in the International Review of Research in Open and Distance Learning: “The widespread availability of high-quality Web-based content offers new potential for supporting teachers as designers of curricula and classroom activities. When coupled with a participatory Web culture and infrastructure, teachers can share their creations as well as leverage the best that their peers have to offer to support a collective intelligence or crowdsourcing community, which we dub crowdteaching. We applied a collective intelligence framework to characterize crowdteaching in the context of a Web-based tool for teachers called the Instructional Architect (IA). The IA enables teachers to find, create, and share instructional activities (called IA projects) for their students using online learning resources. These IA projects can further be viewed, copied, or adapted by other IA users. This study examines the usage activities of two samples of teachers, and also analyzes the characteristics of a subset of their IA projects. Analyses of teacher activities suggest that they are engaging in crowdteaching processes. Teachers, on average, chose to share over half of their IA projects, and copied some directly from other IA projects. Thus, these teachers can be seen as both contributors to and consumers of crowdteaching processes. In addition, IA users preferred to view IA projects rather than to completely copy them. Finally, correlational results based on an analysis of the characteristics of IA projects suggest that several easily computed metrics (number of views, number of copies, and number of words in IA projects) can act as an indirect proxy of instructionally relevant indicators of the content of IA projects.”
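The closing claim, that easily computed metrics such as views, copies, and word counts can proxy for instructionally relevant content, rests on a simple correlational analysis. A minimal sketch of that computation, assuming hypothetical IA-project records; the numbers and field order are illustrative, not the study's data:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical IA-project records: (views, copies, word count).
projects = [
    (120, 4, 350),
    (45, 1, 120),
    (300, 9, 600),
    (10, 0, 80),
    (75, 2, 240),
]
views, copies, words = zip(*projects)

# Pearson correlations between the easily computed metrics the paper names.
print("views ~ words :", round(correlation(views, words), 3))
print("views ~ copies:", round(correlation(views, copies), 3))
```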
DrivenData
DrivenData Blog: “As we begin launching our first competitions, we thought it would be a good idea to lay out what exactly we’re trying to do and why….
At DrivenData, we want to bring cutting-edge practices in data science and crowdsourcing to some of the world’s biggest social challenges and the organizations taking them on. We host online challenges, usually lasting 2-3 months, where a global community of data scientists competes to come up with the best statistical model for difficult predictive problems that make a difference.
Just like every major corporation today, nonprofits and NGOs have more data than ever before. And just like those corporations, they are trying to figure out how to make the best use of their data. We work with mission-driven organizations to identify specific predictive questions that they care about answering and can use their data to tackle.
Then we host the online competitions, where experts from around the world vie to come up with the best solution. Some competitors are experienced data scientists in the private sector, analyzing corporate data by day, saving the world by night, and testing their mettle on complex questions of impact. Others are smart, sophisticated students and researchers looking to hone their skills on real-world datasets and real-world problems. Still more have extensive experience with social sector data and want to bring their expertise to bear on new, meaningful challenges – with immediate feedback on how well their solution performs.
Like any data competition platform, we want to harness the power of crowds combined with the increasing prevalence of large, relevant datasets. Unlike other data competition platforms, our primary goal is to create actual, measurable, lasting positive change in the world with our competitions. At the end of each challenge, we work with the sponsoring organization to integrate the winning solutions, giving them the tools to drive real improvements in their impact….
We are launching soon and we want you to join us!
If you want to get updates about our launch this fall with exciting, real competitions, please sign up for our mailing list here and follow us on Twitter: @drivendataorg.
If you are a data scientist, feel free to create an account and start playing with our first sandbox competitions.
If you are a nonprofit or public sector organization, and want to squeeze every drop of mission effectiveness out of your data, check out the info on our site and let us know! “
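The competitive core the post describes, a global field of modelers ranked by predictive accuracy, can be sketched in a few lines: score each submission against held-out ground truth and sort. A minimal leaderboard sketch; the RMSE metric, team names, and data are illustrative assumptions, not DrivenData's actual evaluation pipeline:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error: lower is better."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical held-out ground truth and two competitors' predictions.
actual = [3.0, 1.5, 4.0, 2.2]
submissions = {
    "team_alpha": [2.8, 1.7, 4.1, 2.0],
    "team_beta": [3.5, 1.0, 3.0, 2.9],
}

# Rank submissions by score, best first.
for rank, (team, preds) in enumerate(
        sorted(submissions.items(), key=lambda kv: rmse(kv[1], actual)), start=1):
    print(rank, team, round(rmse(preds, actual), 4))
```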
Citizen Science: The Law and Ethics of Public Access to Medical Big Data
New Paper by Sharona Hoffman: “Patient-related medical information is becoming increasingly available on the Internet, spurred by government open data policies and private sector data sharing initiatives. Websites such as HealthData.gov, GenBank, and PatientsLikeMe allow members of the public to access a wealth of health information. As the medical information terrain quickly changes, the legal system must not lag behind. This Article provides a base on which to build a coherent data policy. It canvasses emergent data troves and wrestles with their legal and ethical ramifications.
Publicly accessible medical data have the potential to yield numerous benefits, including scientific discoveries, cost savings, the development of patient support tools, healthcare quality improvement, greater government transparency, public education, and positive changes in healthcare policy. At the same time, the availability of electronic personal health information that can be mined by any Internet user raises concerns related to privacy, discrimination, erroneous research findings, and litigation. This Article analyzes the benefits and risks of health data sharing and proposes balanced legislative, regulatory, and policy modifications to guide data disclosure and use.”
Business Models for Open Innovation: Matching Heterogeneous Open Innovation Strategies with Business Model Dimensions
New paper by Saebi, Tina and Foss, Nicolai, available at SSRN: “Research on open innovation suggests that companies benefit differentially from adopting open innovation strategies; however, it is unclear why this is so. One possible explanation is that companies’ business models are not attuned to open strategies. Accordingly, we propose a contingency model of open business models by systematically linking open innovation strategies to core business model dimensions, notably the content, structure, and governance of transactions. We further illustrate a continuum of open innovativeness, differentiating between four types of open business models. We contribute to the open innovation literature by specifying the conditions under which business models are conducive to the success of open innovation strategies.”
Bridging the Knowledge Gap: In Search of Expertise
New paper by Beth Simone Noveck, The GovLab, for Democracy: “In the early 2000s, the Air Force struggled with a problem: Pilots and civilians were dying because of unusual soil and dirt conditions in Afghanistan. The soil was getting into the rotors of the Sikorsky UH-60 helicopters and obscuring the view of its pilots—what the military calls a “brownout.” According to the Air Force’s senior design scientist, the manager tasked with solving the problem didn’t know where to turn quickly to get help. As it turns out, the man practically sitting across from him had nine years of experience flying these Black Hawk helicopters in the field, but the manager had no way of knowing that. Civil service titles such as director and assistant director reveal little about skills or experience.
In the fall of 2008, the Air Force sought to fill in these kinds of knowledge gaps. The Air Force Research Laboratory unveiled Aristotle, a searchable internal directory that integrated people’s credentials and experience from existing personnel systems, public databases, and users themselves, thus making it easy to discover quickly who knew and had done what. Near-term budgetary constraints killed Aristotle in 2013, but the project underscored a glaring need in the bureaucracy.
Aristotle was an attempt to solve a challenge faced by every agency and organization: quickly locating expertise to solve a problem. Prior to Aristotle, the DOD had no coordinated mechanism for identifying expertise across 200,000 of its employees. Dr. Alok Das, the senior scientist for design innovation tasked with implementing the system, explained, “We don’t know what we know.”
This is a common situation. The government currently has no systematic way of getting help from all those with relevant expertise, experience, and passion. For every success on Challenge.gov—the federal government’s platform where agencies post open calls to solve problems for a prize—there are a dozen open-call projects that never get seen by those who might have the insight or experience to help. This kind of crowdsourcing is still too ad hoc, infrequent, and unpredictable—in short, too unreliable—for the purposes of policy-making.
Which is why technologies like Aristotle are so exciting. Smart, searchable expert networks offer the potential to lower the costs and speed up the process of finding relevant expertise. Aristotle never reached this stage, but an ideal expert network is a directory capable of including not just experts within the government, but also outside citizens with specialized knowledge. This leads to a dual benefit: accelerating the path to innovative and effective solutions to hard problems while at the same time fostering greater citizen engagement.
Could such an expert-network platform revitalize the regulatory-review process? We might find out soon enough, thanks to the Food and Drug Administration…”
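At its core, an Aristotle-style directory is an index from skills and experience to people, so that a query like "UH-60 experience" surfaces the pilot sitting across the aisle. A minimal expertise-lookup sketch, assuming hypothetical personnel records; the names, keywords, and matching logic are illustrative, not the Air Force system:

```python
from collections import defaultdict

# Hypothetical personnel records: name -> skill/experience keywords.
people = {
    "A. Analyst": {"acquisition", "budgeting"},
    "B. Pilot": {"uh-60", "rotary-wing", "afghanistan"},
    "C. Engineer": {"rotor design", "materials"},
}

# Inverted index: keyword -> people who list it.
index = defaultdict(set)
for name, skills in people.items():
    for skill in skills:
        index[skill].add(name)

def find_experts(*keywords):
    """Return everyone matching any queried keyword."""
    return set().union(*(index.get(k, set()) for k in keywords))

print(find_experts("uh-60", "rotor design"))  # {'B. Pilot', 'C. Engineer'}
```

Integrating credentials "from existing personnel systems, public databases, and users themselves," as Aristotle did, is then a matter of merging keyword sets from several sources into the same index.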
Agency Liability Stemming from Citizen-Generated Data
Paper by Bailey Smith for The Wilson Center’s Science and Technology Innovation Program: “New ways to gather data are on the rise. One of these ways is through citizen science. According to a new paper by Bailey Smith, JD, federal agencies can feel confident about using citizen science for a few reasons. First, the legal system provides significant protection from liability through the Federal Tort Claims Act (FTCA) and the Administrative Procedure Act (APA). Second, training and technological innovation have made it easier for the non-scientist to collect high-quality data.”
With Wikistrat, crowdsourcing gets geopolitical
Much to the surprise of western intelligence, in a matter of weeks Vladimir Putin’s troops would occupy the disputed peninsula and a referendum would be passed authorising secession from Ukraine.
That a dispersed team of thinkers – assembled by a consultancy known as Wikistrat – could out-forecast the world’s leading intelligence agencies seems almost farcical. But it is an eye-opening example of yet another way that crowdsourcing is upending conventional wisdom.
Crowdsourcing has long been heralded as a means to shake up stale thinking in corporate spheres by providing cheaper, faster means of processing information and problem solving. But now even traditionally enigmatic defence and intelligence organisations and other geopolitical soothsayers are getting in on the act by using the “wisdom of the crowd” to predict how the chips of world events might fall.
Meanwhile, companies with crucial geopolitical interests, such as energy and financial services firms, have begun commissioning crowdsourced simulations of their own from Wikistrat to better gauge investment risk.
While some intelligence agencies have experimented with crowdsourcing to gain insights from the general public, Wikistrat uses a “closed crowd” of subject experts and bills itself as the world’s first crowdsourced analytical services consultancy.
A typical simulation, run on its interactive web platform, has roughly 70 participants. The crowd’s expertise and diversity is combined with Wikistrat’s patented model of “collaborative competition” that rewards participants for the breadth and quality of their contributions. The process is designed to provide a fresh view and shatter the traditional confines of groupthink….”
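Wikistrat's model of "collaborative competition" is patented and not publicly specified, but the underlying idea, rewarding both the breadth and the quality of contributions, can be illustrated abstractly. A toy scoring function with arbitrary assumed weights; this sketches the concept only, not Wikistrat's method:

```python
# Toy contribution score: breadth = distinct scenario branches touched,
# quality = mean peer rating. Both weights are arbitrary assumptions.
BREADTH_WEIGHT, QUALITY_WEIGHT = 1.0, 2.0

def contribution_score(branches_touched, peer_ratings):
    breadth = len(set(branches_touched))
    quality = sum(peer_ratings) / len(peer_ratings) if peer_ratings else 0.0
    return BREADTH_WEIGHT * breadth + QUALITY_WEIGHT * quality

print(contribution_score(["energy", "security", "energy"], [4, 5, 3]))  # 10.0
```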
Using Crowds for Evaluation Tasks: Validity by Numbers vs. Validity by Expertise
Paper by Christoph Hienerth and Frederik Riar: “Developing and commercializing novel ideas is central to innovation processes. As the outcome of such ideas cannot fully be foreseen, the evaluation of them is crucial. With the rise of the internet and ICT, more evaluations, and new kinds of evaluation, are performed by crowds. This raises the question of whether individuals in crowds possess the necessary capabilities to evaluate and whether their outcomes are valid. As empirical insights are not yet available, this paper examines evaluation processes and general evaluation components, and discusses the underlying characteristics and mechanisms of these components that affect evaluation outcomes (i.e., evaluation validity). We further investigate differences between firm- and crowd-based evaluation across different application cases, and develop a theoretical framework for evaluation validity, i.e., validity by numbers vs. validity by expertise. The identified factors that influence the validity of evaluations are: (1) the number of evaluation tasks, (2) complexity, (3) expertise, (4) costs, and (5) time to outcome. For each of these factors, hypotheses are developed based on theoretical arguments. We conclude with implications, proposing a model of evaluation validity.”
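The paper's central contrast, validity by numbers versus validity by expertise, can be made concrete with a small simulation: many noisy crowd ratings averaged together against a single low-noise expert rating. All noise levels and sample sizes below are illustrative assumptions, not the authors' parameters:

```python
import random

random.seed(1)
TRUE_QUALITY = 7.0  # assumed ground-truth quality of the idea being rated

def crowd_estimate(n=70, noise=2.0):
    """Validity by numbers: average many high-noise ratings."""
    return sum(random.gauss(TRUE_QUALITY, noise) for _ in range(n)) / n

def expert_estimate(noise=0.5):
    """Validity by expertise: a single low-noise rating."""
    return random.gauss(TRUE_QUALITY, noise)

trials = 1000
crowd_err = sum(abs(crowd_estimate() - TRUE_QUALITY) for _ in range(trials)) / trials
expert_err = sum(abs(expert_estimate() - TRUE_QUALITY) for _ in range(trials)) / trials
print(f"mean crowd error:  {crowd_err:.3f}")
print(f"mean expert error: {expert_err:.3f}")
```

With these particular assumptions the averaged crowd edges out the single expert; shift the noise levels or crowd size and the ranking flips, which is the kind of contingency the paper's five factors are meant to capture.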
City 72 Toolkit
“An effective preparedness platform customizable to your city. City72 is an open-source emergency preparedness platform that promotes community resilience and connection. This Toolkit is designed specifically for emergency preparedness organizations and provides the information and resources to create a customized City72 site for any city or region. It includes: how to create localized content, access to the code to build and install your City72 website, and tips for how to manage and promote your site.”
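Because City72 is distributed as open-source code that each city customizes, the localization step amounts to supplying city-specific content before deployment. A hypothetical configuration sketch in Python; the keys, values, and validation rule are assumptions for illustration, not City72's actual schema:

```python
# Hypothetical localized-content settings for a City72-style deployment;
# keys and structure are illustrative, not City72's real schema.
site_config = {
    "city": "Springfield",
    "languages": ["en", "es"],
    "hazards": ["flood", "severe storm"],
    "local_resources": [
        {"name": "Springfield OEM", "url": "https://example.org/oem"},
    ],
}

def validate(config):
    """Check that required localized fields are present before deploying."""
    required = {"city", "languages", "hazards", "local_resources"}
    missing = required - set(config)
    if missing:
        raise ValueError(f"missing localized content: {sorted(missing)}")
    return True

print(validate(site_config))
```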