With real-time decisions, Citi Bike breaks the cycle of empty stations


Melanie Lefkowitz at Cornell Chronicle: “Cornell research has improved bike sharing in New York and other cities, providing tools to ensure bikes are available when and where they’re needed through a crowdsourcing system that uses real-time information to make decisions.

Citi Bike redistributes its bicycles around New York City using a program called Bike Angels, based on research by David Shmoys, the Laibe/Acheson Professor of Business Management and Leadership Studies in the School of Operations Research and Information Engineering.

Through Bike Angels, which Shmoys helped Citi Bike develop three years ago, cyclists earn points adding up to free rides and other prizes by using or returning bikes at certain high-need stations. Originally, Bike Angels awarded points for the same fixed pattern of stations every morning rush and a different fixed pattern every afternoon rush; now the program uses an algorithm that continually updates the pattern of stations for which users earn points.

“The ability to make decisions that are sensitive to exactly what are today’s conditions enables us to be much more effective in assigning those points,” said Shmoys, who is also associate director of Cornell’s Institute for Computational Sustainability.

With co-authors Hangil Chung ’18 and Daniel Freund, Ph.D. ’18, Shmoys wrote “Bike Angels: An Analysis of Citi Bike’s Incentive Program,” a detailed report showing the effectiveness of this approach. …(More)”.

‘To own or not to own?’ A study on the determinants and consequences of alternative intellectual property rights arrangements in crowdsourcing for innovation contests


Paper by Nuran Acur, Mariangela Piazza and Giovanni Perrone: “Firms are increasingly engaging in crowdsourcing for innovation to access new knowledge beyond their boundaries; however, scholars are no closer to understanding what guides seeker firms in deciding the level at which to acquire rights from solvers and the effect that this decision has on the performance of crowdsourcing contests.

Integrating Property Rights Theory and the problem-solving perspective, whilst leveraging exploratory interviews and observations, we build a theoretical framework to examine how specific attributes of the broadcast technical problem affect the seekers’ choice between alternative intellectual property rights (IPR) arrangements that call for acquiring or licensing‐in IPR from external solvers (i.e., with high and low degrees of ownership, respectively). Each technical problem differs in the knowledge required to solve it as well as in the stage of the innovation process at which it occurs, and seeker firms pay great attention to such characteristics when deciding on the IPR arrangement for their contests.

In addition, we analyze how this choice between acquiring and licensing‐in IPR, in turn, influences the performance of the contest. We empirically test our hypotheses analyzing a unique dataset of 729 challenges broadcast on the InnoCentive platform from 2010 to 2016. Our results indicate that challenges related to technical problems in later stages of the innovation process are positively related to the seekers’ preference toward IPR arrangements with a high level of ownership, while technical problems involving a higher number of knowledge domains are not.

Moreover, we found that IPR arrangements with a high level of ownership negatively affect solvers’ participation and that IPR arrangement plays a mediating role between the attributes of the technical problem and the solvers’ self‐selection process. Our article contributes to the open innovation and crowdsourcing literature and provides practical implications for both managers and contest organizers….(More)”.

JPMorgan is quietly building an IBM Watson-like platform


Frank Chaparro at BusinessInsider: “JPMorgan’s corporate and investment bank is best known for advising businesses on billion-dollar acquisitions, helping private unicorns tap into the public markets, and managing the cash of Fortune 500 companies.

But now it is quietly working on a new platform that would go far beyond anything the firm has previously done, using crowdsourcing to accumulate massive amounts of data intended to one day help its clients make complex decisions about how to run their businesses, according to people familiar with the project.

For JPMorgan’s clients like asset-management firms and hedge funds, it could provide new data sets to help investors squeeze out more alpha from their models or better price assets. But JPMorgan is looking to go beyond the buy side to help its large corporate clients as well. The platform could, for example, help retailers figure out where to build their next store, inform manufacturers about how to revamp systems in their factories, and improve logistics management for delivery services companies, the people said.

The platform, called Roar by JPMorgan, would store sensitive private data, such as hospital records or satellite imagery, that’s not in the public domain. Typically, this type of information is exchanged between firms on a bilateral arrangement so it is not improperly used. But Roar would allow clients to tap into this data, which they could then use in a secure fashion to make forecasts and gain business insights….

Right now, the platform is being tested internally with public data, and JPMorgan is collaborating with academics on problems such as predicting traffic patterns or future air pollution….(More)”.

Citizen science, public policy


Paper by Christi J. Guerrini, Mary A. Majumder, Meaganne J. Lewellyn, and Amy L. McGuire in Science: “Citizen science initiatives that support collaborations between researchers and the public are flourishing. As a result of this enhanced role of the public, citizen science demonstrates more diversity and flexibility than traditional science and can encompass efforts that have no institutional affiliation, are funded entirely by participants, or continuously or suddenly change their scientific aims.

But these structural differences have regulatory implications that could undermine the integrity, safety, or participatory goals of particular citizen science projects. Thus far, citizen science appears to be addressing regulatory gaps and mismatches through voluntary actions of thoughtful and well-intentioned practitioners.

But as citizen science continues to surge in popularity and increasingly engage divergent interests, vulnerable populations, and sensitive data, it is important to consider the long-term effectiveness of these private actions and whether public policies should be adjusted to complement or improve on them. Here, we focus on three policy domains that are relevant to most citizen science projects: intellectual property (IP), scientific integrity, and participant protections….(More)”.

Trust, Security, and Privacy in Crowdsourcing


Guest Editorial to Special Issue of IEEE Internet of Things Journal: “As we become increasingly reliant on intelligent, interconnected devices in every aspect of our lives, critical trust, security, and privacy concerns are raised as well.

First, the sensing data provided by individual participants is not always reliable. It may be noisy or even faked for various reasons, such as poor sensor quality, lack of sensor calibration, background noise, context effects, mobility, an incomplete view of observations, or malicious attacks. Crowdsourcing applications should be able to evaluate the trustworthiness of collected data in order to filter out noisy and fake data that could disrupt or corrupt a crowdsourcing system. Second, providing data (e.g., photographs taken with personal mobile devices) or using IoT applications may compromise data providers’ personal data privacy (e.g., location, trajectory, and activity privacy) and identity privacy. Therefore, it becomes essential to assess the trust of the data while preserving the data providers’ privacy. Third, data analytics and mining in crowdsourcing may disclose the privacy of data providers or related entities to unauthorized parties, which lowers the willingness of participants to contribute to the crowdsourcing system, impacts system acceptance, and greatly impedes its further development. Fourth, the identities of data providers could be forged by malicious attackers to infiltrate the whole crowdsourcing system. In this context, trust, security, and privacy are attracting special attention as prerequisites for high quality of service in each step of crowdsourcing: data collection, transmission, selection, processing, analysis and mining, and utilization.

Trust, security, and privacy in crowdsourcing are receiving increasing attention, and many methods have been proposed to protect privacy during data collection and processing. For example, data perturbation can be adopted to hide the real data values during collection. When preprocessing the collected data, data anonymization (e.g., k-anonymization) and fusion can be applied to break the links between the data and their sources/providers. At the application layer, anonymity is used to mask the real identities of data sources/providers. To enable privacy-preserving data mining, secure multiparty computation (SMC) and homomorphic encryption provide options for protecting raw data when multiple parties jointly run a data mining algorithm; through such cryptographic techniques, no party learns anything beyond its own input and the expected results. For data truth discovery, applicable solutions include correlation-based data quality analysis and trust evaluation of data sources. But current solutions remain imperfect, incomplete, and inefficient….(More)”.
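The editorial names these techniques without detailing them, but the data-perturbation idea can be illustrated with a minimal, hypothetical sketch: each participant adds Laplace noise to a reading before reporting it (the mechanism behind local differential privacy), so no individual report can be trusted or traced on its own, yet the aggregate stays useful. The parameter values below are illustrative, not drawn from any cited system.

```python
import random
import statistics

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def perturb(value: float, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    """Report a noisy value; smaller epsilon means stronger privacy and more noise."""
    return value + laplace_noise(sensitivity / epsilon)

# Each participant reports a perturbed reading. No single report is reliable,
# but the zero-mean noise cancels out across a large crowd.
random.seed(42)
true_readings = [20.0] * 10_000
reported = [perturb(r) for r in true_readings]
estimate = statistics.fmean(reported)  # close to the true mean of 20.0
```

The `sensitivity / epsilon` scale follows the standard Laplace mechanism; in a real crowdsensing deployment the aggregator would also need the trust-evaluation and truth-discovery steps the editorial describes, since perturbation alone does not detect fabricated data.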

Open innovation and the evaluation of internet-enabled public services in smart cities


Krassimira Paskaleva and Ian Cooper in Technovation: “This article is focused on public service innovation from an innovation management perspective. It presents research experience gained from a European project for managing social and technological innovation in the production and evaluation of citizen-centred internet-enabled services in the public sector.

It is based on six urban pilot initiatives, which sought to operationalise a new approach to co-producing and co-evaluating civic services in smart cities – commonly referred to as open innovation for smart city services. Research suggests that the evidence base underpinning this approach is not sufficiently robust to support claims being made about its effectiveness.

Instead, evaluation research on citizen-centred internet-enabled urban services is in its infancy, and the literature offers no tested methods or tools for supporting this approach.

The paper reports on the development and trialing of a novel Co-evaluation Framework, with indicators and reporting categories, used to support the co-production of smart city services in an EU-funded project. Our point of departure is that service innovation is a subset of innovation management that requires effective integration of technological with social innovation, supported by the right skills and capacities. The main skill set needed for effective co-evaluation of open innovation services is the integration of stakeholder management with evaluation capacities.”

The CrowdLaw Catalog


The GovLab: “The CrowdLaw Catalog is a growing repository of 100 CrowdLaw cases from around the world. The goal of the catalog is to help those wishing to start new or improve existing CrowdLaw projects to learn from one another.

Examples are tagged and searchable by four criteria:

  1. Level – What level of government is involved? Search by: National, Regional, and/or Local
  2. Stage – At what stage of the law- or policymaking process does the participation take place? Search by: Problem Identification, Solution Identification, Drafting, Decision Making, Implementation, and/or Assessment
  3. Task – What are people being asked to contribute? Search by: Ideas, Expertise, Opinions, Evidence, and/or Actions
  4. Technology – What is the platform? Search by: Web, Mobile, and/or Offline
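The four facets above amount to tag-based filtering. As a hypothetical sketch (the catalog's actual schema and entries are not given here; the record shape, field names, and sample cases below are invented for illustration), a search across those facets might look like:

```python
from dataclasses import dataclass, field

@dataclass
class CaseEntry:
    # Hypothetical record: each facet is a set of tags from the catalog's vocabulary.
    name: str
    level: set[str] = field(default_factory=set)       # National / Regional / Local
    stage: set[str] = field(default_factory=set)       # Problem Identification, Drafting, ...
    task: set[str] = field(default_factory=set)        # Ideas, Expertise, Opinions, ...
    technology: set[str] = field(default_factory=set)  # Web / Mobile / Offline

def search(cases, **criteria):
    """Keep cases matching every given facet; values within a facet are OR-ed."""
    return [
        c for c in cases
        if all(getattr(c, facet) & set(wanted) for facet, wanted in criteria.items())
    ]

catalog = [
    CaseEntry("vTaiwan", level={"National"}, stage={"Drafting"},
              task={"Opinions"}, technology={"Web"}),
    CaseEntry("Decide Madrid", level={"Local"}, stage={"Problem Identification"},
              task={"Ideas"}, technology={"Web", "Mobile"}),
]

local_web = search(catalog, level=["Local"], technology=["Web"])
```

Because each facet is a set, a case can carry several tags at once (e.g. both Web and Mobile), which is what the catalog's "and/or" phrasing implies.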

The catalog offers brief descriptions of each initiative and links to additional resources….(More)”.

Prizes are a powerful spur to innovation and breakthroughs


John Thornhill in the Financial Times: “…All too often today we leave research and innovation in the hands of the so-called professionals, often with disappointing results. Winning a prize often matters less than the stimulus it provides for innovators in neighbouring fields.

In recent years, there has been an explosion in the number of professional scientists. Unesco estimates that there were 7.8m full-time researchers in 2013.

The number of scientific journals has also increased, making it difficult even for specialists to remain on top of all the latest advances in their field. In spite of this explosion of knowledge and research spending, there has been a striking lack of breakthrough innovations, as economists such as Robert Gordon and Tyler Cowen have noted.

Maybe this is because all the low-hanging technological fruit has been eaten. Or perhaps it is because our research and development methodology has gone awry.

Geoff Mulgan, chief executive of Nesta, is one of those who is trying to revive the concept of prizes as a means of encouraging innovation. His public foundation runs the Challenge Prize Centre, offering awards of up to £10m for innovation in the fields of energy and the environment, healthcare, and community wellbeing. “Setting a specific target, opening up to anyone to meet it, and providing a financial reward if they succeed is the opposite of how most R&D is done,” Mr Mulgan says. “We should all focus more on outcomes than inputs.”…

But these prizes are far from being a panacea. Indeed, they can sometimes lead to perverse results, encouraging innovators to fixate on just one original goal while ignoring serendipitous surprises along the way. Many innovations are the happy byproduct of research rather than its primary outcome. An academic paper on the effectiveness of innovation prizes concluded that they could be a useful addition to the armoury but were no substitute for other proven forms of research and development. The authors also warned that if prizes were poorly designed, managed, and awarded, they could prove “ineffective or even harmful”.

That makes it essential to design competitions in careful and precise detail. It also helps if there are periodic payouts along the way to encourage the most promising ideas. Many companies have embraced the concept of open innovation and increasingly look to collaborate with outside partners to develop fresh ideas, sometimes by means of corporate prizes….(More)”.

A platform that puts political lobbying back into the hands of everyday people


Michael Krumholtz at StartUpBeat: “Amit Thakkar saw firsthand how messy and inefficient politics can be from the inside. While working as a political consultant for a decade, Thakkar said, he became frustrated with seeing the same old players decide policy with almost no influence from actual constituents or voters.

That’s a large part of why he decided to create LawMaker.io, which bills itself as a revolutionary platform that gives people in the U.S. the chance to crowdsource propositions for new laws. That allows support to build for popular ideas, which are eventually handed over to legislators to propose as real laws. The platform, which touts itself as a “free lobby for the lobbyless,” could very much change the face of U.S. democracy, Thakkar said.

“It didn’t make sense to me that such a small group of wealthy and well-connected people had such an outsized influence on the laws that are written and the way our government works,” he told Techli. “I knew there needed to be a free way that all Americans could propose common-sense ideas for laws and influence elected officials in a way that benefitted all Americans instead of just a powerful few.”

LawMaker.io works by finding policy ideas at the ground level and making sure they reach a wider audience. After a user proposes a policy idea, the user shares it widely, and suggestions are made for possible amendments to the initial proposal. Support is then gathered until the idea has at least 100 registered supporters, at which point it is sent off to the appropriate legislators.
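The flow just described is a simple threshold workflow. As a hypothetical sketch (the class and method names are invented; only the 100-supporter threshold comes from the article), it could be modeled like this:

```python
from dataclasses import dataclass, field

SUPPORT_THRESHOLD = 100  # per the article: at least 100 registered supporters

@dataclass
class Proposal:
    title: str
    supporters: set[str] = field(default_factory=set)
    amendments: list[str] = field(default_factory=list)

    def support(self, user_id: str) -> None:
        # A set ignores duplicate sign-ups from the same registered user.
        self.supporters.add(user_id)

    def ready_for_legislators(self) -> bool:
        return len(self.supporters) >= SUPPORT_THRESHOLD

p = Proposal("Disclose campaign donors in real time")  # illustrative title
for i in range(100):
    p.support(f"user{i}")
```

Tracking supporters as a set of registered user IDs, rather than a raw counter, is what makes "registered supporters" meaningful: the same person backing a proposal twice does not move it closer to the threshold.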

LawMaker.io recently held its 2nd Lawmaker Challenge to offer up a winning policy proposal to legislators. As the Supreme Court’s Citizens United decision has become so influential in allowing big money to essentially buy politics, the winning proposal sought to reverse the impacts of the decision and shift influence back from wealthy interests to voters….(More)”.

Can crowdsourcing scale fact-checking up, up, up? Probably not, and here’s why


Mevan Babakar at NiemanLab: “We foolishly thought that harnessing the crowd was going to require fewer human resources, when in fact it required, at least at the micro level, more.”….There’s no end to the need for fact-checking, but fact-checking teams are usually small and struggle to keep up with the demand. In recent months, organizations like WikiTribune have suggested crowdsourcing as an attractive, low-cost way that fact-checking could scale.

As the head of automated fact-checking at the U.K.’s independent fact-checking organization Full Fact, I’ve had a lot of time to think about these suggestions, and I don’t believe that crowdsourcing can solve the fact-checking bottleneck. It might even make it worse. But — as two notable attempts, TruthSquad and FactCheckEU, have shown — even if crowdsourcing can’t help scale the core business of fact-checking, it could help streamline activities that take place around it.

Think of crowdsourced fact-checking as including three components: speed (how quickly the task can be done), complexity (how difficult the task is to perform; how much oversight it needs), and coverage (the number of topics or areas that can be covered). You can optimize for (at most) two of these at a time; the third has to be sacrificed.

High-profile examples of crowdsourcing like Wikipedia, Quora, and Stack Overflow harness and gather collective knowledge, and have proven that large crowds can be used in meaningful ways for complex tasks across many topics. But the tradeoff is speed.

Projects like Gender Balance (which asks users to identify the gender of politicians) and Democracy Club Candidates (which crowdsources information about election candidates) have shown that small crowds can have a big effect when it comes to simple tasks, done quickly. But the tradeoff is broad coverage.

At Full Fact, during the 2015 U.K. general election, we had 120 volunteers aid our media monitoring operation. They looked through the entire media output every day and extracted the claims being made. The tradeoff here was that the task wasn’t very complex (it didn’t need oversight, and we only had to do a few spot checks).

But we do have two examples of projects that have operated at high levels of complexity, within short timeframes, and across broad areas: TruthSquad and FactCheckEU….(More)”.