The European Lead Factory: Collective intelligence and cooperation to improve patients’ lives


Press Release: “While researchers from small and medium-sized companies and academic institutions often have enormous numbers of ideas, they don’t always have enough time or resources to develop them all. As a result, many ideas get left behind because companies and academics typically have to focus on narrow areas of research. This is known as the “Innovation Gap”. ESCulab (European Screening Centre: unique library for attractive biology) aims to turn this problem into an opportunity by creating a comprehensive library of high-quality compounds. This will serve as a basis for testing potential research targets against a wide variety of compounds.

Any researcher from a European academic institution or a small to medium-sized enterprise within the consortium can apply for a screening of their potential drug target. If a submitted target idea is positively assessed by a committee of experts, it will be run through a screening process and the submitting party will receive a dossier of up to 50 potentially relevant substances that can serve as starting points for further drug discovery activities.

ESCulab will build Europe’s largest collaborative drug discovery platform and is equipped with a total budget of €36.5 million: half is provided by the European Union’s Innovative Medicines Initiative (IMI) and half comes from in-kind contributions from companies of the European Federation of Pharmaceutical Industries and Associations (EFPIA) and the Medicines for Malaria Venture. It builds on the existing library of the European Lead Factory, which consists of around 200,000 compounds, as well as around 350,000 compounds from EFPIA companies. The European Lead Factory aims to initiate 185 new drug discovery projects through the ESCulab project by screening drug targets against its library.

… The platform has already provided a major boost for drug discovery in Europe and is a strong example of how crowdsourcing, collective intelligence and the cooperation within the IMI framework can create real value for academia, industry, society and patients….(More)”

The 100 Questions Initiative: Sourcing 100 questions on key societal challenges that can be answered by data insights


Press Release: “The Governance Lab at the NYU Tandon School of Engineering announced the launch of the 100 Questions Initiative — an effort to identify the most important societal questions whose answers can be found in data and data science if the power of data collaboratives is harnessed.

The initiative, launched with initial support from Schmidt Futures, seeks to address challenges on numerous topics, including migration, climate change, poverty, and the future of work.

For each of these areas and more, the initiative will seek to identify questions that could help unlock the potential of data and data science with the broader goal of fostering positive social, environmental, and economic transformation. These questions will be sourced by leveraging “bilinguals” — practitioners across disciplines from all over the world who possess both domain knowledge and data science expertise.

The 100 Questions Initiative starts by identifying 10 key questions related to migration. These include questions related to the geographies of migration, migrant well-being, enforcement and security, and the vulnerabilities of displaced people. This inaugural effort involves partnerships with the International Organization for Migration (IOM) and the European Commission, both of which will provide subject-matter expertise and facilitation support within the framework of the Big Data for Migration Alliance (BD4M).

“While there have been tremendous efforts to gather and analyze data relevant to many of the world’s most pressing challenges, as a society, we have not taken the time to ensure we’re asking the right questions to unlock the true potential of data to help address these challenges,” said Stefaan Verhulst, co-founder and chief research and development officer of The GovLab. “Unlike other efforts focused on data supply or data science expertise, this project seeks to radically improve the set of questions that, if answered, could transform the way we solve 21st century problems.”

In addition to identifying key questions, the 100 Questions Initiative will also focus on creating new data collaboratives. Data collaboratives are an emerging form of public-private partnership that helps unlock the public interest value of previously siloed data. The GovLab has conducted significant research on the value of data collaboration, finding that inter-sectoral collaboration can both increase access to information (e.g., the vast stores of data held by private companies) and unleash the potential of that information to serve the public good….(More)”.

Virtual Briefing at the Supreme Court


Paper by Alli Orr Larsen and Jeffrey L. Fisher: “The open secret of Supreme Court advocacy in a digital era is that there is a new way to argue to the Justices. Today’s Supreme Court arguments are developed online: They are dissected and explored in blog posts, fleshed out in popular podcasts, and analyzed and re-analyzed by experts who do not represent the parties and have not even filed a brief in the case. This “virtual briefing” (as we call it) is intended to influence the Justices and their law clerks but exists completely outside of traditional briefing rules. This article describes virtual briefing and makes a case that the key players inside the Court are listening. In particular, we show that the Twitter patterns of law clerks indicate they are paying close attention to producers of virtual briefing, and threads of these arguments (proposed and developed online) are starting to appear in the Court’s decisions.

We argue that this “crowdsourcing” dynamic to Supreme Court decision-making is at least worth a serious pause. There is surely merit to enlarging the dialogue around the issues the Supreme Court decides – maybe the best ideas will come from new voices in the crowd. But the confines of the adversarial process have been around for centuries, and there are significant risks that come with operating outside of it, particularly given the unique nature and speed of online discussions. We analyze those risks in this article and suggest it is time to think hard about embracing virtual briefing — truly assessing what can be gained and what will be lost along the way….(More)”.

Crowdsourcing Research Questions? Leveraging the Crowd’s Experiential Knowledge for Problem Finding


Paper by Tiare-Maria Brasseur, Susanne Beck, Henry Sauermann, Marion Poetz: “Recently, both researchers and policy makers have become increasingly interested in involving the general public (i.e., the crowd) in the discovery of new science-based knowledge. There has been a boom of citizen science/crowd science projects (e.g., Foldit or Galaxy Zoo) and global policy aspirations for greater public engagement in science (e.g., Horizon Europe). At the same time, however, there are also criticisms or doubts about this approach. Science is complex and laypeople often do not have the appropriate knowledge base for scientific judgments, so they rely on specialized experts (i.e., scientists) (Scharrer, Rupieper, Stadtler & Bromme, 2017). Given these two perspectives, there is no consensus yet on what the crowd can do and what only researchers should do in scientific processes (Franzoni & Sauermann, 2014).

Previous research demonstrates that crowds can be efficiently and effectively used in late stages of the scientific research process (i.e., data collection and analysis). We are interested in finding out what crowds can actually contribute to research processes beyond data collection and analysis. Specifically, this paper aims at providing first empirical insights on how to leverage not only the sheer number of crowd contributors, but also their diversity in experience, for early phases of the research process (i.e., problem finding). In an online and a field experiment, we develop and test suitable mechanisms for facilitating the transfer of the crowd’s experience into scientific research questions. In doing so, we address the following two research questions: (1) What factors influence crowd contributors’ ability to generate research questions? (2) How do research questions generated by crowd members differ from research questions generated by scientists in terms of quality?

There are strong claims about the significant potential of people with experiential knowledge, i.e., sticky problem knowledge derived from one’s own practical experience and practices (Collins & Evans, 2002), to enhance the novelty and relevance of scientific research (e.g., Pols, 2014). Previous evidence that crowds with experiential knowledge (e.g., users in Poetz & Schreier, 2012) or “outsiders”/non-obvious individuals (Jeppesen & Lakhani, 2010) can outperform experts under certain conditions by bringing novel perspectives supports the assumption that the participation of non-scientists (i.e., crowd members) in scientific problem finding might complement scientists’ lack of experiential knowledge. Furthermore, by bringing in exactly these new perspectives, they might help overcome problems of fixation/inflexibility in cognitive-search processes among scientists (Acar & van den Ende, 2016). Thus, crowd members with (higher levels of) experiential knowledge are expected to be superior to scientists in identifying very novel and out-of-the-box research problems with high practical relevance. However, there are clear reasons to be skeptical: despite the advantage of possessing important experiential knowledge, the crowd lacks the scientific knowledge we assume to be required to formulate meaningful research questions. To study exactly how the transfer of crowd members’ experiential knowledge into science can be facilitated, we conducted two experimental studies in the context of traumatology (i.e., research on accidental injuries).
First, we conducted a large-scale online experiment (N=704) in collaboration with an international crowdsourcing platform to test the effect of two facilitating treatments on crowd members’ ability to formulate real research questions (study 1). We used a 2 (structuring knowledge/no structuring knowledge) x 2 (science knowledge/no science knowledge) between-subject experimental design. Second, we tested the same treatments in the field (study 2), i.e., in a crowdsourcing project in collaboration with the LBG Open Innovation in Science Center. We invited patients, caretakers, and medical professionals (e.g., surgeons, physical therapists, or nurses) concerned with accidental injuries to submit research questions using a customized online platform (https://tell-us.online/) to investigate the causal relationship between our treatments and different types and levels of experiential knowledge (N=118). An international jury of experts (i.e., journal editors in the field of traumatology) then assesses the quality of the submitted questions (from the online and field experiments) along several quality dimensions (i.e., clarity, novelty, scientific impact, practical impact, feasibility) in an online evaluation process. To assess the net effect of our treatments, we further include a random sample of research questions obtained from early-stage research papers (i.e., conference papers) in the expert evaluation (blind to the source) and compare them with the baseline groups of our experiments. We are currently finalizing the data collection…(More)”.
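The 2 x 2 between-subject design described above can be pictured with a minimal sketch; the condition names and assignment logic below are illustrative assumptions, not the authors’ actual study materials.

```python
# Minimal sketch (not the authors' code): random assignment to a
# 2 (structuring knowledge) x 2 (science knowledge) between-subject design.
import random

# The four experimental cells of the 2 x 2 design.
CONDITIONS = [
    {"structuring_knowledge": s, "science_knowledge": k}
    for s in (True, False)
    for k in (True, False)
]

def assign(participant_id: str) -> dict:
    """Randomly assign one participant to one of the four cells."""
    cell = random.choice(CONDITIONS)
    return {"participant": participant_id, **cell}

print(assign("P001"))
# e.g. {'participant': 'P001', 'structuring_knowledge': True, 'science_knowledge': False}
```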

How Cold Is That Library? There’s a Google Doc for That


Colleen Flaherty at Inside Higher Ed: “What a difference preparation makes when it comes to doing research in Arctic-level air-conditioned academic libraries (or ones that are otherwise freezing — or not air-conditioned at all). Luckily, Megan L. Cook, assistant professor of English at Colby College, published a crowdsourced document called “How Cold Is that Library?” ….

Cook, who was not immediately available for comment, has said the document was a group effort. Juliet Sperling, a faculty fellow in American art at Colby, credited her colleague’s “brilliance” but said the document was “generally inspired by conversations we’ve had as co-fellows” in the Andrew W. Mellon Society of Fellows in Critical Bibliography. The society brings together 60-some scholars of rare books and material texts from a variety of disciplinary or institutional approaches, she said, “so collectively, we’ve all spent quite a bit of time in libraries of various climates all over the world.” In addition to library temperatures, lighting and even humidity levels, the scholars trade research destinations’ photo policies and nearby eateries and drinking holes, among other tips. A spreadsheet opens up that resource to others, Sperling said. …(More)”.

Finding Wisdom in Politically Polarized Crowds


Eamon Duede at Nature Research: “We were seeing that the consumption of ideas seemed deeply related to political alignment, and because our group (Knowledge Lab) is concerned with understanding the social dynamics involved in the production of ideas, we began wondering whether and to what extent the political alignment of individuals contributes to a group’s ability to produce knowledge. A Wikipedia article is full of smuggled content, worked into a narrative by a diverse team of editors. Because those articles constitute knowledge, we were curious to know whether political polarization within those teams had an effect on the quality of that production. So, we decided to braid both strands of research together and look at the way in which individual political alignments and the polarization of the teams they form affect the quality of the work that is produced collaboratively on Wikipedia.

To answer this question, we turned not to the articles themselves, but to the immense history of articles on Wikipedia. Every edit to every article, no matter how insignificant, is documented and saved in Wikipedia’s astonishingly massive archives. And every edit to every article, no matter how insignificant, is evaluated for its relevance or validity by the vast community of editors, both robotic and human. Remarkable teamwork has gone into producing the encyclopedia. Some people edit randomly, simply cleaning typos, adding citations, or contributing graffiti and vandalism (I’ve experimented with this, and it gets painted over very quickly, no matter where you put it). Yet many people are genuinely purposeful in their work and contribute specifically to topics on which they have both interest and knowledge. Like gardeners, they tend and grow a handful of articles or a few broad topics. We walked through the histories of these gardens, looking back at who made contributions here and there, how much they contributed, and where. We thought that editors who make frequent contributions to pages associated with American liberalism would hold left-leaning opinions, and that frequent contributors to pages associated with conservatism would hold opinions on the right. This was a controversial hypothesis, and many in the Wikipedia community felt that perhaps the opposite would be true, with liberals correcting conservative pages and conservatives kindly returning the favor, like weeding or applying pesticide. But a survey we conducted of active Wikipedia editors found that a function of the relative number of bits they contributed to liberal versus conservative pages predicted more than a third of the probability that they identified as such and voted accordingly.
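A minimal sketch of that idea, assuming a simple relative-contribution score rather than the authors’ actual statistical model:

```python
# Illustrative only: score an editor's political alignment from the relative
# volume (in bits) of their contributions to liberal- vs conservative-aligned
# pages. The scale and the example numbers are assumptions, not the study's model.

def alignment_score(liberal_bits: int, conservative_bits: int) -> float:
    """Return a score in [-1, 1]: +1 = contributions only to liberal pages,
    -1 = only to conservative pages, 0 = balanced or no signal."""
    total = liberal_bits + conservative_bits
    if total == 0:
        return 0.0  # no contributions to either side: treat as unaligned
    return (liberal_bits - conservative_bits) / total

# An editor with 8,000 bits on liberal pages and 2,000 on conservative pages:
print(alignment_score(8_000, 2_000))  # 0.6
```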

Following this validation, we assigned a political alignment score to hundreds of thousands of editors by looking at where they made their contributions, and then examined the polarization within the teams of editors that produced hundreds of thousands of Wikipedia articles in the broad topic areas of politics, social issues, and science. We found that when most members of a team have the same political alignment, whether conservative, liberal, or “independent”, the quality of the Wikipedia pages they produce is not as high as that of pages produced by teams with polarized compositions of editors (Shi et al. 2019).
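As a rough illustration of “polarization within teams”, one could take the dispersion of those individual alignment scores across an article’s editors; the measure below is a stand-in, not necessarily the one used in Shi et al. (2019).

```python
# Hedged sketch: treat within-team polarization as the spread of editors'
# alignment scores (scores assumed to lie in [-1, 1], as in the sketch above).
from statistics import pstdev

def team_polarization(scores: list[float]) -> float:
    """~0 for an ideologically homogeneous team, larger for a balanced mix."""
    return pstdev(scores) if len(scores) > 1 else 0.0

homogeneous = [0.8, 0.7, 0.9, 0.85]    # mostly liberal-leaning editors
polarized = [0.8, -0.7, 0.9, -0.85]    # balanced mix of both sides
print(team_polarization(homogeneous))  # small value
print(team_polarization(polarized))    # large value
```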

The United States Senate is increasingly polarized, but largely balanced in its polarization. If the Senate were trying to write a Wikipedia article, would it produce a high-quality article? If its members were doing so on Wikipedia, following the norms of civility and balance inscribed in Wikipedia’s policies and guidelines, and committed to the production of knowledge rather than self-promotion, then the answer is probably “yes”. That is a surprising finding. We think that the reason for this is that the policies of Wikipedia work to suppress the kind of rhetoric and sophistry common in everyday discourse, not to mention toxic language and name-calling. Wikipedia’s policies are intolerant of discussion that could distort balanced consideration of the edit and topic under consideration, and, given that these policies shut down discourse that could bias proposed edits, teams with polarized viewpoints have to spend significantly more time discussing and debating the content that is up for consideration for inclusion in an article. These diverse viewpoints seem to bring out points and arguments between team members that sharpen and refine the quality of the content they can collectively agree to. With assumptions and norms of respect and civility, political polarization can be powerful and generative….(More)”

Crowdsourcing in medical research: concepts and applications


Paper by Joseph D. Tucker, Suzanne Day, Weiming Tang, and Barry Bayus: “Crowdsourcing shifts medical research from a closed environment to an open collaboration between the public and researchers. We define crowdsourcing as an approach to problem solving which involves an organization having a large group attempt to solve a problem or part of a problem, then sharing solutions. Crowdsourcing allows large groups of individuals to participate in medical research through innovation challenges, hackathons, and related activities. The purpose of this literature review is to examine the definition, concepts, and applications of crowdsourcing in medicine.

This multi-disciplinary review defines crowdsourcing for medicine, identifies conceptual antecedents (collective intelligence and open source models), and explores implications of the approach. Several critiques of crowdsourcing are also examined. Although several crowdsourcing definitions exist, there are two essential elements: (1) having a large group of individuals, including those with skills and those without skills, propose potential solutions; (2) sharing solutions through implementation or open access materials. The public can be a central force in contributing to formative, pre-clinical, and clinical research. A growing evidence base suggests that crowdsourcing in medicine can result in high-quality outcomes, broad community engagement, and more open science….(More)”

Crowdsourcing a Constitution


Case Study by Cities of Service: “Mexico City was faced with a massive task: drafting a constitution. Mayor Miguel Ángel Mancera, who oversaw the drafting and adoption of the 212-page document, hoped to democratize the process. He appointed a drafting committee made up of city residents and turned to the Laboratorio para la Ciudad (LabCDMX) to engage everyday citizens. LabCDMX conducted a comprehensive survey and employed the online platform Change.org to solicit ideas for the new constitution. Several petitioners without a legal or political background seized on the opportunity and made their voices heard with successful proposals on topics like green space, waterway recuperation, and LGBTI rights in a document that will have a lasting impact on Mexico City’s governance….(More)”.

Crowdsourced reports could save lives when the next earthquake hits


Charlotte Jee at MIT Technology Review: “When it comes to earthquakes, every minute counts. Knowing that one has hit—and where—can make the difference between staying inside a building and getting crushed, and running out and staying alive. This kind of timely information can also be vital to first responders.

However, the speed of early warning systems varies from country to country. In Japan and California, huge networks of sensors and seismic stations can alert citizens to an earthquake. But these networks are expensive to install and maintain. Earthquake-prone countries such as Mexico and Indonesia don’t have such an advanced or widespread system.

A cheap, effective way to help close this gap between countries might be to crowdsource earthquake reports and combine them with traditional detection data from seismic monitoring stations. The approach was described in a paper in Science Advances today.

The crowdsourced reports come from three sources: people submitting information using LastQuake, an app created by the Euro-Mediterranean Seismological Centre; tweets that refer to earthquake-related keywords; and the time and IP address data associated with visits to the EMSC website.

When this method was applied retrospectively to earthquakes that occurred in 2016 and 2017, the crowdsourced detections on their own were 85% accurate. Combining the technique with traditional seismic data raised accuracy to 97%. The crowdsourced system was faster, too. Around 50% of the earthquake locations were found in less than two minutes, a whole minute faster than with data provided only by a traditional seismic network.
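A hedged sketch of the fusion idea: flag a candidate event when crowd signals cluster in space and time, and raise confidence when a seismic detection agrees. All thresholds, field names, and the simple counting rule below are illustrative assumptions, not the published algorithm.

```python
# Illustrative sketch only: combine crowdsourced signals (LastQuake reports,
# earthquake-keyword tweets, EMSC website traffic) with seismic picks per region.
from dataclasses import dataclass

@dataclass
class CrowdSignal:
    source: str   # "lastquake", "twitter", or "web_traffic"
    region: str
    minute: int   # minutes since an arbitrary reference time

def detect(crowd: list[CrowdSignal], seismic_regions: set[str],
           region: str, window_start: int, window_len: int = 2,
           threshold: int = 30) -> str:
    """Return 'none', 'crowd-only', or 'confirmed' for one region and time window."""
    hits = sum(
        1 for s in crowd
        if s.region == region and window_start <= s.minute < window_start + window_len
    )
    if hits < threshold:
        return "none"
    return "confirmed" if region in seismic_regions else "crowd-only"

# Example: 40 crowd signals near one city in a 2-minute window, no seismic pick yet.
signals = [CrowdSignal("lastquake", "jakarta", 0) for _ in range(40)]
print(detect(signals, seismic_regions=set(), region="jakarta", window_start=0))
# -> "crowd-only"
```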

When EMSC has identified a suspected earthquake, it sends out alerts via its LastQuake app asking users nearby for more information: images, videos, descriptions of the level of tremors, and so on. This can help assess the level of damage for first responders….(More)”.

Advancing Computational Biology and Bioinformatics Research Through Open Innovation Competitions


HBR Working Paper by Andrea Blasco et al: “Open data science and algorithm development competitions offer a unique avenue for rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research where the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that lead to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research….(More)”.