Citizen Science in the Unexplored Terrain of the Brain


Aaron Krol at Bio-IT World: “The game is simple. On the left-hand side of the screen you see a cube containing a misshapen 3D figure, a bit like a tree branch with a gall infestation. To the right is a razor-thin cross-section of the cube, a grainy image of overlapping gray blobs. Clicking on a blob colors it in, like using the paint bucket tool in MS Paint, while also sending colorful extensions out from the branch to the left. Working your way through 256 of these cross-sections, your job is to extend the branch through the cube, identifying which blobs are continuous with the branch and which are nearby distractions.
It hardly sounds like a game at all, but strange to say, there’s something very compelling about playing EyeWire. Maybe it’s watching the branches grow and fork as you discover new connections. Maybe it’s how quickly you can rack up progress, almost not noticing time go by as you span your branches through cube after cube.
“It draws you in,” says Nikitas Serafetinidis ― or Nseraf, as he’s known in-game. “There’s an unexplained component that makes this game highly addictive.”
Serafetinidis is the world record holder in EyeWire, a game whose players are helping to build a three-dimensional map of brain cells in the retina. The images in EyeWire are in fact photos taken with an electron microscope at the Max Planck Institute of Medical Research in Heidelberg: each one represents a tiny sliver of a mouse’s retina, just 20 nanometers thick. The “blobs” are thin slices of closely adjoined neurons, and the “branch” shows the path of a single cell, which can cross through hundreds of thousands of those images….(More)”
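The “paint bucket” step Krol describes is, at its core, a tolerance-based flood fill over a single grayscale cross-section. Below is a minimal sketch of that idea in Python; the function name, seed format, and tolerance value are illustrative assumptions, not EyeWire’s actual code.

```python
# A toy flood fill: grow a mask outward from the clicked pixel, stopping
# wherever the gray level differs too much from the seed (a blob edge).
from collections import deque

import numpy as np

def flood_fill(slice_2d, seed, tolerance=10):
    """Return a boolean mask of the blob connected to the clicked seed.

    slice_2d:  2D uint8 array, one grayscale EM cross-section
    seed:      (row, col) of the player's click
    tolerance: max gray-level difference still counted as the same blob
    """
    h, w = slice_2d.shape
    target = int(slice_2d[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if abs(int(slice_2d[r, c]) - target) > tolerance:
            continue  # hit a blob boundary, stop the fill here
        mask[r, c] = True
        # 4-connected neighbors keep the fill from leaking diagonally
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask
```

Stacking the masks from all 256 cross-sections and linking blobs that overlap between consecutive slices is what turns these per-slice fills into the 3D branch the player watches grow.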

Facebook’s Filter Study Raises Questions About Transparency


Will Knight in MIT Technology Review: “Facebook is an enormously valuable source of information about social interactions.

Facebook’s latest scientific research, about the way it shapes the political perspectives users are exposed to, has led some academics to call for the company to be more open about what it chooses to study and publish.

This week the company’s data science team published a paper in the prominent journal Science confirming what many had long suspected: that the network’s algorithms filter out some content that might challenge a person’s political leanings. However, the paper also suggested that the effect was fairly small, and less significant than a user’s own filtering behavior (see “Facebook Says You Filter News More Than Its Algorithm Does”).
Several academics have pointed to limitations of the study, such as the fact that the only people studied were those who had indicated their political affiliation on their Facebook pages. Critics point out that those users might behave differently from everyone else. But beyond that, a few academics have noted a potential tension between Facebook’s desire to explore the scientific value of its data and its own corporate interests….

In response to the controversy over that study, Facebook’s chief technology officer, Mike Schroepfer, wrote a Facebook post that acknowledged people’s concerns and described new guidelines for its scientific research. “We’ve created a panel including our most senior subject-area researchers, along with people from our engineering, research, legal, privacy and policy teams, that will review projects falling within these guidelines,” he wrote….(More)

Toward a Research Agenda on Opening Governance


Members of the MacArthur Research Network on Opening Governance at Medium: “Society is confronted by a number of increasingly complex problems — inequality, climate change, access to affordable healthcare — that often seem intractable. Existing societal institutions, including government agencies, corporations and NGOs, have repeatedly proven themselves unable to tackle these problems in their current composition. Unsurprisingly, trust in existing institutions is at an all-time low.

At the same time, advances in technology and sciences offer a unique opportunity to redesign and reinvent our institutions. Increased access to data may radically transform how we identify problems and measure progress. Our capacity to connect with citizens could greatly increase the knowledge and expertise available to solve big public problems. We are witnessing, in effect, the birth of a new paradigm of governance — labeled “open governance” — where institutions share and leverage data, pursue collaborative problem-solving, and partner with citizens to make better decisions. All of these developments offer a potential solution to the crisis of trust and legitimacy confronting existing institutions.

But for all the promise of open governance, we actually know very little about its true impact, or about the conditions and contingencies required for institutional innovation to really work. Even less is known about the capabilities that institutions must develop in order to take advantage of new technologies and innovative practices. The lack of evidence is holding back positive change. It is limiting our ability to improve people’s lives.

The MacArthur Foundation Research Network on Opening Governance seeks to address these shortcomings. Convened and organized by the GovLab, and made possible by a three-year, $5 million grant from the John D. and Catherine T. MacArthur Foundation, the Network seeks to build an empirical foundation that will help us understand how democratic institutions are being (and should be) redesigned, and how this in turn influences governance. At its broadest level, the Network seeks to create a new science of institutional innovation.

In what follows, we outline a research agenda and a set of deliverables for the coming years that can deepen our understanding of “open governance.” More specifically, the agenda below seeks:

  • to frame and contextualize the areas of common focus among the members;
  • to guide the targeted advancement of Network activities;
  • to catalyze opportunities for further collaboration and knowledge exchange between Network members and those working in the field at large.

A core objective of the Network is to conduct research based on, and that has relevance for, real-world institutions. Any research that is solely undertaken in the lab, far from the actual happenings the Network seeks to influence and study, is deemed to be insufficient. As such, the Network is actively developing flexible, scalable methodologies to help analyze the impact of opening governance. In the spirit of interdisciplinarity and openness that defines the Network, these methodologies are being developed collaboratively with partners from diverse disciplines.

The agenda below seeks to provide a framework for those outside the Network — including those who would not necessarily characterize their research as falling under the banner of opening governance — to undertake empirical, agile research into the redesign and innovation of governance processes and the solving of public problems….(More)”

Global Diseases, Collective Solutions


New paper by Ben Ramalingam: “Environmental disruption, mass urbanization and the runaway globalization of trade and transport have created ideal conditions for infectious diseases to emerge and spread around the world. Rapid spill-overs from local into regional and global crises reveal major gaps in the global system for dealing with infectious diseases.

A number of Global Solution Networks have emerged that address failures of systems, of institutions and of markets. At their most ambitious, they aim to change the rules of the global health game—opening up governance structures, sharing knowledge and science, developing new products, creating markets—all with the ultimate aim of preventing and treating diseases, and saving lives.

These networks have emerged in an ad-hoc and opportunistic fashion. More strategic thinking and investment are needed to build networking competencies and to identify opportunities for international institutions to best leverage new forms of collaboration and partnership. (Read the paper here).”

Data Fusion Heralds City Attractiveness Ranking


Emerging Technology From the arXiv: “The ability of any city to attract visitors is an important metric for town planners, businesses based on tourism, traffic planners, residents, and so on. And there are increasingly varied ways of measuring it, thanks to the growing volumes of city-related data generated by social media and location-based services.

So it’s only natural that researchers would like to draw these data sets together to see what kind of insight they can get from this form of data fusion.

And so it has turned out thanks to the work of Stanislav Sobolevsky at MIT and a few buddies. These guys have fused three wildly different data sets related to the attractiveness of a city, allowing them to rank these places and to understand why people visit them and what they do when they get there.

The work focuses exclusively on cities in Spain using data that is relatively straightforward to gather. The first data set consists of the number of credit and debit card transactions carried out by visitors to cities throughout Spain during 2011. This includes each card’s country of origin, which allows Sobolevsky and co to count only those transactions made by foreign visitors—a total of 17 million anonymized transactions from 8.6 million foreign visitors from 175 different countries.

The second data set consists of over 3.5 million photos and videos taken in Spain and posted to Flickr by people living in other countries. These pictures were taken between 2005 and 2014 by 16,000 visitors from 112 countries.

The last data set consists of around 700,000 geotagged tweets posted in Spain during 2012. These were posted by 16,000 foreign visitors from 112 countries.

Finally, the team defined a city’s attractiveness, at least for the purposes of this study, as the total number of pictures, tweets and card transactions that took place within it….
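Since the study’s attractiveness measure is simply the total activity recorded across the three fused sources, a toy version is easy to sketch. The snippet below assumes pandas DataFrames with a hypothetical “city” column, one row per event; the paper itself goes further and examines how this total scales with city size.

```python
# A toy version of the attractiveness measure: count foreign-visitor
# events per city across the three sources, then rank by the total.
import pandas as pd

def rank_attractiveness(transactions, photos, tweets):
    """Each input: DataFrame with a 'city' column, one row per event
    (card transaction, Flickr photo/video, or geotagged tweet)."""
    events = pd.concat([
        transactions[["city"]].assign(source="cards"),
        photos[["city"]].assign(source="flickr"),
        tweets[["city"]].assign(source="twitter"),
    ])
    breakdown = (events.groupby(["city", "source"]).size()
                       .unstack(fill_value=0))
    breakdown["total"] = breakdown.sum(axis=1)  # the attractiveness proxy
    return breakdown.sort_values("total", ascending=False)
```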

That’s interesting work that shows how the fusion of big data sets can provide insights into the way people use cities. It has its limitations, of course. The study does not address the reasons why people find cities attractive or what draws them there in the first place. For example, are they there for tourism, for business, or for some other reason? That would require more specialized data.

But it does provide a general picture of attractiveness that could be a start for more detailed analyses. As such, this work is just a small part of a new science of cities based on big data, but one that shows how much is becoming possible with just a little number crunching.

Ref: arxiv.org/abs/1504.06003 : Scaling of city attractiveness for foreign visitors through big data of human economic and social media activity”

Preparing for Responsible Sharing of Clinical Trial Data


Paper by Michelle M. Mello et al in the New England Journal of Medicine: “Data from clinical trials, including participant-level data, are being shared by sponsors and investigators more widely than ever before. Some sponsors have voluntarily offered data to researchers, some journals now require authors to agree to share the data underlying the studies they publish, the Office of Science and Technology Policy has directed federal agencies to expand public access to data from federally funded projects, and the European Medicines Agency (EMA) and U.S. Food and Drug Administration (FDA) have proposed the expansion of access to data submitted in regulatory applications. Sharing participant-level data may bring exciting benefits for scientific research and public health but may also have unintended consequences. Thus, expanded data sharing must be pursued thoughtfully.

We provide a suggested framework for broad sharing of participant-level data from clinical trials and related technical documents. After reviewing current data-sharing initiatives, potential benefits and risks, and legal and regulatory implications, we propose potential governing principles and key features for a system of expanded access to participant-level data and evaluate several governance structures….(More)”

Open Trials


Open Knowledge today announced plans to develop Open Trials, an open, online database of information about the world’s clinical research trials. Funded by The Laura and John Arnold Foundation and designed to increase transparency and improve access to research, the project will be directed by Dr. Ben Goldacre, an internationally known leader on clinical transparency.

Open Trials will aggregate information from a wide variety of existing sources in order to provide a comprehensive picture of the data and documents related to all trials of medicines and other treatments around the world. Conducted in partnership with the Center for Open Science and supported by the Center’s Open Science Framework, the project will also track whether essential information about clinical trials is transparent and publicly accessible so as to improve understanding of whether specific treatments are effective and safe.

“There have been numerous positive statements about the need for greater transparency on information about clinical trials, over many years, but it has been almost impossible to track and audit exactly what is missing,” Dr. Goldacre, the project’s Chief Investigator and a Senior Clinical Research Fellow in the Centre for Evidence Based Medicine at the University of Oxford, explained. “This project aims to draw together everything that is known around each clinical trial. The end product will provide valuable information for patients, doctors, researchers, and policymakers—not just on individual trials, but also on how whole sectors, researchers, companies, and funders are performing. It will show who is failing to share information appropriately, who is doing well, and how standards can be improved.”

Patients, doctors, researchers, and policymakers use the evidence from clinical trials to make informed decisions about which treatments are best. But studies show that roughly half of all clinical trial results are not published, with positive results published twice as often as negative results. In addition, much of the important information about the methods and findings of clinical trials is only made available outside the normal indexes of academic journals….

Open Trials will help to automatically identify which trial results have not been disclosed by matching registry data on trials that have been conducted against documents containing trial results. This will facilitate routine public audit of undisclosed results. It will also improve discoverability of other documents around clinical trials, which will be indexed and, in some cases, hosted. Lastly, it will help improve recruitment for clinical trials by making information and commentary on ongoing trials more accessible….(More)”
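The matching step can be pictured as a set difference: identifiers of registered trials minus identifiers that appear in results documents. The sketch below is a deliberately simplified illustration, assuming ClinicalTrials.gov-style NCT numbers and plain-text documents; matching across many registries and document formats, as OpenTrials intends, is necessarily more involved.

```python
# Toy audit: any registered trial ID never mentioned in a results
# document is flagged as potentially undisclosed.
import re

NCT_ID = re.compile(r"NCT\d{8}")  # ClinicalTrials.gov-style identifier

def find_undisclosed(registry_ids, documents):
    """registry_ids: set of trial IDs from a registry dump
    documents:    iterable of free-text results documents"""
    ids_with_results = set()
    for doc in documents:
        ids_with_results.update(NCT_ID.findall(doc))
    return registry_ids - ids_with_results

registry = {"NCT00000001", "NCT00000002", "NCT00000003"}
docs = ["Results for trial NCT00000002 show no benefit over placebo."]
print(find_undisclosed(registry, docs))
# {'NCT00000001', 'NCT00000003'}: candidates for public audit
```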

A Process Model for Crowdsourcing Design: A Case Study in Citizen Science


Chapter by Kazjon Grace et al in Design Computing and Cognition ’14: “Crowdsourcing design has been applied in various areas of graphic design, software design, and product design. This paper draws on those experiences and on research in diversity, creativity and motivation to present a process model for crowdsourcing experience design. Crowdsourcing experience design for volunteer online communities serves two purposes: to increase the motivation of participants by making them stakeholders in the success of the project, and to increase the creativity of the design by increasing the diversity of expertise beyond experts in experience design. Our process model for crowdsourcing design extends the meta-design architecture, in which a platform for online communities is designed to be iteratively re-designed by its users. We describe how our model has been deployed and adapted to a citizen science project where nature preserve visitors can participate in the design of a system called NatureNet. The major contribution of this paper is a model for crowdsourcing experience design and a case study of how we have deployed it for the design and development of NatureNet….(More)”

These researchers want to turn phones into earthquake detectors


Russell Brandom in The Verge: “Early warning on earthquakes can help save lives, but many countries can’t afford the systems that provide it. That’s why scientists are turning to a sensor that is already widespread in many countries: the smartphone. A single smartphone makes for a crappy earthquake sensor — but get enough of them reporting, and it won’t matter.

A new study, published today in Science Advances, says that the right network of cell phones might be able to substitute for modern seismograph arrays, providing a crucial early warning in the event of a quake. The study looks at historical earthquake data and modern smartphone hardware (based on the Nexus 5) and comes away with a map of how a smartphone-based earthquake detector might work. As it turns out, a phone’s GPS is more powerful than you might think.

Early warning systems are designed to pick up the first tremors of an earthquake, projecting where the incoming quake is centered and how strong it’s likely to be. When they work, the systems are able to give citizens and first responders crucial time to prepare for the quake. There are already seismograph-based systems in place in California, Mexico, and Japan, but poorer countries often don’t have the means to implement and maintain them. This new method wouldn’t be as good as most scientific earthquake sensors, but those can cost tens of thousands of dollars each, making a smartphone-based sensor a lot cheaper. For countries that can’t afford a seismograph-based system (which includes much of the Southern Hemisphere), it could make a crucial difference in catching quakes early.

A modern phone has almost everything you could want in an earthquake sensor: specifically, a GPS-powered location sensor, an accelerometer, and multiple data connections. There are also a lot of them, even in poor countries, so a distributed system could count on getting data points from multiple angles….(More)”
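One way to picture the crowd logic: a single phone’s GPS reading is too noisy to trust, but many phones in the same small area reporting a sudden displacement within the same few seconds is a strong quake signal. The sketch below illustrates that aggregation; the thresholds, grid size, and report format are assumptions for illustration, not the paper’s actual algorithm.

```python
# Toy crowd detector: ignore single-phone noise, alert only when many
# distinct phones in one space-time bucket report a large displacement.
from collections import defaultdict

DISPLACEMENT_CM = 5.0  # assumed per-phone trigger threshold
MIN_PHONES = 100       # assumed number of phones that must agree

def detect_quakes(reports, window_s=5.0):
    """reports: iterable of (phone_id, t_seconds, lat, lon, disp_cm)."""
    buckets = defaultdict(set)
    for phone_id, t, lat, lon, disp in reports:
        if disp < DISPLACEMENT_CM:
            continue  # below what one phone can tell apart from noise
        # Coarse bucket: roughly 0.1-degree grid cell, short time window
        key = (round(lat, 1), round(lon, 1), int(t // window_s))
        buckets[key].add(phone_id)
    # Many independent phones agreeing is the real signal
    return [key for key, phones in buckets.items()
            if len(phones) >= MIN_PHONES]
```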

The big medical data miss: challenges in establishing an open medical resource


Eric J. Topol in Nature: “I call for an international open medical resource to provide a database for every individual’s genomic, metabolomic, microbiomic, epigenomic and clinical information. This resource is needed in order to facilitate genetic diagnoses and transform medical care.

Laurie Becklund was a noted journalist who died in February 2015 at age 66 from breast cancer. Soon thereafter, the Los Angeles Times published her op-ed entitled “As I lay dying” (Ref. 1). She lamented, “We are each, in effect, one-person clinical trials. Yet the knowledge generated from those trials will die with us because there is no comprehensive database of metastatic breast cancer patients, their characteristics and what treatments did and didn’t help them”. She went on to assert that, in the era of big data, the lack of such a resource is “criminal”, and she is absolutely right….

Around the same time as this important op-ed, the MIT Technology Review published their issue entitled “10 Breakthrough Technologies 2015” and on the list was the “Internet of DNA” (Ref. 2). While we are often reminded that the world we live in is becoming the “Internet of Things”, I have not seen this terminology applied to DNA before. The article on the “Internet of DNA” decried, “the unfolding calamity in genomics is that a great deal of life-saving information, though already collected, is inaccessible”. It called for a global network of millions of genomes and cited the Matchmaker Exchange as a frontrunner. For this international initiative, a growing number of research and clinical teams have come together to pool and exchange phenotypic and genotypic data for individual patients with rare disorders, in order to share this information and assist in the molecular diagnosis of individuals with rare diseases….

…an Internet of DNA — or what I have referred to as a massive, open, online medicine resource (MOOM) — would help to quickly identify the genetic cause of the disorder (Ref. 4) and, in the process of doing so, precious guidance for prevention, if necessary, would become available for such families who are currently left in the lurch as to their risk of suddenly dying.

So why aren’t such MOOMs being assembled? ….

There has also been much discussion of privacy concerns, namely that patients might be unwilling to participate in a massive medical information resource. However, multiple global consumer surveys have shown that more than 80% of individuals are ready to share their medical data provided that they are anonymized and their privacy maximally assured (Ref. 4). Indeed, just 24 hours into Apple’s ResearchKit initiative, a smartphone-based medical research programme, there were tens of thousands of patients with Parkinson disease, asthma or heart disease who had signed on. Some individuals are even willing to be “open source” — that is, to make their genetic and clinical data fully available with free access online, without any assurance of privacy. This willingness can be seen in the participants of the recently launched Open Humans initiative. Along with the Personal Genome Project, Go Viral and American Gut have joined in this initiative. Still, studies suggest that most individuals would only agree to be medical research participants if their identities were not attainable. Unfortunately, to date, little has been done to protect individual medical privacy, for which there are both promising new data-protection technological approaches (Ref. 4) and the need for additional governmental legislation.

This leaves us with perhaps the major obstacle that is holding back the development of MOOMs — researchers. Even with big, team-science research projects pulling together hundreds of investigators and institutions throughout the world, such as the Global Alliance for Genomics and Health (GA4GH), the data obtained clinically are just as Laurie Becklund asserted in her op-ed — “one-person clinical trials” (Ref. 1). While undertaking the construction of a MOOM is a huge endeavour, there is little motivation for researchers to take on this task, as it currently offers no academic credit and has no funding source. But the transformative potential of MOOMs to improve medical care is extraordinary. Rather than having the knowledge die with each of us, the time has come to take down the walls of academic medical centres and health-care systems around the world, and create a global medical knowledge resource that leverages each individual’s information to help one another…(More)”