Wired: “I’m no neuroscientist, and yet, here I am at my computer attempting to reconstruct a neural circuit of a mouse’s retina. It’s not quite as difficult and definitely not as boring as it sounds. In fact, it’s actually pretty fun, which is a good thing considering I’m playing a videogame.
Called EyeWire, the browser-based game asks players to map the connections between retinal neurons by coloring in 3-D slices of the brain. Much like any other game out there, being good at EyeWire earns you points, but the difference is that the data you produce during gameplay doesn’t just get you on a leader board—it’s actually used by scientists to build a better picture of the human brain.
Created by neuroscientist Sebastian Seung’s lab at MIT, EyeWire basically gamifies the professional research Seung and his collaborators do on a daily basis. Seung is studying the connectome, the hyper-complex tangle of connections among neurons in the brain.”
Can We Build A Kickstarter For Cancer?
Paul Howard in Forbes: “Starting your own band, writing your first novel, or re-publishing your favorite ‘80s tabletop RPG are all cool goals. You can do them all on Kickstarter. What would be cooler?
How about funding a virtual biotech company with one goal: Saving or extending the life of a cancer patient who doesn’t respond to “standard of care” treatments….
The Cancer Commons approach – a distributed framework for empowering patients and learning from every patient/treatment combination – breaks down traditional distinctions between clinical trials and patient treatment in the “real world.” Instead of developing treatments in a lab and then testing them on randomized patients in clinical trials (designed to benefit future patients), researchers would apply the latest scientific knowledge and tools to help each patient achieve the best possible outcome today based on what we know – or think we can predict – about a molecular subtype of cancer….
We’ll need more than money to power a Kickstarter-for-cancer movement. We’ll need to encourage companies – from Big Pharma to “small” biotechs – to participate in distributed, Bayesian trials where new biomarkers or combinations of biomarkers are tested in patients with particular molecular profiles. And the FDA is going to have to be convinced that the system is going to generate high quality data that benefits patients, not sell them snake-oil cures.
In return for companies making their compound libraries and experimental drugs available for the “virtual biotechs” launched by cancer patients and their families, there should be a regulatory path established to take the most promising drugs and drug combinations to market.”
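The “distributed, Bayesian trials” the excerpt mentions refer to adaptive designs that update a belief about a treatment's response rate as each patient outcome arrives, rather than waiting for a fixed-size randomized trial to finish. A minimal sketch of that updating step, using a conjugate Beta-Binomial model with purely illustrative patient outcomes (the drug, subtype, and numbers are hypothetical, not from the article):

```python
def update_beta(alpha, beta, responded):
    """Conjugate Beta-Binomial update: fold in one patient outcome.

    alpha counts responders (plus prior), beta counts non-responders.
    """
    return (alpha + 1, beta) if responded else (alpha, beta + 1)

# Flat Beta(1, 1) prior on the response rate of a hypothetical drug
# in one molecular subtype of cancer.
alpha, beta = 1.0, 1.0

# Illustrative outcomes for five patients (True = responded).
outcomes = [True, False, True, True, False]
for r in outcomes:
    alpha, beta = update_beta(alpha, beta, r)

# Posterior mean after 3 responses in 5 patients: 4 / (4 + 3)
posterior_mean = alpha / (alpha + beta)
```

In an adaptive trial, the posterior after each patient would feed back into the next treatment decision, which is what lets every patient/treatment combination contribute evidence immediately instead of only after the trial closes.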
How to do scientific research without even trying (much)
Ars Technica: “To some extent, scientific research requires expensive or specialized equipment—some work just requires a particle accelerator or a virus containment facility. But plenty of other research has very simple requirements: a decent camera, a bit of patience, or being in the right place at the right time. Since that sort of work is open to anyone, getting the public involved can be a huge win for scientists, who can then obtain much more information than they could have gathered on their own.
A group of Spanish researchers has now written an article that is a mixture of praise for this sort of citizen science, a resource list for people hoping to get involved, and a how-to guide for anyone inspired to join in. The researchers focus on their own area of interest—insects, specifically the hemiptera or “true bugs”—but a lot of what they say applies to other areas of research.
…
The paper also lists a variety of region-specific sites that focus on insect identification and tracking, such as ones for the UK, Belgium, and Slovenia. But a dedicated system isn’t required for this sort of resource. In the researchers’ home base on the Iberian Peninsula, insects are tracked via a Flickr group. (If you’re interested in insect research and based in the US, you can also find dozens of projects at the SciStarter site.) We’ve uploaded some of the most amazing images into a gallery that accompanies this article.
ZooKeys, 2013. DOI: 10.3897/zookeys.319.4342”
The Power of Hackathons
Woodrow Wilson International Center for Scholars: “The Commons Lab of the Science and Technology Innovation Program is proud to announce the release of The Power of Hackathons: A Roadmap for Sustainable Open Innovation. Hackathons are collaborative events that have long been part of programmer culture, where people gather in person, online or both to work together on a problem. This could involve creating an application, improving an existing one or testing a platform.
In recent years, government agencies at multiple levels have started holding hackathon events of their own. For this brief, author Zachary Bastian interviewed agency staff, hackathon planners and hackathon participants to better understand how these events can be structured. The fundamental lesson was that a hackathon is not a panacea, but instead should be part of a broader open data and innovation centric strategy.
The full brief can be found here”
Why you should never trust a data visualisation
John Burn-Murdoch in The Guardian: “An excellent blogpost has been receiving a lot of attention over the last week. Pete Warden, an experienced data scientist and author for O’Reilly on all things data, writes:
The wonderful thing about being a data scientist is that I get all of the credibility of genuine science, with none of the irritating peer review or reproducibility worries … I thought I was publishing an entertaining view of some data I’d extracted, but it was treated like a scientific study.
This is an important acknowledgement of a very real problem, but in my view Warden has the wrong target in his crosshairs. Data presented in any medium is a powerful tool and must be used responsibly, but it is when information is expressed visually that the risks are highest.
The central example Warden uses is his visualisation of Facebook friend networks across the United States, which proved extremely popular and was even cited in the New York Times as evidence for growing social division.
As he explains in his post, the methodology behind his underlying network graph is perfectly defensible, but the subsequent clustering process was “produced by me squinting at all the lines, coloring in some areas that seemed more connected in a paint program, and picking silly names for the areas”. The exercise was only ever intended as a bit of fun with a large and interesting dataset, so there really shouldn’t be any problem here.
But there is: humans are visual creatures. Peer-reviewed studies have shown that we can consume information more quickly when it is expressed in diagrams than when it is presented as text.
Even something as simple as colour scheme can have a marked impact on the perceived credibility of information presented visually – often a considerably more marked impact than the actual authority of the data source.
Another great example of this phenomenon was the Washington Post’s ‘map of the world’s most and least racially tolerant countries’, which went viral back in May of this year. It was widely accepted as an objective, scientific piece of work, despite a number of social scientists identifying flaws in the methodology and the underlying data itself.”
Data Science for Social Good
Data Science for Social Good: “By analyzing data from police reports to website clicks to sensor signals, governments are starting to spot problems in real-time and design programs to maximize impact. More nonprofits are measuring whether or not they’re helping people, and experimenting to find interventions that work.
None of this is inevitable, however.
We’re just realizing the potential of using data for social impact and face several hurdles to its widespread adoption:
- Most governments and nonprofits simply don’t know what’s possible yet. They have data – but often not enough and maybe not the right kind.
- There are too few data scientists out there – and too many spending their days optimizing ads instead of bettering lives.
To make an impact, we need to show social good organizations the power of data and analytics. We need to work on analytics projects that have high social impact. And we need to expose data scientists to the problems that really matter.
The fellowship
That’s exactly why we’re doing the Eric and Wendy Schmidt Data Science for Social Good summer fellowship at the University of Chicago.
We want to bring three dozen aspiring data scientists to Chicago, and have them work on data science projects with social impact.
Working closely with governments and nonprofits, fellows will take on real-world problems in education, health, energy, transportation, and more.
Over the next three months, they’ll apply their coding, machine learning, and quantitative skills, collaborate in a fast-paced atmosphere, and learn from mentors in industry, academia, and the Obama campaign.
The program is led by a strong interdisciplinary team from the Computation Institute and the Harris School of Public Policy at the University of Chicago.”
‘Medical Instagram’ helps build a library of reference photos for doctors
Springwise: “The power of the visual sharing that makes platforms such as Instagram so popular has been harnessed by retailers like Ask CT Food to share knowledge about cooking, but could the same be done for the medical world? Figure1 enables health professionals to upload and share photos of conditions, creating online discussion as well as crowdsourcing a database of reference images.
Developed by healthcare tech startup Movable Science, the platform is designed in a similar vein to Instagram and enables medical professionals to create their own feed of images from the cases they deal with. In order to protect patients’ identities, the app uses facial recognition to block out faces, while users can add their own marks to cover up other identifiable marks. They can also add pointers and annotations, as well as choosing who sees it, before uploading the image. Photos can be tagged with relevant terms to allow the community to easily find them through search, and others can comment on the images, fostering discussion among users. Images can also be starred, which acts simultaneously as an indication of quality as well as enabling users to save useful images for later reference. …
Although Instagram was developed with the broad purpose of entertainment and social sharing, Figure1 has tweaked the platform’s functions to provide a tool that could help doctors and students share their knowledge and learn from others in an engaging way…”
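The privacy step described above, blocking out faces and other identifiable marks before a photo is uploaded, can be sketched in a few lines. This is a toy illustration, not Figure1's actual implementation: it assumes the face-detection step has already produced bounding boxes and simply blacks out those regions of the image array.

```python
import numpy as np

def redact(image, boxes):
    """Black out rectangular regions (e.g. detected faces) before upload.

    image: H x W x 3 uint8 array; boxes: list of (y0, y1, x0, x1) pixel
    bounds. Detection itself is assumed to happen elsewhere.
    """
    out = image.copy()  # never modify the original photo in place
    for y0, y1, x0, x1 in boxes:
        out[y0:y1, x0:x1] = 0
    return out

# Toy all-white 8x8 "photo" with one hypothetical face box.
photo = np.full((8, 8, 3), 255, dtype=np.uint8)
redacted = redact(photo, [(2, 5, 2, 5)])
```

In a real app the boxes would come from a face detector, and users would add further manual boxes for tattoos, ID bracelets, and other identifying marks before the image ever leaves the device.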
Let’s Shake Up the Social Sciences
Nicholas Christakis in The New York Times: “TWENTY-FIVE years ago, when I was a graduate student, there were departments of natural science that no longer exist today. Departments of anatomy, histology, biochemistry and physiology have disappeared, replaced by innovative departments of stem-cell biology, systems biology, neurobiology and molecular biophysics. Taking a page from Darwin, the natural sciences are evolving with the times. The perfection of cloning techniques gave rise to stem-cell biology; advances in computer science contributed to systems biology. Whole new fields of inquiry, as well as university departments and majors, owe their existence to fresh discoveries and novel tools.
In contrast, the social sciences have stagnated. They offer essentially the same set of academic departments and disciplines that they have for nearly 100 years: sociology, economics, anthropology, psychology and political science. This is not only boring but also counterproductive, constraining engagement with the scientific cutting edge and stifling the creation of new and useful knowledge. Such inertia reflects an unnecessary insecurity and conservatism, and helps explain why the social sciences don’t enjoy the same prestige as the natural sciences.
One reason citizens, politicians and university donors sometimes lack confidence in the social sciences is that social scientists too often miss the chance to declare victory and move on to new frontiers. Like natural scientists, they should be able to say, “We have figured this topic out to a reasonable degree of certainty, and we are now moving our attention to more exciting areas.” But they do not.”
Transforming Our Conversation of Information Architecture with Structure
Nathaniel Davis: “Information architecture has been characterized as both an art and a science. Because there’s more evidence of the former than the latter, the academic and research community is justified in hesitating to give the practice of information architecture more attention.
If you probe the history of information architecture for the web, its foundation appears to be rooted in library science. But you’ll also find a pattern of borrowing methods and models from many other disciplines like architecture and urban planning, linguistics and ethnography, cognition and psychology, to name a few. This history leads many to wonder if the practice of information architecture is anything other than an art of induction for solving problems of architecture and design for the web…
Certainly, there is one concept that has persisted under the radar for many years with limited exploration. It is littered throughout countless articles, books and papers and is present in the most cited IA practice definitions. It may be the single concept that truly bridges practitioner and academic interests around a central and worthwhile topic. That concept is structure.”
Crowdsourcing—Harnessing the Masses to Advance Health and Medicine
A systematic review of the literature in the Journal of General Internal Medicine: “Crowdsourcing research allows investigators to engage thousands of people to provide either data or data analysis. However, prior work has not documented the use of crowdsourcing in health and medical research. We sought to systematically review the literature to describe the scope of crowdsourcing in health research and to create a taxonomy to characterize past uses of this methodology for health and medical research.
Twenty-one health-related studies utilizing crowdsourcing met eligibility criteria. Four distinct types of crowdsourcing tasks were identified: problem solving, data processing, surveillance/monitoring, and surveying. …
Utilizing crowdsourcing can improve the quality, cost, and speed of a research project while engaging large segments of the public and creating novel science. Standardized guidelines are needed on crowdsourcing metrics that should be collected and reported to provide clarity and comparability in methods.”