Linux Foundation Collaboration Gets Biological


eWeek: “The Linux Foundation is growing its roster of collaboration projects by expanding from the physical into the biological realm with OpenBEL (the open Biological Expression Language). The Linux Foundation, best known as the organization that helps bring Linux vendors and developers together, is also growing its expertise as a facilitator for collaborative development projects…
OpenBEL got its start in June 2012 after being open-sourced by biotech firm Selventa. The effort now includes the participation of Foundation Medicine, AstraZeneca, the Fraunhofer Institute, Harvard Medical School, Novartis, Pfizer and the University of California at San Diego.
BEL offers researchers a language to clearly express scientific findings from the life sciences in a format that can be understood by computing infrastructure…
The Linux Foundation currently hosts a number of different collaboration projects, including the Xen virtualization project, the OpenDaylight software-defined networking effort, Tizen for mobile phone development, and OpenMAMA for financial services information, among others.
The OpenBEL project will be similar to existing collaboration projects in that the contributors to the project want to accelerate their work through collaborative development, McPherson explained.”
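To make the idea concrete, here is a minimal sketch of what a BEL-style statement looks like and how software can pick it apart. The specific statement and the parsing approach are illustrative assumptions, not part of the OpenBEL codebase; the OpenBEL specification defines the authoritative grammar.

```python
import re

# Illustrative BEL-style statement: a subject term, a causal relationship,
# and an object term. The particular proteins are an assumed example.
statement = "p(HGNC:AKT1) increases p(HGNC:MTOR)"

# Minimal parse into (subject, relationship, object).
pattern = re.compile(r"^(?P<subject>\w+\(.+?\))\s+(?P<rel>\w+)\s+(?P<obj>\w+\(.+?\))$")
match = pattern.match(statement)
if match:
    print(match.group("subject"))  # p(HGNC:AKT1)
    print(match.group("rel"))      # increases
    print(match.group("obj"))      # p(HGNC:MTOR)
```

Because statements in this shape are machine-readable, findings from different labs can be merged into causal networks and queried programmatically — the interoperability the eWeek piece describes.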

Re3 StoryHack


re3 – A hackathon for storytellers with a conscience. October 4-6, NYC: “Crucial stories deserve top talent. We know you are really busy. We know you are really talented. We know you (might) find the typical hackathon a bit chaotic.
React to the world around you. Rethink the way important issues are communicated. Resolve confusion about some of today’s greatest challenges. The Re3 StoryHack is an experiment, but damn, it’s an important one. It’s time to expand the boundaries of justice-focused persuasive storytelling.
We’re asking changemakers to propose specific stories relating to complex issues like economic fairness, climate change, educational opportunity, and many more.
We’re offering creative storytellers the chance to choose one of ten selected stories and work with top-notch teammates. Over the Re3 StoryHack weekend, we will invent new ways of thinking about and communicating these stories: written, visualized, performed, coded, and more.
An interactive exhibit? Excellent. Flash mob? Sure. App? Why not. Sculpture? Perfect. Print piece? Absolutely. There is no limit on medium for the teams’ final outputs as long as they tell captivating, powerful stories.
If you want a bit more clarity about what this is all about, check out our FAQ and these sample stories we’ve put together.
Changemakers and creative storytellers with a conscience, start your engines.”

Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization


Book review by José Luis Cordeiro: “Eric Drexler, popularly known as “the founding father of nanotechnology,” introduced the concept in his seminal 1981 paper in Proceedings of the National Academy of Sciences.
This paper established fundamental principles of molecular engineering and outlined development paths to advanced nanotechnologies.
He popularized the idea of nanotechnology in his 1986 book, Engines of Creation: The Coming Era of Nanotechnology, where he introduced a broad audience to a fundamental technology objective: using machines that work at the molecular scale to structure matter from the bottom up.
He went on to complete his PhD thesis at MIT under the guidance of AI pioneer Marvin Minsky, and published it in modified form in 1992 as Nanosystems: Molecular Machinery, Manufacturing, and Computation.

Drexler’s new book, Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, tells the story of nanotechnology from its small beginnings, then moves quickly towards a big future, explaining what it is and what it is not, and showing what we can do with it for the benefit of humanity.
In his pioneering 1986 book, Engines of Creation, he defined nanotechnology as a potential technology with these features: “manufacturing using machinery based on nanoscale devices, and products built with atomic precision.”
In his 2013 sequel, Radical Abundance, Drexler expands on his prior thinking, corrects many of the misconceptions about nanotechnology, and dismisses fears of dystopian futures replete with malevolent nanobots and gray goo…
His new book clearly identifies nanotechnology with atomically precise manufacturing (APM)… Drexler makes many comparisons between the information revolution and what he now calls the “APM revolution.” What the first did with bits, the second will do with atoms: “Image files today will be joined by product files tomorrow. Today one can produce an image of the Mona Lisa without being able to draw a good circle; tomorrow one will be able to produce a display screen without knowing how to manufacture a wire.”
Civilization, he says, is advancing from a world of scarcity toward a world of abundance — indeed, radical abundance.”

The New Reality of Social Production


Don Peppers on LinkedIn: “…Waze is yet another example of social production, or the increasingly common use of connected people working together to create value with little or no actual economic incentives involved. Instead, social production is based on a completely different set of principles – sharing and giving, rather than trading and selling. It is an important aspect of what some are now calling the “sharing economy,” and systems like Waze are ever more rapidly replacing or supplementing large portions of the commercial economy, as Martha Rogers and I document in our book Extreme Trust.
In the commercial economy, where profit-making entities operate, what you pay for determines what you get. I pay you, and you give me something of value. I may be a customer buying a product or service, or you may be the boss paying my salary, but either way neither of us is volunteering. We are trading our time or money for value in return. In the commercial economy, we all expect to pay for the things we want. When you pay the grocer $6 for a 12-pack of Diet Coke, you don’t begrudge him the money. And you wouldn’t even consider asking the grocer to give you the soda voluntarily, for free – the way a Waze participant voluntarily reports a new hazard for other participants.
An economic system based on money, as ours is, facilitates the efficient division of labor, enabling us to accomplish more and more complex tasks by dividing them into simple components. The end result is that you don’t have to wire your own smartphone together or harvest your own wheat for your morning bagel. The division-of-labor principle has allowed technology to become so complex that none of us today could ever make even the simplest manufactured products all by ourselves.
But because of the very efficient way in which people are now electronically connected, many social production tasks can also be parceled out and allocated bit by bit among assorted different players – just talk to any of the 3.4 million volunteer coders and developers who work on the more than 300,000 different open-source software projects now registered at SourceForge, for example. Moreover, these tasks are sometimes so complex, diffused, or difficult that accomplishing them with a commercial model just wouldn’t be practical. Imagine what it would have taken for Waze’s organizers to identify and monitor traffic hazards across the nation on their own, for instance. A small army of paid scouts or robotic monitors would have been required, continually updating the system, and the cost would have made the whole project completely unrealistic…”

A Modern Approach to Open Data


at the Sunlight Foundation blog: “Last year, a group of us who work daily with open government data — Josh Tauberer of GovTrack.us, Derek Willis at The New York Times, and myself — decided to stop each building the same basic tools over and over, and start building a foundation we could share.
We set up a small home at github.com/unitedstates, and kicked it off with a couple of projects to gather data on the people and work of Congress. Using a mix of automation and curation, they gather basic information from all over the government — THOMAS.gov, the House and Senate, the Congressional Bioguide, GPO’s FDSys, and others — that everyone needs to report, analyze, or build nearly anything to do with Congress.
Once we centralized this work and started maintaining it publicly, we began getting contributions nearly immediately. People educated us on identifiers, fixed typos, and gathered new data. Chris Wilson built an impressive interactive visualization of the Senate’s budget amendments by extending our collector to find and link the text of amendments.
This is an unusual, and occasionally chaotic, model for an open data project. github.com/unitedstates is a neutral space; GitHub’s permissions system allows many of us to share the keys, so no one person or institution controls it. What this means is that while we all benefit from each other’s work, no one is dependent or “downstream” from anyone else. It’s a shared commons in the public domain.
There are a few principles that have helped make the unitedstates project something that’s worth our time:…”
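As a sketch of how that shared data can be consumed, the snippet below pulls the current-legislators file from the congress-legislators repository. The URL and the YAML field names (name.official_full, terms) are assumptions based on that repo’s documented layout; verify them against the repository before relying on this.

```python
import requests
import yaml

# Assumed location of the current-legislators file in the
# @unitedstates/congress-legislators repository; check the repo's README.
URL = ("https://raw.githubusercontent.com/unitedstates/"
       "congress-legislators/main/legislators-current.yaml")

legislators = yaml.safe_load(requests.get(URL, timeout=30).text)

for person in legislators[:5]:
    latest_term = person["terms"][-1]       # most recent term served
    print(person["name"]["official_full"],  # full display name
          latest_term["type"],              # "sen" or "rep"
          latest_term["state"])
```

Because the data lives in a neutral GitHub organization, a script like this works the same for everyone downstream — which is exactly the shared-commons point the post is making.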

Is Connectivity A Human Right?


Mark Zuckerberg (Facebook): “For almost ten years, Facebook has been on a mission to make the world more open and connected. Today we connect more than 1.15 billion people each month, but as we started thinking about connecting the next 5 billion, we realized something important: the vast majority of people in the world don’t have access to the internet.
Today, only 2.7 billion people are online — a little more than one third of the world’s population. That number is growing by less than 9% each year, which is slow considering how early we are in the internet’s development. Even though projections show most people will get smartphones in the next decade, most of them still won’t have data access because the cost of data remains much higher than the price of a smartphone.
Below, I’ll share a rough proposal for how we can connect the next 5 billion people, and a rough plan to work together as an industry to get there. We’ll discuss how we can make internet access more affordable by making it more efficient to deliver data, how we can use less data by improving the efficiency of the apps we build and how we can help businesses drive internet access by developing a new model to get people online.
I call this a “rough plan” because, like many long term technology projects, we expect the details to evolve. It may be possible to achieve more than we lay out here, but it may also be more challenging than we predict. The specific technical work will evolve as people contribute better ideas, and we welcome all feedback on how to improve this.
Connecting the world is one of the greatest challenges of our generation. This is just one small step toward achieving that goal. I’m excited to work together to make this a reality.”
For the full version, click here.
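The scale of the gap is easy to check with back-of-the-envelope arithmetic. Here is a quick sketch using the post’s figures; the world-population estimate of roughly 7.1 billion for 2013 is an assumption:

```python
import math

online = 2.7e9   # people online today, per the post
growth = 0.09    # upper bound on annual growth, per the post
world = 7.1e9    # rough 2013 world population (assumption)

print(round(online / world, 2))  # ~0.38: "a little more than one third"

# Even at the full 9% compounding, years until everyone alive today
# could be online -- and the post notes actual growth is slower:
years = math.log(world / online) / math.log(1 + growth)
print(round(years, 1))  # ~11.2
```

At sub-9% growth the horizon stretches well beyond a decade, which is why the post argues connectivity needs deliberate industry effort rather than waiting on the trend line.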

Strengthening Local Capacity for Data-Driven Decisionmaking


A report by the National Neighborhood Indicators Partnership (NNIP): “A large share of public decisions that shape the fundamental character of American life are made at the local level; for example, decisions about controlling crime, maintaining housing quality, targeting social services, revitalizing low-income neighborhoods, allocating health care, and deploying early childhood programs. Enormous benefits would be gained if a much larger share of these decisions were based on sound data and analysis.
In the mid-1990s, a movement began to address the need for data for local decisionmaking. Civic leaders in several cities funded local groups to start assembling neighborhood and address-level data from multiple local agencies. For the first time, it became possible to track changing neighborhood conditions, using a variety of indicators, year by year between censuses. These new data intermediaries pledged to use their data in practical ways to support policymaking and community building and to give priority to the interests of distressed neighborhoods. Their theme was “democratizing data,” which in practice meant making the data accessible to residents and community groups (Sawicki and Craig 1996).

The initial groups that took on this work formed the National Neighborhood Indicators Partnership (NNIP) to further develop these capacities and spread them to other cities. By 2012, NNIP partners were established in 37 cities, and similar capacities were in development in a number of others. The Urban Institute (UI) serves as the secretariat for the network. This report documents a strategic planning process undertaken by NNIP in 2012 and early 2013. The network’s leadership and funders re-examined the NNIP model in the context of 15 years of local partner experiences and the dramatic changes in technology and policy approaches that have occurred over that period. The first three sections explain NNIP functions and institutional structures and examine the potential role for NNIP in advancing the community information field in today’s environment.”

Collaboration In Biology's Century


Todd Sherer, Chief Executive Officer of The Michael J. Fox Foundation for Parkinson’s Research, in Forbes: “The problem is, we all still work in a system that feeds on secrecy and competition. It’s hard enough work just to dream up win/win collaborative structures; getting them off the ground can feel like pushing a boulder up a hill. Yet there is no doubt that the realities of today’s research environment — everything from the accumulation of big data to the ever-shrinking availability of funds — demand new models for collaboration. Call it “collaboration 2.0.”… I share a few recent examples in the hope of increasing the reach of these initiatives, inspiring others like them, and encouraging frank commentary on how they’re working.
Open-Access Data
The successes of collaborations in the traditional sense, coupled with advanced techniques such as genomic sequencing, have yielded masses of data. Consortia of clinical sites around the world are working together to collect and characterize data and biospecimens through standardized methods, leading to ever-larger pools — more like Great Lakes — of data. Study investigators draw their own conclusions, but there is so much more to discover than any individual lab has the bandwidth for…
Crowdsourcing
A great way to grow engagement with resources you’re willing to share? Ask for it. Collaboration 2.0 casts a wide net. We dipped our toe in the crowdsourcing waters earlier this year with our Parkinson’s Data Challenge, which asked anyone interested to download a set of data that had been collected from PD patients and controls using smartphones. …
Cross-Disciplinary Collaboration 2.0
The more we uncover about the interconnectedness and complexity of the human system, the more proof we are gathering that findings and treatments for one disease may provide invaluable insights for others. We’ve seen some really intriguing crosstalk between the Parkinson’s and Alzheimer’s disease research communities recently…
The results should be: More ideas. More discovery. Better health.”

Five myths about big data


Samuel Arbesman, senior scholar at the Ewing Marion Kauffman Foundation and the author of “The Half-Life of Facts” in the Washington Post: “Big data holds the promise of harnessing huge amounts of information to help us better understand the world. But when talking about big data, there’s a tendency to fall into hyperbole. It is what compels contrarians to write such tweets as “Big Data, n.: the belief that any sufficiently large pile of s— contains a pony.” Let’s deflate the hype.
1. “Big data” has a clear definition.
The term “big data” has been in circulation since at least the 1990s, when it is believed to have originated in Silicon Valley. IBM offers a seemingly simple definition: Big data is characterized by the four V’s of volume, variety, velocity and veracity. But the term is thrown around so often, in so many contexts — science, marketing, politics, sports — that its meaning has become vague and ambiguous….
2. Big data is new.
By many accounts, big data exploded onto the scene quite recently. “If wonks were fashionistas, big data would be this season’s hot new color,” a Reuters report quipped last year. In a May 2011 report, the McKinsey Global Institute declared big data “the next frontier for innovation, competition, and productivity.”
It’s true that today we can mine massive amounts of data — textual, social, scientific and otherwise — using complex algorithms and computer power. But big data has been around for a long time. It’s just that exhaustive datasets were more exhausting to compile and study in the days when “computer” meant a person who performed calculations….
3. Big data is revolutionary.
In their new book, “Big Data: A Revolution That Will Transform How We Live, Work, and Think,” Viktor Mayer-Schonberger and Kenneth Cukier compare “the current data deluge” to the transformation brought about by the Gutenberg printing press.
If you want more precise advertising directed toward you, then yes, big data is revolutionary. Generally, though, it’s likely to have a modest and gradual impact on our lives….
4. Bigger data is better.
In science, some admittedly mind-blowing big-data analyses are being done. In business, companies are being told to “embrace big data before your competitors do.” But big data is not automatically better.
Really big datasets can be a mess. Unless researchers and analysts can reduce the number of variables and make the data more manageable, they get quantity without a whole lot of quality. Give me some quality medium data over bad big data any day…
5. Big data means the end of scientific theories.
Chris Anderson argued in a 2008 Wired essay that big data renders the scientific method obsolete: Throw enough data at an advanced machine-learning technique, and all the correlations and relationships will simply jump out. We’ll understand everything.
But you can’t just go fishing for correlations and hope they will explain the world. If you’re not careful, you’ll end up with spurious correlations. Even more important, to contend with the “why” of things, we still need ideas, hypotheses and theories. If you don’t have good questions, your results can be silly and meaningless.
Having more data won’t substitute for thinking hard, recognizing anomalies and exploring deep truths.”
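Arbesman’s point about fishing for correlations is easy to demonstrate. Here is a minimal simulation on purely synthetic data, illustrative only:

```python
import numpy as np

# With enough random variables, some "strong" correlation with a random
# target appears purely by chance. Synthetic data; illustrative only.
rng = np.random.default_rng(0)
n_samples, n_features = 100, 10_000

target = rng.normal(size=n_samples)
features = rng.normal(size=(n_samples, n_features))

# Correlation of each (meaningless) feature with the target.
correlations = np.array([np.corrcoef(col, target)[0, 1] for col in features.T])

print(abs(correlations).max())  # typically ~0.4: impressive-looking, yet noise
```

None of those variables explains anything; without a hypothesis to test, the biggest correlation in a big dataset is often just the luckiest noise.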