Finding Collaborators: Toward Interactive Discovery Tools for Research Network Systems


New paper by Charles D Borromeo, Titus K Schleyer, Michael J Becich, and Harry Hochheiser: “Background: Research networking systems hold great promise for helping biomedical scientists identify collaborators with the expertise needed to build interdisciplinary teams. Although efforts to date have focused primarily on collecting and aggregating information, less attention has been paid to the design of end-user tools for using these collections to identify collaborators. To be effective, collaborator search tools must provide researchers with easy access to information relevant to their collaboration needs.
Objective: The aim was to study user requirements and preferences for research networking system collaborator search tools and to design and evaluate a functional prototype.
Methods: Paper prototypes exploring possible interface designs were presented to 18 participants in semistructured interviews aimed at eliciting collaborator search needs. Interview data were coded and analyzed to identify recurrent themes and related software requirements. Analysis results and elements from paper prototypes were used to design a Web-based prototype using the D3 JavaScript library and VIVO data. Preliminary usability studies asked 20 participants to use the tool and to provide feedback through semistructured interviews and completion of the System Usability Scale (SUS).
Results: Initial interviews identified consensus regarding several novel requirements for collaborator search tools, including chronological display of publication and research funding information, the need for conjunctive keyword searches, and tools for tracking candidate collaborators. Participant responses were positive (SUS score: mean 76.4%, SD 13.9). Opportunities for improving the interface design were identified.
Conclusions: Interactive, timeline-based displays that support comparison of researcher productivity in funding and publication have the potential to effectively support searching for collaborators. Further refinement and longitudinal studies may be needed to better understand the implications of collaborator search tools for researcher workflows.”

How Wikipedia Data Is Revolutionizing Flu Forecasting


Hickmann and colleagues say their model has the potential to transform flu forecasting from a black art into a modern science as well-founded as weather forecasting.
Flu takes between 3,000 and 49,000 lives each year in the U.S., so an accurate forecast can have a significant impact on the way society prepares for an epidemic. The current method of monitoring flu outbreaks is somewhat antiquated. It relies on a voluntary system in which public health officials report the percentage of patients they see each week with influenza-like illness, defined as a temperature higher than 100 degrees Fahrenheit and a cough, with no explanation other than flu.
These numbers give a sense of the incidence of flu at any instant but the accuracy is clearly limited. They do not, for example, account for people with flu who do not seek treatment or people with flu-like symptoms who seek treatment but do not have flu.
There is another significant problem. The network that reports this data is relatively slow. It takes about two weeks for the numbers to filter through the system so the data is always weeks old.
That’s why the CDC is interested in finding new ways to monitor the spread of flu in real time. Google, in particular, has used the volume of searches for flu and flu-like symptoms to forecast flu in various parts of the world. That approach has had considerable success but also some puzzling failures. Another problem is that Google does not make its search data freely available, and this lack of transparency is a potential source of trouble for this kind of research.
So Hickmann and co have turned to Wikipedia. Their idea is that the variation in numbers of people accessing articles about flu is an indicator of the spread of the disease. And since Wikipedia makes this data freely available to any interested party, it is an entirely transparent source that is likely to be available for the foreseeable future….
Ref: arxiv.org/abs/1410.7716: Forecasting the 2013–2014 Influenza Season using Wikipedia
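The core idea, that traffic to flu-related Wikipedia articles tracks disease incidence, can be sketched as a simple correlation check between weekly page views and reported influenza-like-illness (ILI) rates. The figures below are purely illustrative, not data from the paper:

```python
# Sketch: correlating weekly Wikipedia page-view counts for a flu-related
# article with CDC influenza-like-illness (ILI) percentages.
# All numbers below are invented for illustration.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical weekly view counts for the "Influenza" article,
# and hypothetical ILI percentages for the same weeks.
views = [12000, 15000, 23000, 41000, 38000, 26000]
ili = [1.1, 1.4, 2.2, 3.9, 3.5, 2.4]

r = pearson(views, ili)
print(round(r, 3))  # a correlation close to 1 for these made-up series
```

A real model, like the one in the paper, would go beyond correlation to actual out-of-sample forecasting, but the freely downloadable page-view counts are what make the approach transparent.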

The New Thing in Google Flu Trends Is Traditional Data


In the New York Times: “Google is giving its Flu Trends service an overhaul — ‘a brand new engine,’ as it announced in a blog post on Friday.

The new thing is actually traditional data from the Centers for Disease Control and Prevention being integrated into the Google flu-tracking model. The goal is greater accuracy, after the Google service was criticized for consistently overestimating flu outbreaks in recent years.

The main critique came in an analysis done by four quantitative social scientists, published earlier this year in an article in Science magazine, “The Parable of Google Flu: Traps in Big Data Analysis.” The researchers found that the most accurate flu predictor was a data mash-up that combined Google Flu Trends, which monitored flu-related search terms, with the official C.D.C. reports from doctors on influenza-like illness.
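The mash-up the researchers describe can be sketched as a simple two-signal blend: a search-based estimate is combined with the most recent (lagged) CDC report, with weights fit on past weeks by least squares. This is an illustrative sketch, not the actual model from the Science article; all numbers are made up:

```python
# Sketch of a two-signal "mash-up": blend a search-based nowcast (which may
# run hot) with the most recent lagged CDC ILI report. Weights are fit by
# ordinary least squares on past weeks. All numbers are illustrative.

def fit_blend(search, cdc_lagged, truth):
    """Solve the 2x2 normal equations for truth ~ a*search + b*cdc_lagged."""
    sxx = sum(s * s for s in search)
    sxy = sum(s * c for s, c in zip(search, cdc_lagged))
    syy = sum(c * c for c in cdc_lagged)
    stx = sum(s * t for s, t in zip(search, truth))
    sty = sum(c * t for c, t in zip(cdc_lagged, truth))
    det = sxx * syy - sxy * sxy
    a = (stx * syy - sty * sxy) / det
    b = (sxx * sty - sxy * stx) / det
    return a, b

# Hypothetical series: the search signal overstates flu roughly twofold,
# while the lagged CDC numbers are accurate but one reporting cycle stale.
search = [2.1, 2.7, 4.6, 8.1, 6.7]
cdc_lagged = [0.9, 1.0, 1.4, 2.2, 3.9]
truth = [1.0, 1.4, 2.2, 3.9, 3.5]

a, b = fit_blend(search, cdc_lagged, truth)
blended = [a * s + b * c for s, c in zip(search, cdc_lagged)]
print([round(x, 2) for x in blended])
```

By construction, the fitted blend can do no worse on the training weeks than either signal alone, which is the intuition behind the researchers' finding that the combination beats Google Flu Trends by itself.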

The Google Flu Trends team is heeding that advice. In the blog post, Christian Stefansen, a Google senior software engineer, wrote, “We’re launching a new Flu Trends model in the United States that — like many of the best performing methods in the literature — takes official CDC flu data into account as the flu season progresses.”

Google’s flu-tracking service has had its ups and downs. Its triumph came in 2009, when it gave an advance signal of the severity of the H1N1 outbreak, two weeks or so ahead of official statistics. In a 2009 article in Nature explaining how Google Flu Trends worked, the company’s researchers did, as the Friday post notes, say that the Google service was not intended to replace official flu surveillance methods and that it was susceptible to “false alerts” — anything that might prompt a surge in flu-related search queries.

Yet those caveats came a couple of pages into the Nature article. And Google Flu Trends became a symbol of the superiority of the new, big data approach — computer algorithms mining data trails for collective intelligence in real time. To enthusiasts, it seemed so superior to the antiquated method of collecting health data that involved doctors talking to patients, inspecting them and filing reports.

But Google’s flu service greatly overestimated the number of cases in the United States in the 2012-13 flu season — a well-known miss — and, according to the research published this year, has persistently overstated flu cases over the years. In the Science article, the social scientists called it “big data hubris.”

NASA Launches New Citizen Science Website


 

Robert McNamara at Commons Lab:
NASASolve debuted last month as a one-stop shop for prizes and challenges seeking contributions from people like you. Don’t worry: you need not be a rocket scientist to apply. The general public is encouraged to help solve a variety of challenges NASA faces in reaching its mission goals. From hunting asteroids to redesigning the balance mass for the Mars lander, there are many ways for you to be a part of the nation’s space program.
NASA has been crowdsourcing innovative solutions from the public since 2005. But as NASA’s chief technologist points out, “NASASolve is a great way for members of the public and other citizen scientists to see all NASA prizes and challenges in one location.” The new site hopes to build on past successes like the Astronaut Glove Challenge, the ISS Longeron Challenge and the Zero Robotics Video Challenge. “Challenges are one tool to tap the top talent and best ideas. Partnering with the community to get ideas and solutions is important for NASA moving forward,” says Jennifer Gustetic, Program Executive of NASA Prizes and Challenges.
In order to encourage more active public participation, millions of dollars and scholarships have been set aside to reward those whose ideas and solutions succeed in taking on NASA’s challenges. If you want to get involved, visit NASASolve for more information and the current list of challenges waiting for solutions….

Tell Everyone: Why We Share & Why It Matters


Book review by Tim Currie: “Were the people sharing these stories outraged by Doug Ford’s use of an ethnic stereotype? Joyfully amused at the ongoing campaign gaffes? Or saddened by the state of public discourse at a democratic forum? All of these emotions likely played a part in driving social shares. But a growing body of research suggests some emotions are more influential than others.
Alfred Hermida’s new book, Tell Everyone: Why We Share & Why It Matters, takes us through that research—and a pile more, from Pew Center data on the makeup of our friends lists to a Yahoo! study on the nature of social influencers. One of Hermida’s accomplishments is to have woven that research into a breezy narrative crammed with examples from recent headlines.
Not up on the concept of cognitive dissonance? Homophily? Pluralistic ignorance? Or situational awareness? Not a deal breaker. Just in time for Halloween, Tell Everyone (Doubleday Canada) is a social science literature review masquerading as light bedside reading from the business management section. Hermida has tucked the academic sourcing into 21 pages of endnotes and offered a highly readable 217-page tour of social movements, revolutions, journalistic gaffes and corporate PR disasters.
The UBC journalism professor moves easily from chronicling the activities of Boston Marathon Redditors to Tahrir Square YouTubers to Japanese earthquake tweeters. He dips frequently into the past for context, highlighting the roles of French Revolution-era salon “bloggers,” 18th-century Portuguese earthquake pamphleteers and First World War German pilots.
Indeed, this book is only marginally about journalism, made clear by the absence of a reference to “news” in its title. It is at least as much about sociology and marketing.
Mathew Ingram argued recently that journalism’s biggest competitors don’t look like journalism. Hermida would no doubt agree. The Daily Show’s blurring of comedy and journalism is now a familiar ingredient in people’s information diet, he writes. And with nearly every news event, “the reporting by journalists sits alongside the accounts, experiences, opinions and hopes of millions of others.” Journalistic accounts didn’t define Mitt Romney’s 2012 U.S. presidential campaign, he notes; thousands of users did, with their “binders full of women” meme.
Hermida devotes a chapter to chronicling the ways in which consumers are asserting themselves in the marketplace—and the ways in which brands are reacting. The communications team at Domino’s Pizza failed to engage YouTube users over a gross gag video made by two of its employees in 2009. But Lionsgate films effectively incorporated user-generated content into its promotions for the 2012 Hunger Games movie. Some of the examples are well known but their value lies in the considerable context Hermida provides.
Other chapters highlight the role of social media in the wake of natural disasters and how users—and researchers—are working to identify hoaxes.
Tell Everyone is the latest in a small but growing number of mass-market books aiming to distill social media research from the ivory tower. The most notable is Wharton School professor Jonah Berger’s 2013 book Contagious: Why Things Catch On. Hermida discusses the influential 2009 research conducted by Berger and his colleague Katherine Milkman into stories on the New York Times most-emailed list. Those conclusions now greatly influence the work of social media editors.
But, in this instance at least, the lively pacing of the book sacrifices some valuable detail.
Hermida explores the studies’ main conclusion: positive content is more viral than negative content, but the key is the presence of activating emotions in the user, such as joy or anger. However, the chapter gives only a cursory mention to a finding Berger discusses at length in Contagious—the surprisingly frequent presence of science stories in the list of most-emailed articles. The emotion at play is awe—what Berger characterizes as not quite joy, but a complex sense of surprise, unexpectedness or mystery. It’s an important aspect of our still-evolving understanding of how we use social media….”

“Open” disclosure of innovations, incentives and follow-on reuse: Theory on processes of cumulative innovation and a field experiment in computational biology


Paper by Kevin J. Boudreau and Karim R. Lakhani: “Most of society’s innovation systems – academic science, the patent system, open source, etc. – are “open” in the sense that they are designed to facilitate knowledge disclosure among innovators. An essential difference across innovation systems is whether disclosure is of intermediate progress and solutions or of completed innovations. We theorize and present experimental evidence linking intermediate versus final disclosure to an ‘incentives-versus-reuse’ tradeoff and to a transformation of the innovation search process. We find intermediate disclosure has the advantage of efficiently steering development towards improving existing solution approaches, but also has the effect of limiting experimentation and narrowing technological search. We discuss the comparative advantages of intermediate versus final disclosure policies in fostering innovation.”
 

Quantifying the Livable City


Brian Libby at City Lab: “By the time Constantine Kontokosta got involved with New York City’s Hudson Yards development, it was already on track to be historically big and ambitious.
 
Over the course of the next decade, developers from New York’s Related Companies and Canada-based Oxford Properties Group are building the largest real-estate development in United States history: a 28-acre neighborhood on Manhattan’s far West Side over a Long Island Rail Road yard, with some 17 million square feet of new commercial, residential, and retail space.
Hudson Yards is also being planned as an innovative model of efficiency. Its waste management systems, for example, will utilize a vast vacuum-tube system to collect garbage from each building into a central terminal, meaning no loud garbage trucks traversing the streets by night. Onsite power generation will prevent blackouts like those during Hurricane Sandy, and buildings will be connected through a micro-grid that allows them to share power with each other.
Yet it was Kontokosta, the deputy director of academics at New York University’s Center for Urban Science and Progress (CUSP), who conceived of Hudson Yards as what is now being called the nation’s first “quantified community.” This entails an unprecedentedly wide array of data being collected—not just on energy and water consumption, but real-time greenhouse gas emissions and airborne pollutants, measured with tools like hyper-spectral imagery.

New York has led the way in recent years with its urban data collection. In 2009, Mayor Michael Bloomberg signed Local Law 84, which requires privately owned buildings over 50,000 square feet to provide annual benchmark reports on their energy and water use. Unlike a LEED rating or similar certification, which declares a building green when it opens, the city’s benchmarking is a continuous assessment of a building’s operations…”

Open Access Button


About the Open Access Button: “The key functions of the Open Access Button are finding free research, making more research available, and advocacy. Here’s how each works.

Finding free papers

Research published in journals that require you to pay to read can sometimes be accessed free elsewhere. These other copies are often very similar to the published version, but may lack nice formatting or be a version prior to peer review. They can be found in research repositories, on authors’ websites and in many other places because they have been archived. To find these versions, we identify the paper a user needs and search Google Scholar and CORE for copies, then link them to the user.
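The matching step, deciding whether a repository search result is really a copy of the paper a user asked for, can be sketched as a normalized-title comparison. This is an illustrative guess at the kind of logic such a tool needs, not the Button’s actual code; the record and URL below are invented:

```python
# Sketch: matching a requested paper against repository search results by
# normalized title, the kind of step a tool like the Open Access Button
# needs after querying an aggregator such as CORE. Illustrative only;
# the record and URL are made up.

import re

def normalize(title):
    """Lowercase, strip punctuation and dashes, collapse whitespace."""
    cleaned = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

def find_free_copy(wanted_title, records):
    """Return the URL of the first record whose title matches, else None."""
    target = normalize(wanted_title)
    for rec in records:  # records: dicts from a hypothetical repository search
        if normalize(rec["title"]) == target:
            return rec["url"]
    return None

results = [
    {"title": "Forecasting the 2013-2014 influenza season using Wikipedia",
     "url": "https://example.org/repo/1410.7716.pdf"},
]
print(find_free_copy("Forecasting the 2013–2014 Influenza Season Using Wikipedia",
                     results))
```

Normalization matters because repository metadata rarely matches the publisher’s capitalization or punctuation exactly; a real pipeline would likely add fuzzier matching on authors and years as well.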

Making more research, or information about papers available

If a free copy isn’t available, we aim to make one. This is not a simple task, so we use a few different innovative strategies. First, we email the author of the research and ask them to make a copy available – once they do, we’ll send it to everyone who needs it. Second, we create a page for each requested paper which, if shared, viewed and linked to, the author may see and can use to provide their paper. Third, we’re building ways to find associated information about a paper, such as the facts it contains, comments from people who’ve read it, related information and lay summaries.

Advocacy

Unfortunately, the Open Access Button can only do so much, and isn’t a perfect or long-term solution to this problem. The data and stories collected by the Button are used to help make the changes required to really solve this issue. We also support campaigns and grassroots advocates at openaccessbutton.org/action.”

The government wants to study ‘social pollution’ on Twitter


In the Washington Post: “If you take to Twitter to express your views on a hot-button issue, does the government have an interest in deciding whether you are spreading “misinformation’’? If you tweet your support for a candidate in the November elections, should taxpayer money be used to monitor your speech and evaluate your “partisanship’’?

My guess is that most Americans would answer those questions with a resounding no. But the federal government seems to disagree. The National Science Foundation, a federal agency whose mission is to “promote the progress of science; to advance the national health, prosperity and welfare; and to secure the national defense,” is funding a project to collect and analyze your Twitter data.
The project is being developed by researchers at Indiana University, and its purported aim is to detect what they deem “social pollution” and to study what they call “social epidemics,” including how memes — ideas that spread throughout pop culture — propagate. What types of social pollution are they targeting? “Political smears,” so-called “astroturfing” and other forms of “misinformation.”
Named “Truthy,” after a term coined by TV host Stephen Colbert, the project claims to use a “sophisticated combination of text and data mining, social network analysis, and complex network models” to distinguish between memes that arise in an “organic manner” and those that are manipulated into being.

But there’s much more to the story. Focusing in particular on political speech, Truthy keeps track of which Twitter accounts are using hashtags such as #teaparty and #dems. It estimates users’ “partisanship.” It invites feedback on whether specific Twitter users, such as the Drudge Report, are “truthy” or “spamming.” And it evaluates whether accounts are expressing “positive” or “negative” sentiments toward other users or memes…”

Tackling Wicked Government Problems


Book by Jackson Nickerson and Ronald Sanders: “How can government leaders build, sustain, and leverage the cross-organizational collaborative networks needed to tackle the complex interagency and intergovernmental challenges they increasingly face? Tackling Wicked Government Problems: A Practical Guide for Developing Enterprise Leaders draws on the experiences of high-level government leaders to describe and comprehensively articulate the complicated, ill-structured difficulties they face—often referred to as “wicked problems”—in leading across organizational boundaries and offers the best strategies for addressing them.
Tackling Wicked Government Problems explores how enterprise leaders use networks of trusted, collaborative relationships to respond and lead solutions to problems that span agencies. It also offers several approaches for translating social network theory into practical approaches for these leaders to build and leverage boundary-spanning collaborative networks and achieve real mission results.
Finally, past and present government executives offer strategies for systematically developing enterprise leaders. Taken together, these essays provide a way forward for a new cadre of officials better equipped to tackle government’s twenty-first-century wicked challenges.”