Competition-Based Innovation: The Case of the X Prize Foundation


Paper by Mokter Hossain and Ilkka Kauranen in the Journal of Organization Design (via SSRN): "The use of competition-based processes for the development of innovations is increasing. In parallel with the increasing use of competition-based innovation in business firms, this model of innovation is successfully being used by non-profit organizations for advancing the development of science and technology. One such non-profit organization is the X Prize Foundation, which designs and manages innovation competitions to encourage scientific and technological development. The objective of this article is to analyze the X Prize Foundation and three of the competitions it has organized in order to identify the challenges of competition-based innovation and how to overcome them….(More)".
 

Doing Social Network Research: Network-based Research Design for Social Scientists


New book by Garry Robins: "Are you struggling to design your social network research? Are you looking for a book that covers more than social network analysis? If so, this is the book for you! With straightforward guidance on research design and data collection, as well as social network analysis, this book takes you start to finish through the whole process of doing network research. Open the book and you'll find practical, 'how to' advice and worked examples relevant to PhD students and researchers from across the social and behavioural sciences. The book covers:

  • Fundamental network concepts and theories
  • Research questions and study design
  • Social systems and data structures
  • Network observation and measurement
  • Methods for data collection
  • Ethical issues for social network research
  • Network visualization
  • Methods for social network analysis
  • Drawing conclusions from social network results

This is a perfect guide for all students and researchers looking to do empirical social network research…(More)”

The Cobweb: Can the Internet be archived?


in The New Yorker: “….The average life of a Web page is about a hundred days. ….Web pages don’t have to be deliberately deleted to disappear. Sites hosted by corporations tend to die with their hosts. When MySpace, GeoCities, and Friendster were reconfigured or sold, millions of accounts vanished. …
The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: "Page Not Found." This is known as "link rot," and it's a drag, but it's better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as "content drift," and it's more pernicious than an error message, because it's impossible to tell that what you're seeing isn't what you went to look for: the overwriting, erasure, or moving of the original is invisible.
For the law and for the courts, link rot and content drift, which are collectively known as "reference rot," have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, "more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information."
The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors.
Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.
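The excerpt's distinction between the two failure modes can be made concrete in a small sketch. The `CitationCheck` record and `classify` helper below are hypothetical, not drawn from any of the cited studies; they simply encode the article's terminology: an error response is link rot, a live page that no longer shows the cited content is content drift, and together they make up reference rot.

```python
from dataclasses import dataclass

@dataclass
class CitationCheck:
    url: str
    http_status: int          # status code returned when the URL is re-fetched
    matches_original: bool    # does the page still contain the cited content?

def classify(check: CitationCheck) -> str:
    """Classify a cited URL using the article's terminology.

    'link rot'      -> the page is gone (an error such as 404)
    'content drift' -> the page loads but no longer shows the cited content
    'intact'        -> the page loads and still matches the citation
    """
    if check.http_status >= 400:
        return "link rot"
    if not check.matches_original:
        return "content drift"
    return "intact"

print(classify(CitationCheck("http://example.org/a", 404, False)))  # link rot
print(classify(CitationCheck("http://example.org/b", 200, False)))  # content drift
```

Content drift is the harder case in practice: an HTTP check alone cannot detect it, which is why it is "more pernicious than an error message."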
The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?… (More)”.

New Journal: Citizen Science: Theory and Practice


“Citizen Science: Theory and Practice is an open-access, peer-reviewed journal published by Ubiquity Press on behalf of the Citizen Science Association. It focuses on advancing the field of citizen science by providing a venue for citizen science researchers and practitioners – scientists, information technologists, conservation biologists, community health organizers, educators, evaluators, urban planners, and more – to share best practices in conceiving, developing, implementing, evaluating, and sustaining projects that facilitate public participation in scientific endeavors in any discipline.”

Why Some Teams Are Smarter Than Others


Article by Anita Woolley,  Thomas W. Malone and Christopher Chabris in The New York Times: “…Psychologists have known for a century that individuals vary in their cognitive ability. But are some groups, like some people, reliably smarter than others?

Working with several colleagues and students, we set out to answer that question. In our first two studies, which we published with Alex Pentland and Nada Hashmi of M.I.T. in 2010 in the journal Science, we grouped 697 volunteer participants into teams of two to five members….

Instead, the smartest teams were distinguished by three characteristics.

First, their members contributed more equally to the team’s discussions, rather than letting one or two people dominate the group.

Second, their members scored higher on a test called Reading the Mind in the Eyes, which measures how well people can read complex emotional states from images of faces with only the eyes visible.

Finally, teams with more women outperformed teams with more men. Indeed, it appeared that it was not “diversity” (having equal numbers of men and women) that mattered for a team’s intelligence, but simply having more women. This last effect, however, was partly explained by the fact that women, on average, were better at “mindreading” than men.

In a new study that we published with David Engel and Lisa X. Jing of M.I.T…(More)”

Coop’s Citizen Sci Scoop: Try it, you might like it


Response by Caren Cooper at PLOS: "Margaret Mead, the world-famous anthropologist, said, "never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has."
The sentiment rings true for citizen science.
Yet, recent news in the citizen science world has been headlined “Most participants in citizen science projects give up almost immediately.” This was based on a study of participation in seven different projects within the crowdsourcing hub called Zooniverse. Most participants tried a project once, very briefly, and never returned.
What’s unusual about Zooniverse projects is not the high turnover of quitters. Rather, it’s unusual that even early quitters do some important work. That’s a cleverly designed project. An ethical principle of Zooniverse is to not waste people’s time. The crowdsourcing tasks are pivotal to advancing research. They cannot be accomplished by computer algorithms or machines. They require crowds of people, each chipping in a tiny bit. What is remarkable is that the quitters matter at all….
An Internet rule of thumb is that only 1% (or less) of users add new content to sites like Wikipedia. Citizen science appears to operate on this dynamic, except instead of a core group adding existing knowledge for the crowd to use, a core group is involved in making new knowledge for the crowd to use….
In citizen science, a crowd can be four or a crowd can be hundreds of thousands. Citizen scientists are not people who will participate in just any project. They are individuals – gamers, birders, stargazers, gardeners, weather bugs, hikers, naturalists, and more – with particular interests and motivations.
As my grandfather said, “Try it, you might like it.” It’s fabulous that millions are trying it. Sooner or later, when participants and projects find one another, a good match translates into a job well done….(More)”.

Motivations for sustained participation in crowdsourcing: The role of talk in a citizen science case study


Paper by CB. Jackson, C. Østerlund, G. Mugar, KDV. Hassman for the Proceedings of the Forty-eighth Hawai’i International Conference on System Science (HICSS-48): “The paper explores the motivations of volunteers in a large crowdsourcing project and contributes to our understanding of the motivational factors that lead to deeper engagement beyond initial participation. Drawing on the theory of legitimate peripheral participation (LPP) and the literature on motivation in crowdsourcing, we analyze interview and trace data from a large citizen science project. The analyses identify ways in which the technical features of the projects may serve as motivational factors leading participants towards sustained participation. The results suggest volunteers first engage in activities to support knowledge acquisition and later share knowledge with other volunteers and finally increase participation in Talk through a punctuated process of role discovery…(More)”


New open access journal will publish across all disciplines


Claudia Lupp at Elsevier: “When it comes to publishing, there is no one-size-fits-all approach or format. In years gone by, getting published was largely limited to presenting research in a specialized field. But with the vast increase in research output – and more and more researchers collaborating across borders and disciplines – things are changing rapidly. While there is still a vital role for the traditional field-specific journal, researchers want more choices of where and how to publish their research. Journals that feature sound research across all disciplines significantly broaden those much-coveted publishing options.
To expand and refine that concept even further, Elsevier is preparing to collaborate with the research community to develop an open access journal covering all disciplines on a platform that will enable continual experimentation and innovation. Plans include improving the end-to-end publishing process and integrating our smart technologies to improve search and discovery.
The new journal will offer researchers a streamlined, simple and intuitive publishing platform that connects their research to the relevant communities. Articles will be assessed for sound research rather than their scope or impact….
We are building an online interface that provides authors with a step-by-step, quick and intuitive submission process. As part of a transparent publishing process, we will update authors on the progress of their submitted papers at each stage.
To streamline the editorial process, we plan to use assets and technology developed by Elsevier. For example, by using data from Scopus and the technology behind it, we can quickly match papers to relevant editors and reviewers, significantly shortening peer review times….
Once papers have been reviewed, edited and published, the goal is to bring this vast amount of information to readers and help them make sense of it for their own research. Every reputable journal aims to publish papers that are accurate and disseminate them to the right reader to support the advancement of science. But how do you do that effectively when there are more researchers and research papers than ever before?… (More)”

Businesses dig for treasure in open data


Lindsay Clark in ComputerWeekly: “Open data, a movement which promises access to vast swaths of information held by public bodies, has started getting its hands dirty, or rather its feet.
Before a spade goes in the ground, construction and civil engineering projects face a great unknown: what is down there? In the UK, should someone discover anything of archaeological importance, a project can be halted – sometimes for months – while researchers study the site and remove artefacts….
During an open innovation day hosted by the Science and Technologies Facilities Council (STFC), open data services and technology firm Democrata proposed analytics could predict the likelihood of unearthing an archaeological find in any given location. This would help developers understand the likely risks to construction and would assist archaeologists in targeting digs more accurately. The idea was inspired by a presentation from the Archaeological Data Service in the UK at the event in June 2014.
The proposal won support from the STFC which, together with IBM, provided a nine-strong development team and access to the Hartree Centre's supercomputer – a 131,000-core high-performance facility. For natural language processing of historic documents, the system uses two components of IBM's Watson – the AI system that famously won the US TV quiz show Jeopardy. The system uses SPSS modelling software, the language R for algorithm development and Hadoop data repositories….
The proof of concept draws together data from the University of York’s archaeological data, the Department of the Environment, English Heritage, Scottish Natural Heritage, Ordnance Survey, Forestry Commission, Office for National Statistics, the Land Registry and others….The system analyses sets of indicators of archaeology, including historic population dispersal trends, specific geology, flora and fauna considerations, as well as proximity to a water source, a trail or road, standing stones and other archaeological sites. Earlier studies created a list of 45 indicators which was whittled down to seven for the proof of concept. The team used logistic regression to assess the relationship between input variables and come up with its prediction….”
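The prediction step described above can be sketched as a logistic function over the seven indicators. The indicator names, weights, and bias below are invented purely for illustration; the actual Democrata/STFC model, which was built in R against SPSS and Hadoop, and its fitted coefficients are not public.

```python
import math

# Seven illustrative indicators, loosely echoing those named in the article.
# The weights and bias are invented for this sketch, not fitted values.
INDICATORS = ["historic_population", "geology", "flora_fauna",
              "water_proximity", "trail_or_road", "standing_stones",
              "nearby_sites"]
WEIGHTS = [0.8, 0.4, 0.2, 0.9, 0.5, 1.2, 1.1]
BIAS = -3.0

def find_probability(site: dict) -> float:
    """Logistic-regression score: probability of an archaeological find."""
    z = BIAS + sum(w * site[name] for name, w in zip(INDICATORS, WEIGHTS))
    return 1.0 / (1.0 + math.exp(-z))

# A location scoring high on most indicators vs. one scoring on none.
rich_site = dict(zip(INDICATORS, [1, 1, 0, 1, 1, 1, 1]))
bare_site = dict(zip(INDICATORS, [0, 0, 0, 0, 0, 0, 0]))
print(round(find_probability(rich_site), 2))  # → 0.87
print(round(find_probability(bare_site), 2))  # → 0.05
```

The appeal of logistic regression here is interpretability: each indicator's weight tells a developer or archaeologist how much that factor shifts the odds of a find, which matters when the output is used to justify halting or re-siting a construction project.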

The Emerging Science of Human-Data Interaction


Emerging Technology From the arXiv: "The rapidly evolving ecosystem associated with personal data is creating an entirely new field of scientific study, say computer scientists. And this requires a much more powerful ethics-based infrastructure….
Now Richard Mortier at the University of Nottingham in the UK and a few pals say the increasingly complex, invasive and opaque use of data should be a call to arms to change the way we study data, interact with it and control its use. Today, they publish a manifesto describing how a new science of human-data interaction is emerging from this “data ecosystem” and say that it combines disciplines such as computer science, statistics, sociology, psychology and behavioural economics.
They start by pointing out that the long-standing discipline of human-computer interaction research has always focused on computers as devices to be interacted with. But our interaction with the cyber world has become more sophisticated as computing power has become ubiquitous, a phenomenon driven by the Internet and by mobile devices such as smartphones. Consequently, humans are constantly producing and revealing data in all kinds of different ways.
Mortier and co say there is an important distinction between data that is consciously created and released such as a Facebook profile; observed data such as online shopping behaviour; and inferred data that is created by other organisations about us, such as preferences based on friends’ preferences.
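That three-way distinction can be sketched as a tiny data model. The enum follows the taxonomy described above; the record names and values are invented for illustration.

```python
from enum import Enum

class DataOrigin(Enum):
    CREATED = "consciously created and released (e.g. a social media profile)"
    OBSERVED = "recorded from behaviour (e.g. online shopping history)"
    INFERRED = "derived by other organisations (e.g. preferences guessed from friends)"

# Hypothetical personal-data records, each tagged with its origin.
records = [
    ("display_name", "Alice", DataOrigin.CREATED),
    ("last_purchase", "running shoes", DataOrigin.OBSERVED),
    ("likely_interest", "marathon gear", DataOrigin.INFERRED),
]

# Inferred data is the category the person never supplied or exhibited directly.
inferred = [name for name, _, origin in records if origin is DataOrigin.INFERRED]
print(inferred)  # → ['likely_interest']
```

The categories matter because the three themes that follow – legibility, agency, negotiability – apply differently to each: a person can edit created data directly, but may not even know inferred data exists.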
This leads the team to identify three key themes associated with human-data interaction that they believe the communities involved with data should focus on.
The first of these is concerned with making data, and the analytics associated with it, both transparent and comprehensible to ordinary people. Mortier and co describe this as the legibility of data and say that the goal is to ensure that people are clearly aware of the data they are providing, the methods used to draw inferences about it and the implications of this.
Making people aware of the data being collected is straightforward but understanding the implications of this data collection process and the processing that follows is much harder. In particular, this could be in conflict with the intellectual property rights of the companies that do the analytics.
An even more significant factor is that the implications of this processing are not always clear at the time the data is collected. A good example is the way the New York Times tracked down an individual after her seemingly anonymized searches were published by AOL. It is hard to imagine that this individual had any idea that the searches she was making would later allow her identification.
The second theme is concerned with giving people the ability to control and interact with the data relating to them. Mortier and co describe this as "agency". People must be allowed to opt in or opt out of data collection programs, to correct data that turns out to be wrong or outdated, and so on. That will require simple-to-use data access mechanisms that have yet to be developed.
The final theme builds on this to allow people to change their data preferences in future, an idea the team call “negotiability”. Something like this is already coming into force in the European Union where the Court of Justice has recently begun to enforce the “right to be forgotten”, which allows people to remove information from search results under certain circumstances….”
Ref: http://arxiv.org/abs/1412.6159 "Human-Data Interaction: The Human Face of the Data-Driven Society"