On the importance of being negative


Stephen Curry in The Guardian: “The latest paper from my group, published just over a week ago in the open access journal PeerJ, reports an unusual result. It was not the result we were looking for because it was negative: our experiment failed.

Nevertheless I am pleased with the paper – negative results matter. Their value lies in mapping out blind alleys, warning other investigators not to waste their time or at least to tread carefully. The only trouble is, it can be hard to get them published.

The scientific literature has long been skewed by a preponderance of positive results, largely because journals are keen to nurture their reputations for publishing significant, exciting research – new discoveries that change the way we think about the world. They have tended to look askance at manuscripts reporting beautiful hypotheses undone by the ugly fact of experimental failure. Scientific reporting inverts the traditional values of news media: good news sells. This tendency is reinforced within academic culture because our reward mechanisms are so strongly geared to publication in the most prestigious journals. In the worst cases it can foster fraudulent or sloppy practices by scientists and journals. A complete record of reporting positive and negative results is at the heart of the AllTrials campaign to challenge the distortion of clinical trials for commercial gain….

Normally that would have been that. Our data would have sat on the computer hard-drive till the machine decayed to obsolescence and was thrown out. But now it’s easier to publish negative results, so we did. The change has come about because of the rise of online publishing through open access, which aims to make research freely available on the internet.

The most significant change is the emergence of new titles from nimble-footed publishers aiming to leverage the reduced costs of publishing digitally rather than on paper. They have created open access journals that judge research only on its originality and competency; in contrast to more traditional outlets, no attempt is made to pre-judge significance. These journals include titles such as PLOS ONE (the originator of the concept), F1000 Research, ScienceOpen, and Scientific Reports, as well as new pre-print servers, such as PeerJ Preprints or bioRxiv, which are seeking to emulate the success of the arXiv that has long served physics and maths researchers.

As far as I know, these outlets were not designed specifically for negative results, but the shift in review criteria – and their lower costs – has opened up new opportunities, and negative results are now creeping out of the laboratory in greater numbers. PLOS ONE has recently started to highlight collections of papers reporting negative findings; Elsevier, one of the more established publishers, has evidently sensed an opportunity and just launched a new journal dedicated to negative results in the plant sciences….(More)”

Collective Intelligence or Group Think?


Paper analyzing “Engaging Participation Patterns in World without Oil” by Nassim JafariNaimi and Eric M. Meyers: “This article presents an analysis of participation patterns in an Alternate Reality Game, World Without Oil. This game aims to bring people together in an online environment to reflect on how an oil crisis might affect their lives and communities as a way to both counter such a crisis and to build collective intelligence about responding to it. We present a series of participation profiles based on a quantitative analysis of 1554 contributions to the game narrative made by 322 players. We further qualitatively analyze a sample of these contributions. We outline the dominant themes, the majority of which engage the global oil crisis for its effects on commute options and present micro-sustainability solutions in response. We further draw on the quantitative and qualitative analysis of this space to discuss how the design of the game, specifically its framing of the problem, feedback mechanism, and absence of subject-matter expertise, counters its aim of generating collective intelligence, making it conducive to groupthink….(More)”

Wittgenstein, #TheDress and Google’s search for a bigger truth


Robert Shrimsley at the Financial Times: “As the world burnt with a BuzzFeed-prompted debate over whether a dress was black and blue or white and gold, the BBC published a short article posing the question everyone was surely asking: “What would Wittgenstein say about that dress?”

Wittgenstein died in 1951, so we cannot know if the philosopher of language, truth and context would have been a devotee of BuzzFeed. (I guess it depends on whether we are talking of the early or the late Ludwig. The early Wittgenstein, it is well known, was something of an enthusiast for LOLs, whereas the later was more into WTFs and OMGs.)

The dress will now join the pantheon of web phenomena such as “Diet Coke and Mentos” and “Charlie bit my finger”. But this trivial debate on perceived truth captured in miniature a wider issue for the web: how to distil fact from noise when opinion drowns out information and value is determined by popularity.

At about the same time as the dress was turning the air blue — or was it white? — the New Scientist published a report on how one web giant might tackle this problem, a development in which Wittgenstein might have been very interested. The magazine reported on a Google research paper about how the company might reorder its search rankings to promote sites that could be trusted to tell the truth. (Google produces many such papers a year, so this is a long way short of official policy.) It posits a formula for finding and promoting sites with a record of reliability.

This raises an interesting question over how troubled we should be by the notion that a private company with its own commercial interests and a huge concentration of power could be the arbiter of truth. There is no current reason to see sinister motives in Google’s search for a better web: it is both honourable and good business. But one might ask how, for example, Google Truth might determine established truths on net neutrality….

The paper suggests using fidelity to proven facts as a proxy for trust. This is easiest with single facts, such as a date or place of birth. For example, it suggests that claiming Barack Obama was born in Kenya would push a site down the rankings. This would be good for politics, but facts are not always neutral. Google would risk being depicted as part of “the mainstream media”. Fox Search here we come….(More)”
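To make the idea concrete, here is a toy sketch (our reading of the reporting, not Google’s actual Knowledge-Based Trust model) of scoring a site by the share of its extracted factual claims that agree with a reference knowledge base; all facts, names, and data structures below are hypothetical illustrations:

```python
# Toy illustration of "fidelity to proven facts" as a trust signal.
# This is NOT Google's published model; the knowledge base entries
# are hypothetical examples.
KNOWLEDGE_BASE = {
    ("Barack Obama", "born_in"): "Hawaii",
    ("Eiffel Tower", "located_in"): "Paris",
}

def trust_score(site_claims: list[tuple[str, str, str]]) -> float:
    """Fraction of a site's (subject, predicate, object) claims that
    match the reference knowledge base; unverifiable claims are ignored."""
    checkable = [(s, p, o) for s, p, o in site_claims if (s, p) in KNOWLEDGE_BASE]
    if not checkable:
        return 0.5  # no evidence either way
    correct = sum(1 for s, p, o in checkable if KNOWLEDGE_BASE[(s, p)] == o)
    return correct / len(checkable)

# A site repeating the Kenya birther claim scores low and would sink
# in rankings under such a scheme.
print(trust_score([("Barack Obama", "born_in", "Kenya")]))     # 0.0
print(trust_score([("Barack Obama", "born_in", "Hawaii"),
                   ("Eiffel Tower", "located_in", "Paris")]))  # 1.0
```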

Models and Patterns of Trust


Paper presented by Bran Knowles et al at the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing: “As in all collaborative work, trust is a vital ingredient of successful computer supported cooperative work, yet there is little in the way of design principles to help practitioners develop systems that foster trust. To address this gap, we present a set of design patterns, based on our experience designing systems with the explicit intention of increasing trust between stakeholders. We contextualize these patterns by describing our own learning process, from the development, testing and refinement of a trust model, to our realization that the insights we gained along the way were most usefully expressed through design patterns. In addition to a set of patterns for trust, this paper seeks to demonstrate the value of patterns as a means of communicating the nuances revealed through ethnographic investigation….(More)”

‘Data.gov-in-a-box’: Delimiting transparency


New paper by Clare Birchall in the European Journal of Social Theory: “Given that the Obama administration still relies on many strategies we would think of as sitting on the side of secrecy, it seems that the only lasting transparency legacy of the Obama administration will be data-driven or e-transparency as exemplified by the web interface ‘data.gov’. As the data-driven transparency model is exported and assumes an ascendant position around the globe, it is imperative that we ask what kind of publics, subjects, and indeed, politics it will produce. Open government data is not just a matter concerning accountability but is seen as a necessary component of the new ‘data economy’. To participate and benefit from this info-capitalist-democracy, the data subject is called upon to be both auditor and entrepreneur. This article explores the implications of responsibilization, outsourcing, and commodification on the contract of representational democracy and asks if there are other forms of transparency that might better resist neoliberal formations and re-politicize the public sphere….(More)”

Breaking Public Administrations’ Data Silos. The Case of Open-DAI, and a Comparison between Open Data Platforms.


Paper by Raimondo Iemma, Federico Morando, and Michele Osella: “Open reuse of public data and tools can turn government into a powerful ‘platform’ also involving external innovators. However, the typical information system of a public agency is not open by design. Several public administrations have started adopting technical solutions to overcome this issue, typically in the form of middleware layers operating as ‘buses’ between data centres and the outside world. Open-DAI is an open source platform designed to expose data as services, directly pulling from legacy databases of the data holder. The platform is the result of an ongoing project funded under the EU ICT PSP call 2011. We present the rationale and features of Open-DAI, also through a comparison with three other open data platforms: the Socrata Open Data portal, CKAN, and ENGAGE….(More)”
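As a rough illustration of the “data as a service” pattern the paper describes — a thin middleware layer pulling from a legacy database and exposing it to the outside world — here is a minimal sketch in Python; the database, table, columns, and endpoint are hypothetical, not Open-DAI’s actual schema or API:

```python
# Minimal sketch of a middleware "bus": read rows from a legacy database
# and serve them as a JSON web service, without modifying the source
# system. All names here are hypothetical illustrations.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "legacy.db"  # stands in for a public agency's legacy database

class DataServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/api/permits":
            self.send_error(404)
            return
        conn = sqlite3.connect(DB_PATH)
        conn.row_factory = sqlite3.Row
        rows = conn.execute("SELECT id, holder, issued_on FROM permits").fetchall()
        conn.close()
        payload = json.dumps([dict(r) for r in rows]).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Expose the legacy table at http://localhost:8000/api/permits
    HTTPServer(("localhost", 8000), DataServiceHandler).serve_forever()
```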

The Power of Heuristics


ideas42: “People are presented with many choices throughout their day, from what to have for lunch to where to go on vacation to how much money to save for emergencies. In many situations, this ability to choose enhances our lives. However, having too many choices can sometimes feel like a burden, especially if the choices are complex or the decisions we’re making are important. In these instances, we often make poor decisions, or sometimes even fail to choose at all. This can create real problems, for example when people fail to save enough for retirement or don’t make the right choices when it comes to staying healthy.
So why is it that so much effort has been spent trying to improve decision-making by giving people even more information about the choices available – often complicating the choice even further?
In a new paper by ideas42, ideas42 co-founder Antoinette Schoar of MIT’s Sloan School of Management, and ideas42’s Saugato Datta argue that this approach of providing more information to help individuals make better decisions is flawed, “since it does not take into account the psychological or behavioral barriers that prevent people from making better decisions.” The solution, they propose, is using effective rules of thumb, or ‘heuristics’, to “enable people to make ‘reasonably good’ decisions without needing to understand all the complex nuances of the situation.” The paper explores the effectiveness of heuristics as a tool to simplify information during decision-making and help people follow through on their intentions. The authors offer powerful examples of effective heuristics-based methods in three domains: financial education, agriculture, and medicine….(More)”

Pantheon: A Dataset for the Study of Global Cultural Production


Paper by Amy Zhao Yu, Shahar Ronen, Kevin Hu, Tiffany Lu, and César A. Hidalgo: “We present the Pantheon 1.0 dataset: a manually curated dataset of individuals that have transcended linguistic, temporal, and geographic boundaries. The Pantheon 1.0 dataset includes the 11,341 biographies present in more than 25 languages in Wikipedia and is enriched with: (i) manually curated demographic information (place of birth, date of birth, and gender), (ii) a cultural domain classification categorizing each biography at three levels of aggregation (e.g. Arts/Fine Arts/Painting), and (iii) measures of global visibility (fame) including the number of languages in which a biography is present in Wikipedia, the monthly page-views received by a biography (2008-2013), and a global visibility metric we name the Historical Popularity Index (HPI). We validate our measures of global visibility (HPI and Wikipedia language editions) using external measures of accomplishment in several cultural domains: Tennis, Swimming, Car Racing, and Chess. In all of these cases we find that measures of accomplishments and fame (HPI) correlate with an R² > 50%, suggesting that measures of global fame are appropriate proxies for measures of accomplishment….(More)”
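For intuition only, here is a toy index in the spirit of the visibility measures the abstract names (language editions and page-views); this is not the published HPI formula, and the numbers are made up:

```python
# Illustrative sketch of a global-visibility metric in the spirit of the
# paper's HPI. This is NOT the published HPI formula, just a toy index
# combining the two signals the abstract names: Wikipedia language
# editions (breadth of attention) and page-views (depth of attention).
import math

def toy_visibility_index(num_language_editions: int,
                         avg_monthly_pageviews: float) -> float:
    """Combine breadth (languages) and depth (page-views) of attention."""
    # Log-scale page-views so a handful of viral months cannot dominate.
    return num_language_editions + math.log(1 + avg_monthly_pageviews)

# Example: a biography present in 40 Wikipedia editions with ~200k
# monthly page-views scores higher than one in 26 editions with ~5k views.
print(toy_visibility_index(40, 200_000))  # ≈ 52.2
print(toy_visibility_index(26, 5_000))    # ≈ 34.5
```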

Measuring government impact in a social media world


Arthur Mickoleit & Ryan Androsoff at OECD Insights: “There is hardly a government around the world that has not yet felt the impact of social media on how it communicates and engages with citizens. And while the most prominent early adopters in the public sector have tended to be politicians (think of US President Barack Obama’s impressive use of social media during his 2008 campaign), government offices are also increasingly jumping on the bandwagon. Yes, we are talking about those – mostly bricks-and-mortar – institutions that often toil away from the public gaze, managing the public administration in our countries. As the world changes, they too are increasingly engaging in a very public way through social media.
Research from our recent OECD working paper “Social Media Use by Governments” shows that as of November 2014, out of 34 OECD countries, 28 have a Twitter account for the office representing the top executive institution (head of state, head of government, or government as a whole), and 21 have a Facebook account….
 
But what is the impact governments can or should expect from social media? Is it all just vanity and peer pressure? Surely not.
Take the Spanish national police force (e.g. on Twitter, Facebook & YouTube), a great example of using social media to build long-term engagement, trust and a better public service. The thing so many governments yearn for, in this case the Spanish police seem to have managed well.
Or take the Danish “tax daddy” on Twitter – @Skattefar. It started out as the national tax administration’s quest to make it easier for everyone to submit correct tax filings; it is now one of the best examples around of a tax agency gone social.
Government administrations can use social media for internal purposes too. The Government of Canada used public platforms like Twitter and internal platforms like GCpedia and GCconnex to conduct a major employee engagement exercise (Blueprint 2020) to develop a vision for the future of the Canadian federal public service.
And when it comes to raising efficiency in the public sector, read this account of a Dutch research facility’s Director who decided to stop email. Not reduce it, but stop it altogether and replace it with social media.
There are so many other examples that could be cited. But the major question is how can we even begin to appraise the impact of these different initiatives? Because as we’ve known since the 19th century, “if you cannot measure it, you cannot improve it” (quote usually attributed to Lord Kelvin). Some aspects of impact measurement for social media can be borrowed from the private sector with regards to presence, popularity, penetration, and perception. But it’s around purpose that impact measurement agendas will split between the private sector and government. Virtually all companies will want to calculate the return on social media investments based on whether it helps them improve their financial returns. That’s different in the public sector where purpose is rarely defined in commercial terms.
A good impact assessment for social media in the public sector therefore needs to be built around its unique purpose-orientation. This is much more difficult to measure and it will involve a mix of quantitative data (e.g. reach of target audience) and qualitative data (e.g. case studies describing tangible impact). Social Media Use by Governments proposes a framework to start looking at social media measurement in gradual steps – from measuring presence, to popularity, to penetration, to perception, and finally, to purpose-orientation. The aim of this framework is to help governments develop truly relevant metrics and start treating social media activity by governments with the same public management rigour that is applied to other government activities.
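One way to picture the framework’s gradual steps is as a simple data structure; the sketch below and its example metrics are our own hypothetical illustration, not the OECD’s official indicator set:

```python
# A sketch organizing the working paper's five measurement steps
# (presence, popularity, penetration, perception, purpose) as a simple
# data structure. The example metrics are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class SocialMediaAssessment:
    presence: dict = field(default_factory=dict)     # e.g. accounts maintained
    popularity: dict = field(default_factory=dict)   # e.g. followers, likes
    penetration: dict = field(default_factory=dict)  # e.g. % of target audience reached
    perception: dict = field(default_factory=dict)   # e.g. sentiment of mentions
    purpose: dict = field(default_factory=dict)      # e.g. evidence of tangible impact

assessment = SocialMediaAssessment(
    presence={"twitter_account": True, "facebook_account": True},
    popularity={"followers": 120_000},
    penetration={"target_audience_reached_pct": 14.5},
    perception={"positive_mention_share_pct": 62.0},
    purpose={"case_study": "more correct tax filings after outreach campaign"},
)
print(assessment.penetration)
```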
This is far from an exact science, but we are beginning the work collaborating with member and partner governments to develop a toolkit that will help decision-makers implement the OECD Recommendation on Digital Government Strategies, including on the issue of social media metrics…(More)”.

Urban technology analysis matrix


New paper by Pablo Emilio Branchi, Carlos Fernández-Valdivielso, and Ignacio Raúl Matías: “Our objective is to develop a method for better analyzing the utility and impact of new technologies on Smart Cities. We have designed a tool that will evaluate new technologies according to a three-pronged scoring system that considers the impact on physical space, environmental issues, and city residents. The purpose of this tool is to be used by city planners as part of a strategic approach to the implementation of a Smart City initiative in order to reduce unnecessary public spending and ensure the optimal allocation of city resources….

The paper provides a list of the different elements to be analyzed in Smart Cities in the form of a matrix and develops the methodology to evaluate them in order to obtain a final score for technologies prior to their application in cities….Traditional technological scenarios have been challenged, and Smart Cities have become the center of urban competitiveness. A lack of clarity has been detected in the way of describing what Smart Cities are, and we try to establish a methodology for urban policy makers to do so. Because this is a dynamic process that affects several aspects of urban life, researchers are encouraged to test the proposed solution further. (More)”
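To illustrate the three-pronged scoring system, here is a minimal sketch that rates each candidate technology on its impact on physical space, the environment, and city residents, then aggregates to a single score; the weights, 0–10 scale, and example entries are our assumptions, not the matrix published in the paper:

```python
# Minimal sketch of a three-pronged technology scoring matrix.
# Weights, scale, and example ratings are illustrative assumptions.
WEIGHTS = {"physical_space": 0.3, "environment": 0.3, "residents": 0.4}

def technology_score(ratings: dict[str, float]) -> float:
    """Weighted aggregate of the three impact dimensions (0-10 each)."""
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

candidates = {
    "smart_streetlights": {"physical_space": 7, "environment": 9, "residents": 6},
    "sensor_parking":     {"physical_space": 8, "environment": 5, "residents": 7},
}

# Rank candidate technologies before committing public spending to them.
for name, ratings in sorted(candidates.items(),
                            key=lambda kv: technology_score(kv[1]),
                            reverse=True):
    print(f"{name}: {technology_score(ratings):.1f}")
```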