Of course we share! Testing Assumptions about Social Tagging Systems


New paper by Stephan Doerfel, Daniel Zoller, Philipp Singer, Thomas Niebler, Andreas Hotho, Markus Strohmaier: “Social tagging systems have established themselves as an important part of today’s web and have attracted the interest of our research community in a variety of investigations. The overall vision of our community is that simply through interactions with the system, i.e., through tagging and sharing of resources, users would contribute to building useful semantic structures as well as resource indexes using uncontrolled vocabulary, not only due to the easy-to-use mechanics. Hence, a variety of assumptions about social tagging systems have emerged, yet testing them has been difficult due to the absence of suitable data. In this work we thoroughly investigate three such assumptions – e.g., is a tagging system really social? – by examining live log data gathered from the real-world public social tagging system BibSonomy. Our empirical results indicate that while some of these assumptions hold to a certain extent, others need to be reconsidered and viewed in a very critical light. Our observations have implications for the design of future search and other algorithms to better reflect actual user behavior.”

Algorithms and the Changing Frontier


A GMU School of Public Policy Research Paper by Agwara, Hezekiah and Auerswald, Philip E. and Higginbotham, Brian D.: “We first summarize the dominant interpretations of the “frontier” in the United States and predecessor colonies over the past 400 years: agricultural (1610s-1880s), industrial (1890s-1930s), scientific (1940s-1980s), and algorithmic (1990s-present). We describe the difference between the algorithmic frontier and the scientific frontier. We then propose that the recent phenomenon referred to as “globalization” is actually better understood as the progression of the algorithmic frontier, as enabled by standards that in turn have facilitated the interoperability of firm-level production algorithms. We conclude by describing implications of the advance of the algorithmic frontier for scientific discovery and technological innovation.”

Mapping the Data Shadows of Hurricane Sandy: Uncovering the Sociospatial Dimensions of ‘Big Data’


New Paper by Shelton, T., Poorthuis, A., Graham, M., and Zook, M.: “Digital social data are now practically ubiquitous, with increasingly large and interconnected databases leading researchers, politicians, and the private sector to focus on how such ‘big data’ can allow potentially unprecedented insights into our world. This paper investigates Twitter activity in the wake of Hurricane Sandy in order to demonstrate the complex relationship between the material world and its digital representations. Through documenting the various spatial patterns of Sandy-related tweeting both within the New York metropolitan region and across the United States, we make a series of broader conceptual and methodological interventions into the nascent geographic literature on big data. Rather than focus on how these massive databases are causing necessary and irreversible shifts in the ways that knowledge is produced, we instead find it more productive to ask how small subsets of big data, especially georeferenced social media information scraped from the internet, can reveal the geographies of a range of social processes and practices. Utilizing both qualitative and quantitative methods, we can uncover broad spatial patterns within this data, as well as understand how this data reflects the lived experiences of the people creating it. We also seek to fill a conceptual lacuna in studies of user-generated geographic information, which have often avoided any explicit theorizing of sociospatial relations, by employing Jessop et al.’s TPSN framework. Through these interventions, we demonstrate that any analysis of user-generated geographic information must take into account the existence of more complex spatialities than the relatively simple spatial ontology implied by latitude and longitude coordinates.”

When Does Transparency Generate Legitimacy? Experimenting on a Context-Bound Relationship


New paper by Jenny De Fine Licht, Daniel Naurin, Peter Esaiasson, and Mikael Gilljam in Governance: “We analyze the main rationale for public administrations and political institutions for supplying transparency, namely, that it generates legitimacy for these institutions. First, we discuss different theories of decision making from which plausible causal mechanisms that may drive a link between transparency and legitimacy may be derived. We find that the common notion of a straightforward positive correlation is naïve and that transparency reforms are rather unpredictable phenomena. Second, we test the effect of transparency on procedure acceptance using vignette experiments of representative decision making in schools. We find that transparency can indeed generate legitimacy. Interestingly, however, the form need not be “fishbowl transparency,” with full openness of the decision-making process. Decision makers may improve their legitimacy simply by justifying carefully afterward the decisions taken behind closed doors. Only when behavior close to a deliberative democratic ideal was displayed did openness of the process generate more legitimacy than closed-door decision making with postdecisional justifications.”

Enhancing Social Innovation by Rethinking Collaboration, Leadership and Public Governance


New paper by Professors Eva Sørensen & Jacob Torfing: “It is widely recognized that public innovation is the intelligent alternative to blind across-the-board cuts in times of shrinking budgets, and that innovation may help to break policy deadlocks and adjust welfare services to new and changing demands. At the same time, there is growing evidence that multi-actor collaboration in networks, partnerships and interorganizational teams can spur public innovation (Sørensen and Torfing, 2011). The involvement of different public and private actors in public innovation processes improves the understanding of the problem or challenge at hand and brings forth new ideas and proposals. It also ensures that the needs of users, citizens and civil society organizations are taken into account when innovative solutions are selected, tested and implemented.
While a lot of public innovation continues to be driven by particular public employees and managers, there seems to be a significant surge in collaborative forms of innovation that cut across the institutional and organizational boundaries within the public sector and involve a plethora of private actors with relevant innovation assets. Indeed, the enhancement of collaborative innovation has become a key aspiration of many public organizations around the world. However, if we fail to develop a more precise and sophisticated understanding of the concepts of ‘innovation’ and ‘collaboration’, we risk that both terms are reduced to empty and tiresome buzzwords that will not last to the end of the season. Moreover, in reality, collaborative and innovative processes are difficult to trigger and sustain without proper innovation management and a supporting cultural and institutional environment. This insight calls for further reflections on the role of public leadership and management and for a transformation of the entire system of public governing.
Hence, in order to spur collaborative innovation in the public sector, we need to clarify the basic terms of the debate and explore how collaborative innovation can be enhanced by new forms of innovation management and new forms of public governing. To this end, we shall first define the notions of innovation and public innovation and discuss the relation between public innovation and social innovation in order to better understand the purposes of different forms of innovation.
We shall then seek to clarify the notion of collaboration and pinpoint why and how collaboration enhances public innovation. Next, we shall offer some theoretical and practical reflections about how public leaders and managers can advance collaborative innovation. Finally, we shall argue that the enhancement of collaborative forms of social innovation calls for a transformation of the system of public governing that shifts the balance from New Public Management towards New Public Governance.”

E-government and organisational transformation of government: Black box revisited?


New paper in Government Information Quarterly: “During the e-government era the role of technology in the transformation of public sector organisations has significantly increased, and the relationship between ICT and organisational change in the public sector has become the subject of increasingly intensive research over the last decade. However, an overview of the literature to date indicates that the impacts of e-government on the organisational transformation of administrative structures and processes are still relatively poorly understood and vaguely defined.

The main purpose of the paper is therefore the following: (1) to examine the interdependence of e-government development and organisational transformation in public sector organisations and propose a clearer explanation of ICT’s role as a driving force of organisational transformation in further e-government development; and (2) to specify the main characteristics of organisational transformation in the e-government era through the development of a new framework. This framework describes organisational transformation in two dimensions, i.e. the ‘depth’ and the ‘nature’ of changes, and specifies the key attributes related to the three typical organisational levels.”

Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk


New paper by Catherine E. Schmitt-Sands and Richard J. Smith: “While the internet has created new opportunities for research, managing the increased complexity of relationships and knowledge also creates challenges. Amazon.com has a Mechanical Turk service that allows people to crowdsource simple tasks for a nominal fee. The online workers may be anywhere in North America or India and range in ability. Social science researchers are only beginning to use this service. While researchers have used crowdsourcing to find research subjects or classify texts, we used Mechanical Turk to conduct a policy scan of local government websites. This article describes the process used to train and ensure quality of the policy scan. It also examines choices in the context of research ethics.”
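The paper itself contains no code, but the workflow it describes (farming out small website-review tasks for a nominal fee) can be sketched with Amazon’s requester API. The sketch below is illustrative only, assuming the boto3 MTurk client and a hypothetical externally hosted review form; none of the task details come from the paper.

```python
# Hypothetical sketch: posting one policy-scan task (HIT) to the Mechanical
# Turk sandbox with boto3. Task details are illustrative, not from the paper.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint: test the workflow without paying real workers.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion sends workers to a form hosted by the researcher,
# e.g. a page asking whether a given city website publishes a policy.
question_xml = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/policy-scan-form?site=example-city</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Check a local government website for a specific policy",
    Description="Visit the linked website and report whether the policy is posted.",
    Keywords="policy, government, website, survey",
    Reward="0.10",                      # the 'nominal fee' per task
    MaxAssignments=3,                   # several workers per site, for quality checks
    LifetimeInSeconds=7 * 24 * 3600,
    AssignmentDurationInSeconds=15 * 60,
    Question=question_xml,
)
print("Created HIT:", hit["HIT"]["HITId"])
```

Assigning the same site to several workers and comparing their answers is one simple way to operationalize the kind of quality assurance the authors discuss.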

From funding agencies to scientific agency


New paper on “Collective allocation of science funding as an alternative to peer review”: “Publicly funded research involves the distribution of a considerable amount of money. Funding agencies such as the US National Science Foundation (NSF), the US National Institutes of Health (NIH) and the European Research Council (ERC) give billions of dollars or euros of taxpayers’ money to individual researchers, research teams, universities, and research institutes each year. Taxpayers accordingly expect that governments and funding agencies will spend their money prudently and efficiently.

Investing money to the greatest effect is not a challenge unique to research funding agencies and there are many strategies and schemes to choose from. Nevertheless, most funders rely on a tried and tested method in line with the tradition of the scientific community: the peer review of individual proposals to identify the most promising projects for funding. This method has been considered the gold standard for assessing the scientific value of research projects essentially since the end of the Second World War.

However, there is mounting critique of the use of peer review to direct research funding. High on the list of complaints is the cost, both in terms of time and money. In 2012, for example, NSF convened more than 17,000 scientists to review 53,556 proposals [1]. Reviewers generally spend considerable time and effort assessing and rating proposals, of which only a minority can eventually be funded. Of course, such a high rejection rate is also frustrating for the applicants. Scientists spend an increasing amount of time writing and submitting grant proposals. Overall, the scientific community invests an extraordinary amount of time, energy, and effort into writing and reviewing research proposals, most of which end up not getting funded at all. This time would be better invested in conducting the research in the first place.

Peer review may also be subject to biases, inconsistencies, and oversights. The need for review panels to reach consensus may lead to sub‐optimal decisions owing to the inherently stochastic nature of the peer review process. Moreover, in a period where the money available to fund research is shrinking, reviewers may tend to “play it safe” and select proposals that have a high chance of producing results, rather than more challenging and ambitious projects. Additionally, the structuring of funding around calls‐for‐proposals to address specific topics might inhibit serendipitous discovery, as scientists work on problems for which funding happens to be available rather than trying to solve more challenging problems.

The scientific community holds peer review in high regard, but it may not actually be the best possible system for identifying and supporting promising science. Many proposals have been made to reform funding systems, ranging from incremental changes to peer review—including careful selection of reviewers [2] and post‐hoc normalization of reviews [3]—to more radical proposals such as opening up review to the entire online population [4] or removing human reviewers altogether by allocating funds through an objective performance measure [5].

We would like to add another alternative inspired by the mathematical models used to search the internet for relevant information: a highly decentralized funding model in which the wisdom of the entire scientific community is leveraged to determine a fair distribution of funding. It would still require human insight and decision‐making, but it would drastically reduce the overhead costs and may alleviate many of the issues and inefficiencies of the proposal submission and peer review system, such as bias, “playing it safe”, or reluctance to support curiosity‐driven research.

Our proposed system would require funding agencies to give all scientists within their remit an unconditional, equal amount of money each year. However, each scientist would then be required to pass on a fixed percentage of their previous year’s funding to other scientists whom they think would make best use of the money (Fig 1). Every year, then, scientists would receive a fixed basic grant from their funding agency combined with an elective amount of funding donated by their peers. As a result of each scientist having to distribute a given percentage of their previous year’s budget to other scientists, money would flow through the scientific community. Scientists who are generally anticipated to make the best use of funding will accumulate more.”
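To make the mechanics of the proposal concrete, the toy simulation below implements the flow just described: an unconditional basic grant each year plus a mandatory donation of a fixed fraction of last year’s funding to peers. It is a sketch, not the authors’ model; the parameter values are arbitrary, and the random choice of recipients merely stands in for each scientist’s informed judgement.

```python
# Toy simulation of the proposed collective allocation scheme (illustrative
# only): yearly basic grant + mandatory redistribution of a fixed share of
# the previous year's funding to peers.
import random

def simulate_funding(n_scientists=100, base_grant=100_000.0,
                     donation_share=0.5, years=20, seed=42):
    rng = random.Random(seed)
    funding = [base_grant] * n_scientists          # year-0 budgets
    for _ in range(years):
        donations = [0.0] * n_scientists
        for donor, last_year_budget in enumerate(funding):
            amount = donation_share * last_year_budget
            # In the proposal each scientist chooses the peers they judge
            # most deserving; picking a random peer is only a placeholder.
            recipient = rng.randrange(n_scientists - 1)
            if recipient >= donor:                  # exclude self-donations
                recipient += 1
            donations[recipient] += amount
        # Next year's budget: fixed basic grant plus donations received.
        funding = [base_grant + d for d in donations]
    return funding

budgets = simulate_funding()
print(f"min {min(budgets):,.0f}  mean {sum(budgets)/len(budgets):,.0f}  "
      f"max {max(budgets):,.0f}")
```

With random recipients the distribution stays roughly flat; the paper’s point is that real scientists’ considered choices would concentrate funding on the colleagues expected to make the best use of it.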

Design in public and social innovation


New paper by Geoff Mulgan (Nesta): “What’s going right and what’s going wrong? Is design a key to more efficient and effective public services, or a costly luxury, good for conferences and consultants but not for the public?
This paper looks at the elements of the design method; the strengths of current models; some of their weaknesses and the common criticisms made of them; and what might be the way forward.

Contents:

  • Strengths of the design method in social innovation and public service
  • Social design tools table
  • Weaknesses of design projects and methods
  • The challenge
  • Design in the context of innovation”

Social media in crisis events: Open networks and collaboration supporting disaster response and recovery


Paper for the IEEE International Conference on Technologies for Homeland Security (HST): “Large-scale crises challenge the ability of public safety and security organisations to respond efficiently and effectively. Meanwhile, citizens’ adoption of mobile technology and rich social media services is dramatically changing the way crisis responses develop. Empowered by new communication media (smartphones, text messaging, internet-based applications and social media), citizens are the first in situ sensors. However, this entire social media arena is uncharted territory to most public safety and security organisations. In this paper, we analyse crisis events to draw narratives on social media relevance and describe how public safety and security organisations are increasingly aware of social media’s added value proposition in times of crisis. A set of critical success indicators to address the process of adopting social media is identified, so that social media information is rapidly transformed into actionable intelligence, thus enhancing the effectiveness of public safety and security organisations — saving time, money and lives.”