Open Government: Origin, Development, and Conceptual Perspectives
Paper by Bernd W. Wirtz & Steven Birkmeyer in the International Journal of Public Administration: "The term 'open government' is frequently used in practice and science. Since President Obama's Memorandum for the Heads of Executive Departments and Agencies in March 2009, open government has attracted an enormous amount of public attention. It is applied by authors from diverse areas, leading to a very heterogeneous understanding of the concept. Against this background, this article screens the current open government literature to deduce an integrative definition of open government. Furthermore, this article analyzes the empirical and conceptual literature of open government to deduce an open government framework. In general, this article provides a clear understanding of the open government concept. (More)"
The new scientific revolution: Reproducibility at last
Washington Post: "…Reproducibility is a core scientific principle. A result that can't be reproduced is not necessarily erroneous: Perhaps there were simply variables in the experiment that no one detected or accounted for. Still, science sets high standards for itself, and if experimental results can't be reproduced, it's hard to know what to make of them.
“The whole point of science, the way we know something, is not that I trust Isaac Newton because I think he was a great guy. The whole point is that I can do it myself,” said Brian Nosek, the founder of a start-up in Charlottesville, Va., called the Center for Open Science. “Show me the data, show me the process, show me the method, and then if I want to, I can reproduce it.”
The reproducibility issue is closely associated with a Greek researcher, John Ioannidis, who published a paper in 2005 with the startling title “Why Most Published Research Findings Are False.”
Ioannidis, now at Stanford, has started a program to help researchers improve the reliability of their experiments. He said the surge of interest in reproducibility was in part a reflection of the explosive growth of science around the world. The Internet is a factor, too: It’s easier for researchers to see what everyone else is doing….
Errors can potentially emerge from a practice called “data dredging”: When an initial hypothesis doesn’t pan out, the researcher will scan the data for something that looks like a story. The researcher will see a bump in the data and think it’s significant, but the next researcher to come along won’t see it — because the bump was a statistical fluke….
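To make that mechanism concrete, here is a toy simulation (an illustration of the multiple-comparisons problem, not anything from the article): if a researcher tests twenty subgroups of pure noise at the conventional p < 0.05 threshold, roughly one will look "significant" by chance alone.

```python
# Toy illustration of "data dredging": test 20 subgroups of pure noise
# at p < 0.05 and watch a "significant" bump appear by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_subgroups, n_per_group = 20, 50

false_positives = 0
for i in range(n_subgroups):
    treated = rng.normal(0, 1, n_per_group)   # pure noise: no real effect anywhere
    control = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        false_positives += 1
        print(f"subgroup {i}: p = {p:.3f}  <- looks like a publishable 'story'")

print(f"{false_positives} of {n_subgroups} noise-only comparisons were 'significant'")
```

Run it with different seeds and the "significant" subgroup moves around, which is exactly the bump the next researcher fails to find.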
So far about 7,000 people are using the center's service, and the center has received commitments for $14 million in grants, with partners that include the National Science Foundation and the National Institutes of Health, Nosek said.
Another COS initiative will help researchers register their experiments in advance, telling the world exactly what they plan to do and what questions they will ask. This would head off the data-dredging maneuver in which disappointed researchers go on a deep dive for something publishable.
Nosek and other reformers talk about “publication bias.” Positive results get reported, negative results ignored. Someone reading a journal article may never know about all the similar experiments that came to naught….(More).”
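The distortion Nosek describes can also be simulated with invented numbers (a hypothetical sketch, not from the article): suppose many labs study the same small effect, but journals accept only statistically significant results; the published record then overstates the truth.

```python
# Sketch of publication bias: many labs study the same small true effect,
# but only statistically significant results get published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
true_effect, n, n_labs = 0.1, 30, 1000

published = []
for _ in range(n_labs):
    treated = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:                      # journals accept only "positive" results
        published.append(treated.mean() - control.mean())

print(f"true effect:             {true_effect:.2f}")
print(f"mean published estimate: {np.mean(published):.2f}")
print(f"published {len(published)} of {n_labs} studies")
```

The published average comes out several times larger than the true effect, while the similar experiments that "came to naught" never appear.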
Competition-Based Innovation: The Case of the X Prize Foundation
Paper by Mokter Hossain and Ilkka Kauranen in the Journal of Organization Design (via SSRN): "The use of competition-based processes for the development of innovations is increasing. In parallel with the increasing use of competition-based innovation in business firms, this model of innovation is successfully being used by non-profit organizations for advancing the development of science and technology. One such non-profit organization is the X Prize Foundation, which designs and manages innovation competitions to encourage scientific and technological development. The objective of this article is to analyze the X Prize Foundation and three of the competitions it has organized in order to identify the challenges of competition-based innovation and how to overcome them….(More)".
Nudging and Choice Architecture: Ethical Considerations
New paper by Cass Sunstein at Yale Journal on Regulation (via SSRN): “Is nudging unethical? Is choice architecture a problem for a free society? This essay defends seven propositions: (1) It is pointless to object to choice architecture or nudging as such. Choice architecture cannot be avoided. Nature itself nudges; so does the weather; so do customs and traditions; so do spontaneous orders and invisible hands. The private sector inevitably nudges, as does the government. It is reasonable to worry about nudges by government and to object to particular nudges, but not to nudging in general. (2) In this context, ethical abstractions (for example, about autonomy, dignity, manipulation, and democratic self-government) can create serious confusion. To make progress, those abstractions must be brought into contact with concrete practices. Nudging and choice architecture take highly diverse forms, and the force of an ethical objection depends on the specific form. (3) If welfare is our guide, much nudging is actually required on ethical grounds, even if it comes from government. (4) If autonomy is our guide, much nudging is also required on ethical grounds, in part because some nudges actually promote autonomy, in part because some nudges enable people to devote their limited time and attention to their most important concerns. (5) Choice architecture should not, and need not, compromise either dignity or self-government, but it is important to see that imaginable forms could do both. It follows that when they come from government, choice architecture and nudges should not be immune from a burden of justification, which they might not be able to overcome. (6) Some nudges are objectionable because the choice architect has illicit ends. When the ends are legitimate, and when nudges are fully transparent and subject to public scrutiny, a convincing ethical objection is less likely to be available. (7) There is ample room for ethical objections in the case of well-motivated but manipulative interventions, certainly if people have not consented to them; such nudges can undermine autonomy and dignity. It follows that both the concept and the practice of manipulation deserve careful attention. The concept of manipulation has a core and a periphery; some interventions fit within the core, others within the periphery, and others outside of both….(More)”
Social Sensing and Crowdsourcing: the future of connected sensors
Conference Paper by C. Geijer, M. Larsson, M. Stigelid: "Social sensing is becoming an alternative to static sensors. It is a way to crowdsource data collection where sensors can be placed on frequently used objects, such as mobile phones or cars, to gather important information. The increasing availability of technology, such as cheap sensors in cell phones, creates an opportunity to build bigger sensor networks that are capable of collecting larger quantities of more complex data. The purpose of this paper is to highlight problems in the field, as well as their solutions. The focus lies on the use of physical sensors and not on the use of social media to collect data. Research papers were reviewed based on implemented or suggested implementations of social sensing. The discovered problems are contrasted with possible solutions and used to reflect upon the future of the field. We found issues such as privacy, noise, and trustworthiness to be problems when using a distributed network of sensors. Furthermore, we discovered models for determining the accuracy as well as the truthfulness of gathered data that can effectively combat these problems. The topic of privacy remains an open-ended problem, since it is based upon ethical considerations that may differ from person to person, but methods exist for addressing this as well. The reviewed research suggests that social sensing will become more and more useful in the future….(More)."
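The accuracy and truthfulness models the authors refer to are typically truth-discovery algorithms, which jointly estimate how reliable each sensor is and which reports are true. A minimal sketch of the idea on made-up reports (the sensor names, claims, and update rule below are illustrative assumptions, not the paper's model):

```python
# Minimal truth-discovery sketch (illustrative, not the paper's model):
# iteratively estimate sensor reliability and the truth of binary claims.
# reports[sensor][claim] = True/False observation from that sensor.
reports = {
    "phone_a": {"road_blocked": True,  "bridge_open": True},
    "phone_b": {"road_blocked": True,  "bridge_open": False},
    "phone_c": {"road_blocked": False, "bridge_open": True},
}

claims = {c for obs in reports.values() for c in obs}
reliability = {s: 0.5 for s in reports}          # start with uniform trust

for _ in range(10):                              # iterate toward a fixed point
    # 1. Score each claim by the trust-weighted vote of its reporters.
    belief = {}
    for c in claims:
        votes = [(reliability[s], obs[c]) for s, obs in reports.items() if c in obs]
        weight = sum(r for r, _ in votes)
        belief[c] = sum(r for r, v in votes if v) / weight if weight else 0.5
    # 2. Re-score each sensor by how well it agrees with current beliefs.
    for s, obs in reports.items():
        agreement = [belief[c] if v else 1 - belief[c] for c, v in obs.items()]
        reliability[s] = sum(agreement) / len(agreement)

print({c: round(b, 2) for c, b in belief.items()})
print({s: round(r, 2) for s, r in reliability.items()})
```

Sensors that agree with the emerging consensus gain weight, which in turn sharpens the consensus; this is how noisy or untrustworthy sources can be down-weighted without ground truth.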
New Evidence that Citizen Engagement Increases Tax Revenues
Tiago Peixoto at DemocracySpot: "…A new working paper by Diether Beuermann and Maria Amelina presents the results of a randomized experiment in Russia, described in the abstract below:
This paper provides the first experimental evaluation of the participatory budgeting model showing that it increased public participation in the process of public decision making, increased local tax revenues collection, channeled larger fractions of public budgets to services stated as top priorities by citizens, and increased satisfaction levels with public services. These effects, however, were found only when the model was implemented in already-mature administratively and politically decentralized local governments. The findings highlight the importance of initial conditions with respect to the decentralization context for the success of participatory governance.
In my opinion, this paper is important for a number of reasons, some of which are worth highlighting here. First, it adds substantive support to the evidence on the positive relationship between citizen engagement and tax revenues. Second, in contrast to studies suggesting that participatory innovations are most likely to work when they are "organic", or "bottom-up", this paper shows how external actors can induce the implementation of successful participatory experiences. Third, I could not help but notice that two commonplace explanations for the success of citizen engagement initiatives, "strong civil society" and "political will", do not feature in the study as prominent success factors. Last, but not least, the paper draws attention to how institutional settings matter (i.e., decentralization). Here, the jack-of-all-trades (yet not very useful) "context matters" could easily be replaced by "institutions matter"….(More). You can read the full paper here [PDF]."
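For readers unfamiliar with how such experimental effects are quantified, here is a generic difference-in-means sketch on synthetic data (the sample size, units, and effect size are invented for illustration; this is not the paper's data or code):

```python
# Generic difference-in-means estimate for a randomized experiment,
# on synthetic data (illustrative only; not the paper's data or code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
n = 200                               # local governments, say
treated = rng.binomial(1, 0.5, n)     # random assignment to participatory budgeting
# synthetic outcome: log tax revenue with a positive treatment effect
revenue = 10 + 0.3 * treated + rng.normal(0, 1, n)

effect = revenue[treated == 1].mean() - revenue[treated == 0].mean()
_, p = stats.ttest_ind(revenue[treated == 1], revenue[treated == 0])
print(f"estimated treatment effect: {effect:.2f} (p = {p:.3f})")
```

Random assignment is what lets the simple difference in group means be read as a causal effect; the paper's point about initial decentralization conditions amounts to saying this effect differs across subgroups of local governments.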
Do Experts or Collective Intelligence Write with More Bias? Evidence from Encyclopædia Britannica and Wikipedia
- The costs of producing, storing, and distributing knowledge shape different biases and slants in the collective-intelligence model (Wikipedia) and the expert-based model (Britannica).
- Many of the differences between Wikipedia and Britannica arise because Wikipedia faces negligible storage, production, and distribution costs. This leads to longer articles with greater coverage of more points of view. A higher number of revisions to a Wikipedia article results in a more neutral point of view (a toy illustration of this averaging effect appears after this list). In the best cases, revision reduces slant and bias to a negligible difference from the expert-based model.
- As the world moves from reliance on expert-based production of knowledge to collectively produced intelligence, it is unwise to blindly trust the properties of knowledge produced by the crowd. Its slants and biases are not widely appreciated, nor are the properties of the production model as yet fully understood."…(More)
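The toy illustration promised above: assume, purely for illustration, that each revision comes from a contributor with an independent random slant and that an article's slant is the running average of its contributors' slants. Under that (strong) assumption, the expected absolute slant shrinks roughly with the square root of the number of revisions:

```python
# Toy model (illustrative assumptions only): each revision comes from a
# contributor with an independent random slant in [-1, 1], and an article's
# slant is the running average of its contributors' slants.
import numpy as np

rng = np.random.default_rng(seed=4)
for n_revisions in (1, 10, 100, 1000):
    # slant of 500 simulated articles after n_revisions edits each
    slants = rng.uniform(-1, 1, (500, n_revisions)).mean(axis=1)
    print(f"{n_revisions:5d} revisions: mean |slant| = {np.abs(slants).mean():.3f}")
```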
Can 311 Call Centers Improve Service Delivery? Lessons from New York and Chicago
Paper by Jane Wiseman: "This paper is the first of the IDB's "Innovations in Public Service Delivery" series, which identifies and analyzes innovative experiences of promising practices in Latin America and the Caribbean and around the world to improve the quality and delivery of public services. It presents the 311 programs in New York City and Chicago, two leading 311 centers in the United States. "311" is the universal toll-free number that provides citizens with a single point of entry to a wide array of information and services in major cities. In the cities studied, these centers have evolved to support new models of service delivery management. This publication provides an overview of these programs, analyzing their design and implementation, results, and impacts, and identifying their success factors. The final section consolidates the lessons learned from these experiences, highlighting what policymakers and public officials should consider when developing similar solutions…Download in PDF".
Is Transparency a Recipe for Innovation?
Paper by Dr. Bastiaan Heemsbergen: "Innovation is a key driver in organizational sustainability, and yes, openness and transparency are a recipe for innovation. But, according to Tapscott and Williams, "when it comes to innovation, competitive advantage and organizational success, 'openness' is rarely the first word one would use to describe companies and other societal organizations like government agencies or medical institutions. For many, words like 'insular,' 'bureaucratic,' 'hierarchical,' 'secretive' and 'closed' come to mind instead." [1] And yet a few months ago, the Tesla Model S became the world's first open-source car. Elon Musk, CEO of Tesla Motors, shared all the patents on Tesla's electric car technology, allowing anyone — including competitors — to use them without fear of litigation. Musk wrote in his post, "Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology." [2]
In the public sector, terms such as open government, citizen sourcing, and wiki government are also akin to the notion of open innovation and transparency. As Hilgers and Ihl report, "a good example of this approach is the success of the Future Melbourne program, a wiki- and blog-based approach to shaping the future urban landscape of Australia's second largest city. The program allowed citizens to directly edit and comment on the plans for the future development of the city. It attracted more than 30,000 individuals, who submitted hundreds of comments and suggestions (futuremelbourne.com.au). Basically, problems concerning design and creativity, future strategy and local culture, and even questions of management and service innovation can be broadcast on such web-platforms." [3] The authors suggest that there are three dimensions to applying the concept of open innovation to the public sector: citizen ideation and innovation (tapping knowledge and creativity), collaborative administration (user-generated new tasks and processes), and collaborative democracy (improving public participation in the policy process)….(More)".
Computer-based personality judgments are more accurate than those made by humans
Paper by Wu Youyou, Michal Kosinski and David Stillwell at PNAS (Proceedings of the National Academy of Sciences): “Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy…(More)”.
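A minimal sketch of the kind of model the study describes, on synthetic data (the feature counts, model choice, and numbers below are illustrative assumptions; the actual study used a far larger sparse user-Like matrix with cross-validated linear models): fit a linear model from binary Like indicators to a trait score, then report accuracy as the correlation r between predicted and self-reported scores.

```python
# Sketch of Likes-based personality prediction on synthetic data
# (illustrative only; not the study's data or pipeline).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=5)
n_users, n_likes = 2000, 300
X = rng.binomial(1, 0.1, (n_users, n_likes)).astype(float)   # who liked what
w = rng.normal(0, 1, n_likes)                                # latent trait loadings
trait = X @ w + rng.normal(0, 3, n_users)                    # noisy self-report score

X_tr, X_te, y_tr, y_te = train_test_split(X, trait, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)

r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"judge-criterion correlation r = {r:.2f}")
```

The printed r mirrors the paper's accuracy metric: the correlation between a judge's ratings (here, the model's predictions) and the criterion self-report, the same scale on which the computer's r = 0.56 beat friends' r = 0.49.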