Paper by Luca Congiu and Ivan Moscati in Behavioural Public Policy: “We argue that the diverse components of a choice architecture can be classified into two main dimensions – Message and Environment – and that the distinction between them is useful for better understanding how nudges work. In the first part of this paper, we define what we mean by nudge, explain what Message and Environment are, argue that the distinction between them is conceptually robust, and show that it is also orthogonal to other distinctions advanced in the nudge literature. In the second part, we review some common types of nudges and show that they target either Message or Environment, or both dimensions of the choice architecture. We then apply the Message–Environment framework to discuss some features of Amazon’s website and, finally, we indicate how the proposed framework could help a choice architect to design a new choice architecture….(More)”.
Understanding Data Use: Building M&E Systems that Empower Users
Paper by Susan Stout, Vinisha Bhatia, and Paige Kirby: “We know that Monitoring and Evaluation (M&E) aims to support accountability and learning, in order to drive better outcomes… The paper, Understanding Data Use: Building M&E Systems that Empower Users, emphasizes how critical it is for decision makers to consider users’ decision space – from the institutional level all the way down to the technical level – in achieving data uptake.
Specifically, we call for smart mapping of this decision space: what do intended M&E users need, and what institutional factors shape those needs? With this understanding, we can better anticipate which types of data are most useful and invest in systems that support data-driven decision making and better outcomes.
Mapping the decision space is essential to understanding M&E data use. And as we’ve explored before, the development community has the opportunity to unlock existing resources to access more and better data that fits the needs of development actors working to meet the SDGs….(More)”.
Crowdsourcing – a New Paradigm of Organisational Learning of Public Organisation
Paper by Regina Lenart-Gansiniec and Łukasz Sułkowski: “Crowdsourcing is one of the new themes that have appeared in the last decade. Given its potential, more and more organisations are turning to it. It is perceived as an innovative method that can be used for solving problems, improving business processes, creating open innovations, building a competitive advantage, and increasing the transparency and openness of the organisation. Crowdsourcing is also conceptualised as a knowledge source for the organisation. The importance of crowdsourcing for organisational learning is one of the key themes in the recent crowdsourcing literature. Since 2008, public organisations have shown growing interest in crowdsourcing and in incorporating it into their activities.
This article responds to recommendations in the literature that identify crowdsourcing in public organisations as a new and exciting research area. Its aim is to present a new paradigm that links the levels of crowdsourcing with the levels of learning. The research methodology is based on an analysis of the literature and on examples of organisations that use crowdsourcing. The article presents a cross-sectional study of four Polish municipal offices that use the four types of crowdsourcing distinguished by J. Howe: collective intelligence, crowd creation, crowd voting, and crowdfunding. Semi-structured interviews were conducted with the management personnel of those offices. The results show that knowledge acquired from virtual communities allows a public organisation to anticipate the changes, expectations, and needs of citizens and to adapt to them. Crowdsourcing can therefore be considered a new and rapidly developing paradigm of organisational learning….(More)”
Origin Privacy: Protecting Privacy in the Big-Data Era
Paper by Helen Nissenbaum, Sebastian Benthall, Anupam Datta, Michael Carl Tschantz, and Piotr Mardziel: “Machine learning over big data poses challenges for our conceptualization of privacy. Such techniques can discover surprising and counterintuitive associations that take innocent-looking data and turn it into important inferences about a person. For example, buying carbon monoxide monitors has been linked to paying credit card bills, while buying chrome-skull car accessories predicts not doing so. Similarly, Target may have used the buying of scent-free hand lotion and vitamins as a sign that the buyer is pregnant. If we take pregnancy status to be private and assume that we should prohibit the sharing of information that can reveal that fact, then we have created an unworkable notion of privacy, one in which sharing any scrap of data may violate privacy.
Prior technical specifications of privacy depend on classifying certain types of information as private or sensitive; privacy policies in these frameworks limit access to data that allow inference of this sensitive information. As the above examples show, today’s data-rich world creates a new kind of problem: it is difficult if not impossible to guarantee that information does not allow inference of sensitive topics. This makes information flow rules based on information topic unstable.
We address the problem of providing a workable definition of private data that takes into account emerging threats to privacy from large-scale data collection systems. We build on Contextual Integrity and its claim that privacy is appropriate information flow, or flow according to socially or legally specified rules.
As in other adaptations of Contextual Integrity (CI) to computer science, the parameterization of social norms in CI is translated into a logical specification. In this work, we depart from CI by considering rules that restrict information flow based on its origin and provenance, instead of on its type, topic, or subject.
We call this concept of privacy as adherence to origin-based rules Origin Privacy. Origin Privacy rules can be found in some existing data protection laws. This motivates the computational implementation of origin-based rules for the simple purpose of compliance engineering. We also formally model Origin Privacy to determine what security properties it guarantees relative to the concerns that motivate it….(More)”.
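To make the origin-based idea concrete, here is a minimal sketch of flow checking keyed to provenance rather than topic. It is an illustration in the spirit of the paper, not the authors’ formal model; the origins, recipients, and policy entries are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Datum:
    value: str
    origin: str  # provenance tag: the context in which the datum was collected

# Hypothetical origin-based policy: each entry names an (origin, recipient)
# pair that constitutes an appropriate flow; anything unlisted is denied.
ALLOWED_FLOWS = {
    ("retail_purchase_history", "fraud_detection"),
    ("retail_purchase_history", "order_fulfillment"),
    # No entry routes purchase history to "marketing_inference", no matter
    # what sensitive topics the data might turn out to reveal.
}

def may_flow(datum: Datum, recipient: str) -> bool:
    """Decide a flow by the datum's origin alone, never by its topic."""
    return (datum.origin, recipient) in ALLOWED_FLOWS

lotion = Datum("scent-free hand lotion", origin="retail_purchase_history")
print(may_flow(lotion, "fraud_detection"))      # True
print(may_flow(lotion, "marketing_inference"))  # False
```

Because the rule never inspects what the datum is about, it stays stable even when machine learning makes new topics inferable from old data.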
‘To own or not to own?’ A study on the determinants and consequences of alternative intellectual property rights arrangements in crowdsourcing for innovation contests
Paper by Nuran Acur, Mariangela Piazza and Giovanni Perrone: “Firms are increasingly engaging in crowdsourcing for innovation to access new knowledge beyond their boundaries; however, scholars are no closer to understanding what guides seeker firms in deciding the level at which to acquire rights from solvers and the effect that this decision has on the performance of crowdsourcing contests.
Integrating Property Rights Theory and the problem-solving perspective, whilst leveraging exploratory interviews and observations, we build a theoretical framework to examine how specific attributes of the technical problem broadcast affect the seekers’ choice between alternative intellectual property rights (IPR) arrangements that call for acquiring or licensing-in IPR from external solvers (i.e., with high and low degrees of ownership, respectively). Each technical problem differs in the knowledge required to solve it as well as in the stage of the innovation process at which it occurs, and seeker firms pay close attention to these characteristics when deciding on the IPR arrangement for their contests.
In addition, we analyze how this choice between acquiring and licensing-in IPR, in turn, influences the performance of the contest. We empirically test our hypotheses by analyzing a unique dataset of 729 challenges broadcast on the InnoCentive platform from 2010 to 2016. Our results indicate that challenges related to technical problems in later stages of the innovation process are positively related to the seekers’ preference for IPR arrangements with a high level of ownership, while technical problems involving a higher number of knowledge domains are not.
Moreover, we find that IPR arrangements with a high level of ownership negatively affect solvers’ participation, and that the IPR arrangement plays a mediating role between the attributes of the technical problem and the solvers’ self-selection process. Our article contributes to the open innovation and crowdsourcing literature and provides practical implications for both managers and contest organizers….(More)”.
Citizen science, public policy
Paper by Christi J. Guerrini, Mary A. Majumder, Meaganne J. Lewellyn, and Amy L. McGuire in Science: “Citizen science initiatives that support collaborations between researchers and the public are flourishing. As a result of this enhanced role of the public, citizen science demonstrates more diversity and flexibility than traditional science and can encompass efforts that have no institutional affiliation, are funded entirely by participants, or continuously or suddenly change their scientific aims.
But these structural differences have regulatory implications that could undermine the integrity, safety, or participatory goals of particular citizen science projects. Thus far, citizen science appears to be addressing regulatory gaps and mismatches through voluntary actions of thoughtful and well-intentioned practitioners.
But as citizen science continues to surge in popularity and increasingly engage divergent interests, vulnerable populations, and sensitive data, it is important to consider the long-term effectiveness of these private actions and whether public policies should be adjusted to complement or improve on them. Here, we focus on three policy domains that are relevant to most citizen science projects: intellectual property (IP), scientific integrity, and participant protections….(More)”.
What is mechanistic evidence, and why do we need it for evidence-based policy?
Paper by Caterina Marchionni and Samuli Reijula: “It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so. Although we agree with the spirit of this proposal, we argue that the underlying conception of mechanistic evidence, as evidence different in kind from correlational, difference-making, or statistical evidence, does not correctly capture the role that information about mechanisms should play in evidence-based policy. We offer an alternative account of mechanistic evidence as information concerning the causal pathway connecting the policy intervention to its outcome. Not only can this be analyzed as evidence of difference-making; it is also to be found at any level and is obtainable by a broad range of methods, both experimental and observational. Using behavioral policy as an illustration, we draw out the implications of this revised understanding of mechanistic evidence for debates concerning policy extrapolation, evidence hierarchies, and evidence integration…(More)”.
The Risks of Dangerous Dashboards in Basic Education
Lant Pritchett at the Center for Global Development: “On June 1, 2009, Air France flight 447 from Rio de Janeiro to Paris crashed into the Atlantic Ocean, killing all 228 people on board. While the Airbus A330 was flying on autopilot, the speed readings received by the on-board navigation computers began to conflict, almost certainly because the pitot tubes responsible for measuring airspeed had iced over. Since the autopilot could not resolve the conflicting signals and hence did not know how fast the plane was actually going, it turned control of the plane over to the two first officers (the captain was out of the cockpit). Subsequent flight-simulator trials replicating the conditions of the flight concluded that, had the pilots done nothing at all, everyone would have lived: nothing was actually wrong; only the indicators were faulty, not the actual speed. But, tragically, the pilots didn’t do nothing….
What is the connection to education?
Many countries’ systems of basic education are in “stall” condition.
A recent paper by Beatty et al. (2018) uses information from the Indonesia Family Life Survey, a representative household survey that has been carried out in several waves with the same individuals since 2000 and contains information on whether individuals can answer simple arithmetic questions. Figure 1, showing the relationship between the level of schooling and the probability of answering a typical question correctly, reveals two shocking results.
First, the likelihood that a person can answer a simple mathematics question correctly differs by only about 20 percentage points between individuals who have completed less than primary school (<PS), who answer correctly (adjusted for guessing) about 20 percent of the time, and those who have completed senior secondary school or more (>=SSS), who answer correctly only about 40 percent of the time. These are simple multiple-choice questions, like whether 56/84 is the same fraction as (can be reduced to) 2/3, and whether 1/3-1/6 equals 1/6. This means that in an entire year of schooling, fewer than 2 additional children per 100 gain the ability to answer simple arithmetic questions.
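The “fewer than 2 per 100” figure follows from simple arithmetic, assuming (roughly, for the Indonesian system) about 12 years of schooling separate the <PS and >=SSS groups:

```latex
\frac{40\% - 20\%}{\approx 12\ \text{years of schooling}} \approx 1.7\ \text{percentage points per year}
```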
Second, this incredibly poor performance in 2000 got worse by 2014. …
What has this got to do with education dashboards? The way large bureaucracies prefer to work is to specify process compliance and inputs, and then measure those as a means of driving performance. This logistical mode of managing an organization works best when both process compliance and inputs are easily “observable” in the economist’s sense of being verifiable, contractible, and adjudicable. This leads to attention to processes and inputs that are “thin” in the Clifford Geertz sense (adopted by James Scott as his primary definition of how a “high modern” bureaucracy, and hence the state, “sees” the world). So in education one would specify easily observable inputs like textbook availability, class size, and school infrastructure. Even if one were talking about the “quality” of schooling, a large bureaucracy would want this, too, reduced to “thin” indicators, like the fraction of teachers with a given type of formal degree, or process-compliance measures, like whether teachers were hired based on some formal assessment.
Those involved in schooling can then become obsessed with their dashboards and the “thin” progress being tracked, and easily ignore the loud warning signals saying: Stall!…(More)”.
Mapping the Privacy-Utility Tradeoff in Mobile Phone Data for Development
Paper by Alejandro Noriega-Campero, Alex Rutherford, Oren Lederman, Yves-Alexandre de Montjoye, and Alex Pentland: “Today’s age of data holds high potential to enhance the way we pursue and monitor progress in the fields of development and humanitarian action. We study the relation between data utility and privacy risk in large-scale behavioral data, focusing on mobile phone metadata as a paradigmatic domain. To measure utility, we survey experts about the value of mobile phone metadata at various spatial and temporal granularity levels. To measure privacy, we propose a formal and intuitive measure of reidentification risk, the information ratio, and compute it at each granularity level. Our results confirm the existence of a stark tradeoff between data utility and reidentifiability, in which the most valuable datasets are also the most prone to reidentification. When data is specified at the ZIP-code and hourly levels, outside knowledge of only 7% of a person’s data suffices for reidentification and retrieval of the remaining 93%. In contrast, in the least valuable dataset, specified at the municipality and daily levels, reidentification requires, on average, outside knowledge of 51% of a person’s data, or 31 data points, to retrieve the remaining 49%. Overall, our findings show that coarsening data directly erodes its value, and highlight the need to use data coarsening not as a stand-alone mechanism, but in combination with data-sharing models that provide adjustable degrees of accountability and security….(More)”.
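As a rough illustration of the mechanism behind these numbers, the toy simulation below measures how many of a user’s points an adversary must know to single that user out of a synthetic dataset at two granularities. It uses a naive uniqueness test, not the paper’s formal information-ratio measure, and every parameter is invented for the example:

```python
import random

random.seed(0)

N_USERS, N_POINTS = 200, 20

def make_traces(n_users, n_points, n_zips=100, n_hours=24 * 7):
    """Synthetic mobility traces: each point is a (ZIP code, hour-of-week) pair."""
    return {u: [(random.randrange(n_zips), random.randrange(n_hours))
                for _ in range(n_points)]
            for u in range(n_users)}

def coarsen(traces, zip_factor, time_factor):
    """Merge ZIP codes into municipality-sized areas and hours into days."""
    return {u: [(z // zip_factor, t // time_factor) for z, t in pts]
            for u, pts in traces.items()}

def avg_points_to_reidentify(traces, target, trials=10):
    """Average number of the target's points an adversary must learn before
    the target is the only user whose trace contains all of them."""
    trace_sets = {u: set(pts) for u, pts in traces.items()}
    total = 0
    for _ in range(trials):
        pool = list(traces[target])
        random.shuffle(pool)
        known = []
        for point in pool:
            known.append(point)
            matches = [u for u, s in trace_sets.items()
                       if all(k in s for k in known)]
            if matches == [target]:
                break
        total += len(known)
    return total / trials

fine = make_traces(N_USERS, N_POINTS)                  # ZIP-code, hourly
coarse = coarsen(fine, zip_factor=10, time_factor=24)  # municipality, daily

print("fine-grained:", avg_points_to_reidentify(fine, target=0), "points")
print("coarsened:  ", avg_points_to_reidentify(coarse, target=0), "points")
```

Coarsening shrinks the space of distinguishable points, so traces overlap more and singling out one user takes more outside knowledge; that is the tradeoff the paper quantifies against expert-assessed utility.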
A rationale for data governance as an approach to tackle recurrent drawbacks in open data portals
Conference paper by Juan Ribeiro Reis et al.: “Citizens and developers are gaining broad access to public data sources made available in open data portals. These machine-readable datasets enable the creation of applications that help the population in several ways, giving citizens the opportunity to participate actively in governance processes such as decision-making and policy-making.
While the number of open data portals has grown over the years, researchers have identified recurrent problems with the data they provide, such as a lack of data standards, difficulty of access, and poor understandability. Such issues hinder the effective use of the data. Several works in the literature propose different approaches to mitigating these issues, based on novel or well-known data management techniques.
However, there is a lack of general frameworks for tackling these problems. Data governance, on the other hand, has been applied in large companies to manage data problems, ensuring that data meets business needs and becomes an organizational asset. In this paper, we first highlight the main drawbacks pointed out in the literature for government open data portals, and then show how data governance can tackle many of the issues identified…(More)”.