Paper by Roman Lukyanenko: “As crowdsourced user-generated content becomes an important source of data for organizations, a pressing question is how to ensure that data contributed by ordinary people outside of traditional organizational boundaries is of suitable quality to be useful for both known and unanticipated purposes. This research examines the impact of different information quality management strategies, and corresponding data collection design choices, on key dimensions of information quality in crowdsourced user-generated content. We conceptualize a contributor-centric information quality management approach focusing on instance-based data collection. We contrast it with the traditional consumer-centric fitness-for-use conceptualization of information quality that emphasizes class-based data collection. We present laboratory and field experiments conducted in a citizen science domain that demonstrate trade-offs in the quality dimensions of accuracy, completeness (including discoveries), and precision between the two information management approaches and their corresponding data collection designs. Specifically, we show that instance-based data collection results in higher accuracy, dataset completeness, and number of discoveries, but this comes at the expense of lower precision. We further validate the practical value of the instance-based approach by conducting an applicability check with potential data consumers (scientists, in our context of citizen science). In a follow-up study, we show, using human experts and supervised machine learning techniques, that substantial precision gains on instance-based data can be achieved with post-processing. We conclude by discussing the benefits and limitations of different information quality management and data collection design choices for information quality in crowdsourced user-generated content…(More)”.
Regulating New Tech: Problems, Pathways, and People
Paper by Cary Coglianese: “New technologies bring with them many promises, but also a series of new problems. Even though these problems are new, they are not unlike the types of problems that regulators have long addressed in other contexts. The lessons from regulation in the past can thus guide regulatory efforts today. Regulators must focus on understanding the problems they seek to address and the causal pathways that lead to these problems. Then they must undertake efforts to shape the behavior of those in industry so that private sector managers focus on their technologies’ problems and take actions to interrupt the causal pathways. This means that regulatory organizations need to strengthen their own technological capacities; however, they need most of all to build their human capital. Successful regulation of technological innovation rests with top quality people who possess the background and skills needed to understand new technologies and their problems….(More)”.
Technology and democracy: a paradox wrapped in a contradiction inside an irony
Paper by Stephan Lewandowsky and Peter Pomerantsev: “Democracy is in retreat around the globe. Many commentators have blamed the Internet for this development, whereas others have celebrated the Internet as a tool for liberation, with each opinion being buttressed by supporting evidence. We try to resolve this paradox by reviewing some of the pressure points that arise between human cognition and the online information architecture, and their fallout for the well-being of democracy. We focus on the role of the attention economy, which has monetised dwell time on platforms, and the role of algorithms that satisfy users’ presumed preferences. We further note the inherent asymmetry in power between platforms and users that arises from these pressure points, and we conclude by sketching out the principles of a new Internet with democratic credentials….(More)”.
The role of artificial intelligence in disinformation
Paper by Noémi Bontridder and Yves Poullet: “Artificial intelligence (AI) systems are playing an overarching role in the disinformation phenomenon our world is currently facing. Such systems boost the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to a targeted audience and at scale by malicious stakeholders. This situation entails multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audit for very large online platforms’ recommender systems and content moderation. While with this proposal the Commission focusses on the regulation of content considered as problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the business model of the web, which is based on advertising revenues, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate to moderate disinformation content online, and even to detect such content, they may be more appropriate to counter the manipulation of the digital ecosystem….(More)”.
Evidence-Based Policymaking: What Human Service Agencies Can Learn from Implementation Science and Integrated Data Systems
Paper by Sharon Zanti & M. Lori Thomas: “The evidence-based policymaking movement compels government leaders and agencies to rely on the best available research evidence to inform policy and program decisions, yet how to do this effectively remains a challenge. This paper demonstrates how the core concepts from two emerging fields—Implementation Science (IS) and Integrated Data Systems (IDS)—can help human service agencies and their partners realize the aims of the evidence-based policymaking movement. An IS lens can help agencies address the role of context when implementing evidence-based practices, complement other quality and process improvement efforts, simultaneously study implementation and effectiveness outcomes, and guide de-implementation of ineffective policies. The IDS approach offers governance frameworks to support ethical and legal data use, provides high-quality administrative data for in-house analyses, and allows for more time-sensitive analyses of pressing agency needs. Ultimately, IS and IDS can support human service agencies in more efficiently using government resources to deliver the best available programs and policies to the communities they serve. Although this paper focuses on examples within the United States context, key concepts and guidance are intended to be broadly applicable across geographies, given that IS, IDS, and the evidence-based policymaking movement are globally relevant….(More)”.
Business Data Sharing through Data Marketplaces: A Systematic Literature Review
Paper by Antragama E. Abbas, Wirawan Agahari, Montijn van de Ven, Anneke Zuiderwijk, and Mark de Reuver: “Data marketplaces are expected to play a crucial role in tomorrow’s data economy, but such marketplaces are seldom commercially viable. Currently, there is no clear understanding of the knowledge gaps in data marketplace research, especially not of neglected research topics that may advance such marketplaces toward commercialization. This study provides an overview of the state-of-the-art of data marketplace research. We employ a Systematic Literature Review (SLR) approach to examine 133 academic articles and structure our analysis using the Service-Technology-Organization-Finance (STOF) model. We find that the extant data marketplace literature is primarily dominated by technical research, such as discussions about computational pricing and architecture. To move past the first stage of the platform’s lifecycle (i.e., platform design) to the second stage (i.e., platform adoption), we call for empirical research in non-technological areas, such as customer expected value and market segmentation….(More)”.
Creating and governing social value from data
Paper by Diane Coyle and Stephanie Diepeveen: “Data is increasingly recognised as an important economic resource for innovation and growth, but its innate characteristics mean market-based valuations inadequately account for the impact of its use on social welfare. This paper extends the literature on the value of data by providing a framework that takes into account its non-rival nature and integrates its inherent positive and negative externalities. Positive externalities consist of the scope for combining different data sets or enabling innovative uses of existing data, while negative externalities include potential privacy loss. We propose a framework integrating these and explore the policy trade-offs shaping net social welfare through a case study of geospatial data and the transport sector in the UK, where insufficient recognition of the trade-offs has contributed to suboptimal policy outcomes. We conclude by proposing methods for empirical approaches to social data valuation, essential evidence for decisions regarding the policy trade-offs. This article therefore lays important groundwork for novel approaches to the measurement of the net social welfare contribution of data, thereby illuminating opportunities for greater and more equitable creation of value from data in our societies….(More)”
Conceptualizing AI literacy: An exploratory review
Paper by Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel K. W. Chu, and Maggie Qiao Shen: “Artificial Intelligence (AI) has spread across industries (e.g., business, science, art, education) to enhance user experience, improve work efficiency, and create many future job opportunities. However, public understanding of AI technologies and how to define AI literacy is under-explored. This poses upcoming challenges for our next generation to learn about AI. On this note, an exploratory review was conducted to conceptualize the newly emerging concept of “AI literacy”, in search of a sound theoretical foundation to define, teach and evaluate AI literacy. Grounded in a review of 30 existing peer-reviewed articles, this study proposed four aspects (i.e., know and understand, use, evaluate, and ethical issues) for fostering AI literacy based on the adaptation of classic literacies. This study sheds light on the consolidated definition, teaching, and ethical concerns of AI literacy, establishing the groundwork for future research such as competency development and assessment criteria on AI literacy….(More)”.
‘Anyway, the dashboard is dead’: On trying to build urban informatics
Paper by Jathan Sadowski: “How do the idealised promises and purposes of urban informatics compare to the material politics and practices of their implementation? To answer this question, I ethnographically trace the development of two data dashboards by strategic planners in an Australian city over the course of 2 years. By studying this techno-political process from its origins onward, I uncovered an interesting story of obdurate institutions, bureaucratic momentum, unexpected troubles, and, ultimately, frustration and failure. These kinds of stories, which often go untold in the annals of innovation, contrast starkly with more common framings of technological triumph and transformation. They also, I argue, reveal much more about how techno-political systems are actualised in the world…(More)”.
Public Crowdsourcing: Analyzing the Role of Government Feedback on Civic Digital Platforms
Paper by Lisa Schmidthuber, Dennis Hilgers, and Krithika Randhawa: “Government organizations increasingly use crowdsourcing platforms to interact with citizens and integrate their requests in designing and delivering public services. Government usually provides feedback to individual users on whether the request can be considered. Drawing on attribution theory, this study asks how the causal attributions of the government response affect continued participation in crowdsourcing platforms. To test our hypotheses, we use a 7-year dataset of both online requests from citizens to government and government responses to citizen requests. We focus on citizen requests that are denied by government, and find that stable and uncontrollable attributions of the government response have a negative effect on future participation behavior. Also, a local government’s locus of causality negatively affects continued participation. This study contributes to research on the role of responsiveness in digital interaction between citizens and government and highlights the importance of rationale transparency to sustain citizen participation…(More)”.