Paper by Benjamin H. Detenber: “The city state of Singapore has a long history of social engineering efforts, yet only recently have social scientists and civil servants started to use behavioural insights (BI) to create ‘nudges’ and integrate them into the daily lives of citizens. Colloquially known as a nanny state for its extensive social programmes and sometimes heavy-handed approach to guiding social behaviour, Singapore is often regarded favourably by its neighbours in terms of its cleanliness, efficiency, and productivity. Yet how it manages its populace and the restrictions it imposes on unwanted behaviours are sometimes viewed sceptically by others in Asia and the West. Thus, many in the Singapore Civil Service have come to see nudging as a less coercive way to promote social welfare and well-being. This article reviews some of the latest actions in three areas: finance, health, and the environment. In discussing the range of nudging practices, their effectiveness will be assessed and some of the implications for society and individuals will be addressed. To the extent that Singapore can be considered a bellwether or harbinger, its use of nudges may offer a glimpse of what lies ahead for other countries in the region….(More)”.
Paper by Stefan Wojcik et al: “While digital trace data from sources like search engines hold enormous potential for tracking and understanding human behavior, these streams of data lack information about the actual experiences of those individuals generating the data. Moreover, most current methods ignore or under-utilize human processing capabilities that allow humans to solve problems not yet solvable by computers (human computation). We demonstrate how behavioral research, linking digital and real-world behavior, along with human computation, can be utilized to improve the performance of studies using digital data streams. This study looks at the use of search data to track prevalence of Influenza-Like Illness (ILI). We build a behavioral model of flu search based on survey data linked to users’ online browsing data. We then utilize human computation for classifying search strings. Leveraging these resources, we construct a tracking model of ILI prevalence that outperforms strong historical benchmarks using only a limited stream of search data and lends itself to tracking ILI in smaller geographic units. While this paper only addresses searches related to ILI, the method we describe has potential for tracking a broad set of phenomena in near real-time….(More)”
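The two-step pipeline the abstract describes — using human-computation labels to isolate flu-related search strings, then regressing official ILI rates on the resulting search volume — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; all query strings, labels, and ILI figures here are hypothetical toy data.

```python
def flu_search_volume(query_counts, human_labels):
    """Sum counts of queries that human raters classified as flu-related."""
    return sum(n for q, n in query_counts.items() if human_labels.get(q) == "flu")

def fit_linear(x, y):
    """Ordinary least squares for ILI ~ intercept + slope * search_volume."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope  # intercept, slope

# Toy weekly data: counts per search string, plus human-computation labels
# that separate flu-related queries from unrelated ones.
weeks = [
    {"flu symptoms": 120, "fever": 80, "movie times": 300},
    {"flu symptoms": 200, "fever": 150, "movie times": 290},
    {"flu symptoms": 310, "fever": 240, "movie times": 310},
]
labels = {"flu symptoms": "flu", "fever": "flu", "movie times": "other"}
volumes = [flu_search_volume(w, labels) for w in weeks]
ili = [1.0, 1.6, 2.4]  # hypothetical ILI rates (% of provider visits)
intercept, slope = fit_linear(volumes, ili)
predictions = [intercept + slope * v for v in volumes]
```

The actual tracking model in the paper also incorporates historical benchmarks and survey-linked behavioral data; this sketch only shows how human labels can turn a raw search stream into a usable predictor.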
Paper by Sabelo Mhlambi: “What is the measure of personhood and what does it mean for machines to exhibit human-like qualities and abilities? Furthermore, what are the human rights, economic, social, and political implications of using machines that are designed to reproduce human behavior and decision making? The question of personhood is one of the most fundamental questions in philosophy and it is at the core of the questions, and the quest, for an artificial or mechanical personhood.
The development of artificial intelligence has depended on the traditional Western view of personhood as rationality. However, the traditional view of rationality as the essence of personhood, designating how humans, and now machines, should model and approach the world, has always been marked by contradictions, exclusions, and inequality. It has shaped Western economic structures (capitalism’s free markets built on colonialism’s forced markets), political structures (modernity’s individualism imposed through coloniality), and discriminatory social hierarchies (racism and sexism as institutions embedded in enlightenment-era rationalized social and gender exclusions from full person status and economic, political, and social participation), which in turn shape the data, creation, and function of artificial intelligence. It is therefore unsurprising that the artificial intelligence industry reproduces these dehumanizations. Furthermore, the perceived rationality of machines obscures machine learning’s uncritical imitation of discriminatory patterns within its input data, and minimizes the role systematic inequalities play in harmful artificial intelligence outcomes….(More)”.
Paper by Manuel Morales et al: “COVID-19 testing, the cornerstone for effective screening and identification of COVID-19 cases, remains paramount as an intervention tool to curb the spread of COVID-19 both at local and national levels. However, the speed at which the pandemic struck and the response was rolled out, the widespread impact on healthcare infrastructure, the lack of sufficient preparation within the public health system, and the complexity of the crisis led to utter confusion among test-takers. Invasion of privacy remains a crucial concern, the user experience of test-takers remains poor, user friction affects behavior and discourages participation in testing programs, test efficacy has been overstated, and test results are poorly understood, resulting in inappropriate follow-up recommendations. Herein, we review the current landscape of COVID-19 testing, identify four key challenges, and discuss the consequences of the failure to address these challenges. The current infrastructure around testing and information propagation is highly privacy-invasive and does not leverage scalable digital components. In this work, we discuss challenges complicating the existing COVID-19 testing ecosystem and highlight the need to improve the testing experience for the user and reduce privacy invasions. Digital tools will play a critical role in resolving these challenges….(More)”.
Paper by Mahmood Jasim: “Local governments still depend on traditional town halls for community consultation, despite problems such as a lack of inclusive participation for attendees and difficulty for civic organizers to capture attendees’ feedback in reports. Building on a formative study with 66 town hall attendees and 20 organizers, we designed and developed CommunityClick, a community sourcing system that captures attendees’ feedback in an inclusive manner and enables organizers to author more comprehensive reports. During the meeting, in addition to recording meeting audio to capture vocal attendees’ feedback, we modify iClickers to give voice to reticent attendees by allowing them to provide real-time feedback beyond a binary signal. This information then automatically feeds into a meeting transcript augmented with attendees’ feedback and organizers’ tags. The augmented transcript along with a feedback-weighted summary of the transcript generated from text analysis methods is incorporated into an interactive authoring tool for organizers to write reports. From a field experiment at a town hall meeting, we demonstrate how CommunityClick can improve inclusivity by providing multiple avenues for attendees to share opinions. Additionally, interviews with eight expert organizers demonstrate CommunityClick’s utility in creating more comprehensive and accurate reports to inform critical civic decision-making. We discuss the possibility of integrating CommunityClick with town hall meetings in the future as well as expanding to other domains….(More)”.
Paper by Amy Kristin Sanders at the Journal of Civic Information: “As the U.S. has grappled with COVID-19, the government has resisted repeated requests to follow open records laws, which are essential to transparency. Current efforts to reduce access to death records and other public information amid the pandemic jeopardize government accountability and undermine the public’s trust. Given that COVID-19 has disproportionately affected low-income Americans, incarcerated populations and people of color, access to government-held data has serious implications for social justice. Importantly, those goals can be met without violating personal privacy. After analyzing state open records laws, court decisions and attorney general opinions, the author has developed a set of best practices for advocating access to death records to provide journalists and government watchdogs with important public health information that’s squarely in the public interest….(More)”.
Paper by Linus Dahlander and Henning Piezunka: “Crowdsourcing—asking an undefined group of external contributors to work on tasks—allows organizations to tap into the expertise of people around the world. Crowdsourcing is known to increase innovation and loyalty to brands, but many organizations struggle to leverage its potential, as our research shows. Most often this is because organizations fail to properly plan for all the different stages of crowd engagement. In this paper, we use several examples to explain these challenges and offer advice for how organizations can overcome them….(More)”.
Paper by Teresa M. Harrison and Luis Felipe Luna-Reyes: “While there is growing consensus that the analytical and cognitive tools of artificial intelligence (AI) have the potential to transform government in positive ways, it is also clear that AI challenges traditional government decision-making processes and threatens the democratic values within which they are framed. These conditions argue for conservative approaches to AI that focus on cultivating and sustaining public trust. We use the extended Brunswik lens model as a framework to illustrate the distinctions between policy analysis and decision making as we have traditionally understood and practiced them and how they are evolving in the current AI context along with the challenges this poses for the use of trustworthy AI. We offer a set of recommendations for practices, processes, and governance structures in government to provide for trust in AI and suggest lines of research that support them….(More)”.
Introduction to Special Issue of the Journal of Representative Democracy by Alice el-Wakil & Spencer McKay: “Despite controversy over recent referendums and initiatives, populists and social movements continue to call for the use of these popular vote processes. Most political and academic debates about whether these calls should be answered have adopted a dominant framework that focuses on whether we should favour ‘direct’ or ‘representative’ democracy. However, this framework obscures more urgent questions about whether, when, and how popular vote processes should be implemented in democratic systems. How do popular vote processes interact with representative institutions? And how could these interactions be democratized? The contributions in this special issue address these and related questions by replacing the framework of ‘direct democracy’ with systemic approaches. The normative contributions illustrate how these approaches enable the development of counternarratives about the value of popular vote processes and clarify the nature of the underlying ideals they should realize. The empirical contributions examine recent cases with a variety of methodological tools, demonstrating that systemic approaches attentive to context can generate new insights about the use of popular vote processes. This introduction puts these contributions into conversation to illustrate how a shift in approach establishes a basis for (re-)evaluating existing practices and guiding reforms so that referendums and initiatives foster democracy….(More)”.
Paper by Teruaki Hayashi et al: “In recent years, rather than enclosing data within a single organization, exchanging and combining data from different domains has become an emerging practice. Many studies have discussed the economic and utility value of data and data exchange, but the characteristics of data that contribute to problem solving through data combination have not been fully understood. In big data and interdisciplinary data combinations, large-scale data with many variables are expected to be used, and value is expected to be created by combining data as much as possible. In this study, we conduct three experiments to investigate the characteristics of data, focusing on the relationships between data combinations and the variables in each dataset, using empirical data shared by a local government. The results indicate that even datasets with only a few variables are frequently used to propose solutions for problem solving. Moreover, we found that even if the datasets in a solution do not have common variables, there are some well-established solutions to the problems. The findings of this study shed light on the mechanisms behind data combination for problem solving involving multiple datasets and variables…(More)”.
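The relationship the abstract examines — whether the datasets combined in a proposed solution share any variables — can be checked mechanically. The sketch below is a hypothetical illustration (the dataset names and variables are invented, not from the paper's local-government data): it computes the variables common to every dataset in a combination, so a solution with an empty intersection corresponds to the paper's case of combinations with no shared variables.

```python
def shared_variables(datasets):
    """Return the variable names common to every dataset in a combination.

    `datasets` maps a dataset name to its set of variable names.
    """
    variable_sets = [set(v) for v in datasets.values()]
    return set.intersection(*variable_sets)

# Hypothetical solution combining two municipal datasets that share a key.
linked = {
    "population_register": {"district", "age_group", "population"},
    "clinic_visits": {"district", "week", "visit_count"},
}
common = shared_variables(linked)  # {"district"}

# A combination with no common variables can still back a valid solution,
# per the paper's finding; the intersection is simply empty.
unlinked = {
    "bus_timetable": {"route", "departure_time"},
    "waste_collection": {"street", "pickup_day"},
}
no_common = shared_variables(unlinked)  # set()
```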