Introduction to Special Issue of Cultural Analytics by Amelia Acker and Tanya Clement: “Data have become pervasive in research in the humanities and the social sciences. New areas, objects, and situations for study have developed; and new methods for working with data are shepherded by new epistemologies and (potential) paradigm shifts. But data didn’t just happen to us. We have happened to data. In every field, scholars are drawing boundaries between data and humans as if making meaning with data is innocent work. But these boundaries are never innocent. Questions are emerging about the relationships of culture to data—urgent questions that focus on the codification (or code-ification) of social and cultural bias and the erosion of human agency, subjectivity, and identity.
For this special issue of Cultural Analytics we invited submissions to respond to these concerns as they relate to the proximity and distance between the creation of data and its collection; the nature of data as object or content; modes and contexts of data circulation, dissemination and preservation; histories and imaginary data futures; data expertise; data and technological progressivism; the cultivation and standardization of data; and the cultures, communities, and consciousness of data production. The contributions we received ranged in type from research or theory articles to data reviews and opinion pieces responding to the theme of “data cultures”. Each contribution asks questions we should all be asking: What is the role we play in the data cultures/culture as data we form around sociomaterial practices? How can we better understand how these practices effect, and affect, the materialization of subjects, objects, and the relations between them? How can we engage our data culture(s) in practical, critical, and generative ways? As Karen Barad writes, “We are responsible for the world in which we live not because it is an arbitrary construction of our choosing, but because it is sedimented out of particular practices that we have a role in shaping.”1 Ultimately, our contributors are focused on this central concern: where is our agency in the responsibility of shaping data cultures? What role can scholarship play in better understanding our culture as data?…(More)”.
Paper by Lucia Savage, Martin Gaynor and Julie Adler-Milstein: “There are obvious benefits to having patients’ health information flow across health providers. Providers will have more complete information about patients’ health and treatment histories, allowing them to make better treatment recommendations, and avoid unnecessary and duplicative testing or treatment. This should result in better and more efficient treatment, and better health outcomes. Moreover, the federal government has provided substantial incentives for the exchange of health information. Since 2009, the federal government has spent more than $40 billion to ensure that most physicians and hospitals use electronic health records, and to incentivize the use of electronic health information and health information exchange (the enabling statute is the Health Information Technology for Economic and Clinical Health Act), and in 2016 authorized substantial fines for failing to share appropriate information.
Yet, in spite of these incentives and the clear benefits to patients, the exchange of health information remains limited. There is evidence that this limited exchange is due in part to providers and platforms attempting to retain, rather than share, information (“information blocking”). In this article we examine legal and business reasons why health information may not be flowing. In particular, we discuss incentives providers and platforms can have for information blocking as a means to maintain or enhance their market position and thwart competition. Finally, we recommend steps to better understand whether the absence of information exchange is due to information blocking that harms competition and consumers….(More)”
Justin G. Schuetz and Alison Johnston at PNAS: “Efforts to mitigate the current biodiversity crisis require a better understanding of how and why humans value other species. We use Internet query data and citizen science data to characterize public interest in 621 bird species across the United States. We estimate the relative popularity of different birds by quantifying how frequently people use Google to search for species, relative to the rates at which they are encountered in the environment.
In intraspecific analyses, we also quantify the degree to which Google searches are limited to, or extend beyond, the places in which people encounter each species. The resulting metrics of popularity and geographic specificity of interest allow us to define aspects of relationships between people and birds within a cultural niche space. We then estimate the influence of species traits and socially constructed labels on niche positions to assess the importance of observations and ideas in shaping public interest in birds.
Our analyses show clear effects of migratory strategy, color, degree of association with bird feeders, and, especially, body size on niche position. They also indicate that cultural labels, including “endangered,” “introduced,” and, especially, “team mascot,” are strongly associated with the magnitude and geographic specificity of public interest in birds. Our results provide a framework for exploring complex relationships between humans and other species and enable more informed decision-making across diverse bird conservation strategies and goals….(More)”.
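The core metric described above, search interest relative to encounter rates, can be sketched in a few lines. This is an illustrative toy only: the species names, counts, and log-ratio form are assumptions for demonstration, not the authors' actual data or model.

```python
import math

# Hypothetical inputs (not real data): monthly Google searches and
# encounters per 1,000 citizen-science checklists for each species.
species = {
    "Bald Eagle": (450_000, 120.0),
    "House Sparrow": (90_000, 610.0),
    "Cerulean Warbler": (8_000, 2.5),
}

def relative_popularity(searches, encounter_rate):
    """Log-ratio of search interest to encounter rate: positive, larger
    values mean a species is searched for far more often than it is seen."""
    return math.log(searches / encounter_rate)

scores = {name: relative_popularity(s, e) for name, (s, e) in species.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

On these made-up numbers, a rarely encountered but heavily searched species (the warbler) scores far above an abundant but little-searched one (the sparrow), which is the kind of contrast the popularity axis of the cultural niche space is meant to capture.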
Paper by Juliane Jarke: “The purpose of this paper is to review interventions/methods for engaging older adults in meaningful digital public service design by enabling them to engage critically and productively with open data and civic tech.
The paper evaluates data walks as a method for engaging non-tech-savvy citizens in co-design work. These were evaluated along a framework considering how such interventions allow for sharing control (e.g. over design decisions), sharing expertise and enabling change.
Within a co-creation project, different types of data walks may be conducted, including ideation walks, data co-creation walks or user test walks. These complement each other with respect to how they facilitate the sharing of control and expertise, and enable change for a variety of older citizens.
Data walks are a low-threshold method, potentially enabling a variety of citizens to engage in co-design activities relating to open government and civic tech.
Such methods address the digital divide and further social participation of non-tech-savvy citizens. They value the resources and expertise of older adults as co-designers and partners, and counter stereotypical ideas about age and ageing….(More)”.
Paper by Ivo D Dinov et al: “The UK Biobank is a rich national health resource that provides enormous opportunities for international researchers to examine, model, and analyze census-like multisource healthcare data. The archive presents several challenges related to aggregation and harmonization of complex data elements, feature heterogeneity and salience, and health analytics. Using 7,614 imaging, clinical, and phenotypic features of 9,914 subjects we performed deep computed phenotyping using unsupervised clustering and derived two distinct sub-cohorts. Using parametric and nonparametric tests, we determined the top 20 most salient features contributing to the cluster separation. Our approach generated decision rules to predict the presence and progression of depression or other mental illnesses by jointly representing and modeling the significant clinical and demographic variables along with the derived salient neuroimaging features. We reported consistency and reliability measures of the derived computed phenotypes and the top salient imaging biomarkers that contributed to the unsupervised clustering. This clinical decision support system identified and utilized holistically the most critical biomarkers for predicting mental health, e.g., depression. External validation of this technique on different populations may lead to reducing healthcare expenses and improving the processes of diagnosis, forecasting, and tracking of normal and pathological aging….(More)”.
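The pipeline described, unsupervised clustering into two sub-cohorts followed by ranking the features that best separate them, can be sketched on synthetic data. This is a minimal illustration under stated assumptions (plain two-cluster k-means and a Welch-style separation statistic), not the study's actual method, features, or data.

```python
import numpy as np

# Fake a small feature matrix; the real study used 7,614 features for
# 9,914 subjects. Two features secretly separate two sub-cohorts.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
X[:100, 0] += 8.0   # feature 0: strong hidden separation
X[:100, 3] += 2.0   # feature 3: weaker hidden separation

def kmeans_two(X, iters=20):
    """Plain two-cluster k-means (Lloyd's algorithm)."""
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in (0, 1):
            centers[k] = X[labels == k].mean(0)
    return labels

def salience(X, labels):
    """Rank features by a Welch-style statistic between the two clusters."""
    a, b = X[labels == 0], X[labels == 1]
    t = np.abs(a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0) / len(a) + b.var(0) / len(b))
    return np.argsort(-t)

labels = kmeans_two(X)          # derive two sub-cohorts, unsupervised
top = salience(X, labels)       # then ask which features drove the split
print("most salient features:", top[:3])
```

The point of the sketch is the ordering of steps: clustering is done first without labels, and the salience test afterwards recovers the features (here 0, then 3) that explain the cluster separation, mirroring the paper's move from computed phenotypes to top salient biomarkers.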
Paper by Hannah Bloch-Wehba: “Federal, state, and local governments increasingly depend on automated systems — often procured from the private sector — to make key decisions about civil rights and civil liberties. When individuals affected by these decisions seek access to information about the algorithmic methodologies that produced them, governments frequently assert that this information is proprietary and cannot be disclosed.
Recognizing that opaque algorithmic governance poses a threat to civil rights and liberties, scholars have called for a renewed focus on transparency and accountability for automated decision making. But scholars have neglected a critical avenue for promoting public accountability and transparency for automated decision making: the law of access to government records and proceedings. This Article fills this gap in the literature, recognizing that the Freedom of Information Act, its state equivalents, and the First Amendment provide unappreciated legal support for algorithmic transparency.
The law of access performs three critical functions in promoting algorithmic accountability and transparency. First, by enabling any individual to challenge algorithmic opacity in government records and proceedings, the law of access can relieve some of the burden otherwise borne by parties who are often poor and under-resourced. Second, access law calls into question government’s procurement of algorithmic decision making technologies from private vendors, subject to contracts that include sweeping protections for trade secrets and intellectual property rights. Finally, the law of access can promote an urgently needed public debate on algorithmic governance in the public sector….(More)”.
Introduction to Special Issue of Politics and Governance by Sarah Giest and Reuben Ng: “Recent literature has been trying to grasp the extent to which big data applications affect the governance and policymaking of countries and regions (Boyd & Crawford, 2012; Giest, 2017; Höchtl, Parycek, & Schöllhammer, 2015; Poel, Meyer, & Schroeder, 2018). The discussion includes comparisons to e-government and evidence-based policymaking developments that existed long before the idea of big data entered the policy realm. This discussion, however, remains largely theoretical and overlooks some of the more practical consequences that come with the active use of data-driven applications. In fact, much of the work focuses on the input side of policymaking, looking at which data and technology enter the policy process, while very little attention is dedicated to the output side.
In short, how has big data shaped data governance and policymaking? The contributions to this thematic issue shed light on this question by looking at a range of factors, such as campaigning in the US election (Trish, 2018) or local government data projects (Durrant, Barnett, & Rempel, 2018). The goal is to unpack the mixture of big data applications and existing policy processes in order to understand whether these new tools and applications enhance or hinder policymaking….(More)”.
Paper by Hanna C Norberg: “Blockchain technology is still in its infancy, but already it has begun to revolutionize global trade. Its lure is irresistible because of the simplicity with which it can replace the standard methods of documentation, smooth out logistics, increase transparency, speed up transactions, and ameliorate the planning and tracking of trade.
Blockchain essentially provides the supply chain with an unalterable ledger of verified transactions, and thus enables trust every step of the way through the trade process. Every stakeholder involved in that process – from producer to warehouse worker to shipper to financial institution to recipient at the final destination – can trust that the information contained in that indelible ledger is accurate. Fraud will no longer be an issue, middlemen can be eliminated, shipments tracked, quality control maintained to the highest standards, and consumers can make decisions based on more than the price. Blockchain dramatically reduces the amount of paperwork involved, along with the myriad agents typically involved in the process, resulting in soaring efficiencies. Making the most of this new technology, however, requires solid policy. Most people have only a vague idea of what blockchain is. There needs to be a basic understanding of what blockchain can and can’t do, and how it works in the economy and in trade. Once policy-makers become familiar with the technology, they must move on to thinking about what technological issues could be mitigated, solved or improved.
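The “unalterable ledger” idea reduces to a simple data structure: each record commits to the hash of the previous one, so a retroactive edit to any shipment record invalidates every later hash. Below is a toy hash-chained ledger sketching that core mechanism only; a real supply-chain blockchain adds distributed consensus, digital signatures, and permissioning, and all record fields here are illustrative.

```python
import hashlib
import json

def add_entry(chain, record):
    """Append a record that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

chain = []
add_entry(chain, {"step": "producer", "goods": "coffee", "qty": 500})
add_entry(chain, {"step": "shipper", "vessel": "MV Example"})
print(verify(chain))              # True for the untampered chain
chain[0]["record"]["qty"] = 400   # a retroactive edit...
print(verify(chain))              # ...is detected: False
```

This is why every stakeholder along the chain can trust the record without trusting each other: no party can quietly rewrite an earlier entry without the discrepancy surfacing at verification.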
Governments need to explore blockchain’s potential through its use in public-sector projects that demonstrate its workings, its potential and its inevitable limitations. Although blockchain is not nearly as evolved now as the internet was in 2005, co-operation among all stakeholders on issues like taxonomy or policy guides on basic principles is crucial. Those stakeholders include government, industry, academia and civil society. All this must be done while keeping in mind the global nature of blockchain and that blockchain regulations need to be made in sync with regulations on other issues adjacent to the technology, such as electronic signatures. However, work can be done in the global arena through international initiatives and organizations such as the ISO….(More)”.
NBER Paper by Daron Acemoglu and Pascual Restrepo: “Artificial Intelligence is set to influence every aspect of our lives, not least the way production is organized. AI, as a technology platform, can automate tasks previously performed by labor or create new tasks and activities in which humans can be productively employed. Recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality and lower productivity growth. The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the “right” kind of AI with better economic and social outcomes….(More)”.
Paper by Abigail Devereaux: “Augmented and virtual reality, whose ubiquitous convergence is known as extended reality (XR), are technologies that imbue a user’s apparent surroundings with some degree of virtuality. In this article, we are interested in how social entrepreneurs might utilize innovative technological methods in XR to solve social problems presented by XR. Social entrepreneurship in XR presents novel challenges and opportunities not present in traditional regulatory spaces, as XR changes the environment in which choices are made.
Furthermore, the challenges presented by rapidly advancing XR may require much more agile forms of governance than are available from public institutions, even under widespread algorithmic governance. Social entrepreneurship in blockchain solutions may very well be able to meet some of these challenges, as we show. Thus, we expect a new infrastructure to arise to address challenges presented by XR, built by social entrepreneurs in XR, and that may eventually be used as an alternative to public instantiations of governance. Our central thesis is that the dynamic, immersive, and agile nature of XR both provides an unusually fertile ground for the development of alternative forms of governance and essentially necessitates this development, by contrast with the relatively inflexible institutions of public governance….(More)”.