Open-access reformers launch next bold publishing plan


Article by Layal Liverpool: “The group behind the radical open-access initiative Plan S has announced its next big plan to shake up research publishing — and this one could be bolder than the first. It wants all versions of an article and its associated peer-review reports to be published openly from the outset, without authors paying any fees, and for authors, rather than publishers, to decide when and where to first publish their work.

The group of influential funding agencies, called cOAlition S, has over the past five years already caused upheaval in the scholarly publishing world by pressuring more journals to allow immediate open-access publishing. Its new proposal, prepared by a working group of publishing specialists and released on 31 October, puts forward an even broader transformation in the dissemination of research.

It outlines a future “community-based” and “scholar-led” open-research communication system (see go.nature.com/45zyjh) in which publishers are no longer gatekeepers that reject submitted work or determine first publication dates. Instead, authors would decide when and where to publish the initial accounts of their findings, both before and after peer review. Publishers would become service providers, paid to conduct processes such as copy-editing, typesetting and handling manuscript submissions…(More)”.

Your Face Belongs to Us


Book by Kashmir Hill: “…[Hill] was skeptical when she got a tip about a mysterious app called Clearview AI that claimed it could, with 99 percent accuracy, identify anyone based on just one snapshot of their face. The app could supposedly scan a face and, in just seconds, surface every detail of a person’s online life: their name, social media profiles, friends and family members, home address, and photos that they might not have even known existed. If it was everything it claimed to be, it would be the ultimate surveillance tool, and it would open the door to everything from stalking to totalitarian state control. Could it be true?

In this riveting account, Hill tracks the improbable rise of Clearview AI, helmed by Hoan Ton-That, an Australian computer engineer, and Richard Schwartz, a former Rudy Giuliani advisor, and its astounding collection of billions of faces from the internet. The company was boosted by a cast of controversial characters, including conservative provocateur Charles C. Johnson and billionaire Donald Trump backer Peter Thiel—who all seemed eager to release this society-altering technology on the public. Google and Facebook decided that a tool to identify strangers was too radical to release, but Clearview forged ahead, sharing the app with private investors, pitching it to businesses, and offering it to thousands of law enforcement agencies around the world.
      
Facial recognition technology has been quietly growing more powerful for decades. This technology has already been used in wrongful arrests in the United States. Unregulated, it could expand the reach of policing, as it has in China and Russia, to a terrifying, dystopian level.
     
Your Face Belongs to Us is a gripping true story about the rise of a technological superpower and an urgent warning that, in the absence of vigilance and government regulation, Clearview AI is one of many new technologies that challenge what Supreme Court Justice Louis Brandeis once called “the right to be let alone.”…(More)”.

Choosing AI’s Impact on the Future of Work 


Article by Daron Acemoglu & Simon Johnson: “…Too many commentators see the path of technology as inevitable. But the historical record is clear: technologies develop according to the vision and choices of those in positions of power. As we document in Power and Progress: Our 1,000-Year Struggle over Technology and Prosperity, when these choices are left entirely in the hands of a small elite, you should expect that group to receive most of the benefits, while everyone else bears the costs—potentially for a long time.

Rapid advances in AI threaten to eliminate many jobs, and not just those of writers and actors. Jobs with routine elements, such as in regulatory compliance or clerical work, and those that involve simple data collection, data summary, and writing tasks are likely to disappear.

But there are still two distinct paths that this AI revolution could take. One is the path of automation, based on the idea that AI’s role is to perform tasks as well as or better than people. Currently, this vision dominates in the US tech sector, where Microsoft and Google (and their ecosystems) are cranking hard to create new AI applications that can take over as many human tasks as possible.

The negative impact on people along the “just automate” path is easy to predict from prior waves of digital technologies and robotics. It was these earlier forms of automation that contributed to the decline of American manufacturing employment and the huge increase in inequality over the last four decades. If AI intensifies automation, we are very likely to get more of the same—a gap between capital and labor, more inequality between the professional class and the rest of the workers, and fewer good jobs in the economy….(More)”

Automating Empathy 


Open Access Book by Andrew McStay: “We live in a world where artificial intelligence and the intensive use of personal data have become normalized. Companies across the world are developing and launching technologies to infer and interact with emotions, mental states, and human conditions. However, the methods and means of mediating information about people and their emotional states are incomplete and problematic.

Automating Empathy offers a critical exploration of technologies that sense intimate dimensions of human life and the modern ethical questions raised by attempts to perform and simulate empathy. It traces the ascendance of empathic technologies from their origins in physiognomy and pathognomy to the modern day and explores technologies in nations with non-Western ethical histories and approaches to emotion, such as Japan. The book examines applications of empathic technologies across sectors such as education, policing, and transportation, and considers key questions of everyday use such as the integration of human-state sensing in mixed reality, the use of neurotechnologies, and the moral limits of using data gleaned through automated empathy. Ultimately, Automating Empathy outlines the key principles necessary to usher in a future where automated empathy can serve and do good…(More)”

A standardised differential privacy framework for epidemiological modeling with mobile phone data


Paper by Merveille Koissi Savi et al: “During the COVID-19 pandemic, the use of mobile phone data for monitoring human mobility patterns has become increasingly common, both to study the impact of travel restrictions on population movement and to inform epidemiological modeling. Despite the importance of these data, the use of location information to guide public policy can raise issues of privacy and ethical use. Studies have shown that simple aggregation does not protect the privacy of an individual, and there are no universal standards for aggregation that guarantee anonymity. Newer methods, such as differential privacy, can provide statistically verifiable protection against identifiability but have been largely untested as inputs for compartment models used in infectious disease epidemiology. Our study examines the application of differential privacy as an anonymisation tool in epidemiological models, studying the impact of adding quantifiable statistical noise to mobile phone-based location data on the bias of ten common epidemiological metrics. We find that many epidemiological metrics are preserved and remain close to their non-private values when the scale of the noise added to a count transition matrix is less than 20, which corresponds to a privacy loss parameter ϵ = 0.05 per release. We show that differential privacy offers a robust approach to preserving individual privacy in mobility data while providing useful population-level insights for public health. Importantly, we have built a modular software pipeline to facilitate the replication and expansion of our framework…(More)”.
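
To make the mechanism concrete, here is a minimal sketch in Python of the kind of private release the paper evaluates: Laplace noise, scaled by the privacy loss parameter ϵ, added to each cell of a count transition matrix. This is an illustration of the standard Laplace mechanism under stated assumptions (sensitivity of 1 per individual, rounding and clipping of the released counts), not the authors’ actual pipeline; all names and figures are hypothetical.

```python
import numpy as np

def private_transition_counts(counts: np.ndarray, epsilon: float,
                              sensitivity: float = 1.0,
                              rng: np.random.Generator | None = None) -> np.ndarray:
    """Release an epsilon-DP version of a count transition matrix.

    Adds Laplace(sensitivity / epsilon) noise to every cell -- the standard
    epsilon-DP mechanism for count queries -- then rounds and clips at zero
    so the released matrix still looks like a table of counts.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # epsilon = 0.05 gives a noise scale of 20
    noisy = counts + rng.laplace(loc=0.0, scale=scale, size=counts.shape)
    return np.clip(np.round(noisy), 0, None)

# Hypothetical example: trips between three regions in one release window.
true_counts = np.array([[120, 15, 4],
                        [10, 230, 8],
                        [3, 12, 95]])
private_counts = private_transition_counts(true_counts, epsilon=0.05)
```

Note that with sensitivity 1 the Laplace noise scale is 1/ϵ, so ϵ = 0.05 per release corresponds to the noise scale of 20 cited in the abstract: smaller ϵ means stronger privacy and noisier counts.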

Data Equity: Foundational Concepts for Generative AI


WEF Report: “This briefing paper focuses on data equity within foundation models, both in terms of the impact of Generative AI (genAI) on society and on the further development of genAI tools.

GenAI holds immense potential to drive digital and social innovation, such as improving efficiency, enhancing creativity and augmenting existing data. GenAI has the potential to democratize access to and usage of technologies. However, left unchecked, it could deepen inequities. With the advent of genAI significantly increasing the rate at which AI is deployed and developed, exploring frameworks for data equity is more urgent than ever.

The goals of the briefing paper are threefold: to establish a shared vocabulary to facilitate collaboration and dialogue; to scope initial concerns to establish a framework for inquiry on which stakeholders can focus; and to shape future development of promising technologies.

The paper represents a first step in exploring and promoting data equity in the context of genAI. The proposed definitions, framework and recommendations are intended to proactively shape the development of promising genAI technologies…(More)”.

Artificial intelligence in government: Concepts, standards, and a unified framework


Paper by Vincent J. Straub, Deborah Morgan, Jonathan Bright, Helen Margetts: “Recent advances in artificial intelligence (AI), especially in generative language modelling, hold the promise of transforming government. Given the advanced capabilities of new AI systems, it is critical that these are embedded using standard operational procedures and clear epistemic criteria, and that they behave in alignment with the normative expectations of society. Scholars in multiple domains have subsequently begun to conceptualize the different forms that AI applications may take, highlighting both their potential benefits and pitfalls. However, the literature remains fragmented, with researchers in social science disciplines like public administration and political science, and the fast-moving fields of AI, ML, and robotics, all developing concepts in relative isolation. Although there are calls to formalize the emerging study of AI in government, a balanced account that captures the full depth of theoretical perspectives needed to understand the consequences of embedding AI into a public sector context is lacking. Here, we unify efforts across social and technical disciplines by first conducting an integrative literature review to identify and cluster 69 key terms that frequently co-occur in the multidisciplinary study of AI. We then build on the results of this bibliometric analysis to propose three new multifaceted concepts for understanding and analysing AI-based systems for government (AI-GOV) in a more unified way: (1) operational fitness, (2) epistemic alignment, and (3) normative divergence. Finally, we put these concepts to work by using them as dimensions in a conceptual typology of AI-GOV and connecting each with emerging AI technical measurement standards to encourage operationalization, foster cross-disciplinary dialogue, and stimulate debate among those aiming to rethink government with AI…(More)”.
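
As a toy illustration of the bibliometric step described above, the sketch below counts how often pairs of key terms are tagged on the same paper, converts co-occurrence into a distance, and clusters the terms hierarchically. The corpus, terms, and parameters are invented for illustration; the authors’ actual method and data may differ.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

# Toy corpus: each entry is the set of key terms tagged on one paper.
papers = [
    {"machine learning", "public administration", "accountability"},
    {"machine learning", "robotics", "automation"},
    {"public administration", "accountability", "transparency"},
    {"automation", "robotics", "machine learning"},
]

terms = sorted(set().union(*papers))
index = {t: i for i, t in enumerate(terms)}

# Symmetric co-occurrence matrix: how often two terms share a paper.
co = np.zeros((len(terms), len(terms)))
for paper in papers:
    for a, b in combinations(sorted(paper), 2):
        co[index[a], index[b]] += 1
        co[index[b], index[a]] += 1

# Map co-occurrence to distance (frequent pairs are "close") and cluster.
dist = 1.0 / (1.0 + co)
np.fill_diagonal(dist, 0.0)
condensed = dist[np.triu_indices(len(terms), k=1)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
for term, label in zip(terms, labels):
    print(label, term)
```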

A Feasibility Study of Differentially Private Summary Statistics and Regression Analyses with Evaluations on Administrative and Survey Data


Report by Andrés F. Barrientos, Aaron R. Williams, Joshua Snoke, Claire McKay Bowen: “Federal administrative data, such as tax data, are invaluable for research, but because of privacy concerns, access to these data is typically limited to select agencies and a few individuals. An alternative to sharing microlevel data is to allow individuals to query statistics without directly accessing the confidential data. This paper studies the feasibility of using differentially private (DP) methods to make certain queries while preserving privacy. We also include new methodological adaptations to existing DP regression methods for using new data types and returning standard error estimates. We define feasibility in terms of the impact of DP methods on analyses used to make public policy decisions and the accuracy of the queries according to several utility metrics. We evaluate the methods using Internal Revenue Service data and public-use Current Population Survey data and identify how specific data features might challenge some of these methods. Our findings show that DP methods are feasible for simple, univariate statistics but struggle to produce accurate regression estimates and confidence intervals. To the best of our knowledge, this is the first comprehensive statistical study of DP regression methodology on real, complex datasets, and the findings have significant implications for the direction of a growing research field and public policy…(More)”.
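
To give a flavour of the simple, univariate queries where the report finds DP methods feasible, here is a minimal sketch of a differentially private mean using the Laplace mechanism. It assumes the data are clamped to a publicly known range and the record count is public; the function name, bounds, and figures are illustrative, not the authors’ implementation.

```python
import numpy as np

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float,
            rng: np.random.Generator | None = None) -> float:
    """Epsilon-DP estimate of the mean of values clamped to [lower, upper].

    Clamping bounds each record's influence on the mean by
    (upper - lower) / n, so Laplace noise with scale sensitivity / epsilon
    satisfies epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    clamped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    return float(clamped.mean() + rng.laplace(scale=sensitivity / epsilon))

# Illustrative query: a private mean of incomes clamped at $500k, epsilon = 1.
incomes = np.array([42_000.0, 58_500.0, 71_000.0, 1_250_000.0, 39_000.0])
print(dp_mean(incomes, lower=0.0, upper=500_000.0, epsilon=1.0))
```

Regression is harder precisely because sensitivity must be bounded and noise propagated through multivariate estimators and their standard errors, which is consistent with the report’s finding that regression estimates and confidence intervals suffer most.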

Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence


The White House: “Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI…(More)”.

AI is Like… A Literature Review of AI Metaphors and Why They Matter for Policy


Paper by Matthijs M. Maas: “As AI systems have become increasingly capable and impactful, there has been significant public and policymaker debate over this technology’s impacts—and the appropriate legal or regulatory responses. Within these debates many have deployed—and contested—a dazzling range of analogies, metaphors, and comparisons for AI systems, their impact, or their regulation.

This report reviews why and how metaphors matter to both the study and practice of AI governance, in order to contribute to more productive dialogue and more reflective policymaking. It first reviews five stages at which different foundational metaphors play a role in shaping the processes of technological innovation, the academic study of their impacts, the regulatory agenda, the terms of the policymaking process, and legislative and judicial responses to new technology. It then surveys a series of cases where the choice of analogy materially influenced the regulation of internet issues, as well as (recent) AI law issues. The report then provides a non-exhaustive survey of 55 analogies that have been given for AI technology, and some of their policy implications. Finally, it discusses the risks of utilizing unreflexive analogies in AI law and regulation.

By disentangling the role of metaphors and frames in these debates, and the space of analogies for AI, this survey does not aim to argue against the use or role of analogies in AI regulation—but rather to facilitate more reflective and productive conversations on these timely challenges…(More)”.