Language and the Rise of the Algorithm


Book by Jeffrey M. Binder: “Bringing together the histories of mathematics, computer science, and linguistic thought, Language and the Rise of the Algorithm reveals how recent developments in artificial intelligence are reopening an issue that troubled mathematicians well before the computer age: How do you draw the line between computational rules and the complexities of making systems comprehensible to people? By attending to this question, we come to see that the modern idea of the algorithm is implicated in a long history of attempts to maintain a disciplinary boundary separating technical knowledge from the languages people speak day to day.
 
Here Jeffrey M. Binder offers a compelling tour of four visions of universal computation that addressed this issue in very different ways: G. W. Leibniz’s calculus ratiocinator; a universal algebra scheme Nicolas de Condorcet designed during the French Revolution; George Boole’s nineteenth-century logic system; and the early programming language ALGOL, short for algorithmic language. These episodes show that symbolic computation has repeatedly become entangled in debates about the nature of communication. Machine learning, in its increasing dependence on words, erodes the line between technical and everyday language, revealing the urgent stakes underlying this boundary.
 
The idea of the algorithm is a levee holding back the social complexity of language, and it is about to break. This book is about the flood that inspired its construction…(More)”.

Research Methods in Deliberative Democracy


Book edited by Selen A. Ercan et al.: “… brings together a wide range of methods used in the study of deliberative democracy. It offers thirty-one different methods that scholars use for theorizing, measuring, exploring, or applying deliberative democracy. Each chapter presents one method by explaining its utility in deliberative democracy research and providing guidance on its application by drawing on examples from previous studies. The book hopes to inspire scholars to undertake methodologically robust, intellectually creative, and politically relevant research. It fills a significant gap in a rapidly growing field of research by assembling diverse methods and thereby expanding the range of methodological choices available to students, scholars, and practitioners of deliberative democracy…(More)”.

The Wireless Body


Article by Jeremy Greene: “Nearly half the US adult population will pass out at some point in their lives. Doctors call this “syncope,” and it is bread-and-butter practice for any emergency room or urgent care clinic. While most cases are benign—a symptom of dehydration or mistimed medication—syncope can also be a sign of something gone terribly wrong. It may be a symptom of a heart attack, a blood clot in the lungs, an embolus to the arteries supplying the brain, or a life-threatening arrhythmia. After a series of tests ruling out the worst, most patients go home without incident. Many of them also go home with a Holter monitor. 

The Holter monitor is a device about the size of a pack of cards that records the electrical activity of the heart over the course of a day or more. Since its invention more than half a century ago, it has become such a common object in clinical medicine that few pause to consider its origins. But, as the makers of new Wi-Fi and cloud-enabled devices, smartphone apps, and other “wearable” technologies claim to be revolutionizing the world of preventive health care, there is much to learn from the history of this older instrument of medical surveillance…(More)”.

Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants


Book by James Vincent: “From the cubit to the kilogram, the humble inch to the speed of light, measurement is a powerful tool that humans invented to make sense of the world. In this revelatory work of science and social history, James Vincent dives into its hidden world, taking readers from ancient Egypt, where measuring the annual depth of the Nile was an essential task, to the intellectual origins of the metric system in the French Revolution, and from the surprisingly animated rivalry between metric and imperial, to our current age of the “quantified self.” At every turn, Vincent is keenly attuned to the political consequences of measurement, exploring how it has also been used as a tool for oppression and control.

Beyond Measure reveals how measurement is not only deeply entwined with our experience of the world, but also how its history encompasses and shapes the human quest for knowledge…(More)”.

Science in Negotiation


Book by Jessica Espey on “The Role of Scientific Evidence in Shaping the United Nations Sustainable Development Goals, 2012-2015”: “This book explores the role of scientific evidence within United Nations (UN) deliberation by examining the negotiation of the Sustainable Development Goals (SDGs), endorsed by Member States in 2015. Using the SDGs as a case study, this book addresses a key gap in our understanding of the role of evidence in contemporary international policy-making. It is structured around three overarching questions: (1) how does scientific evidence influence multilateral policy development within the UN General Assembly; (2) how did evidence shape the goals and targets that constitute the SDGs; and (3) how did institutional arrangements and non-state actor engagements mediate the evidence-to-policy process in the development of the SDGs? The ultimate intention is to tease out lessons on global policy-making and to understand the influence of different evidence inputs and institutional factors in shaping outcomes.

To understand the value afforded to scientific evidence within multilateral deliberation, a conceptual framework is provided drawing upon literature from policy studies and political science, including recent theories of evidence-informed policy-making and new institutionalism. It posits that the success or failure of evidence informing global political processes rests upon the representation and access of scientific stakeholders, levels of community organisation, the framing and presentation of evidence, and time, including the duration over which evidence and key conceptual ideas are presented. Cutting across the discussion is the fundamental question of whose evidence counts and how expertise is defined. The framework is tested with specific reference to three themes that were prominent during the SDG negotiation process: public health (articulated in SDG 3), urban sustainability (articulated in SDG 11), and data and information systems (a cross-cutting theme of the dialogue). Within each, scientific communities had specific demands, and the translation of these scientific ideas into policy priorities is uncovered through an exploration of key literature, including evidence inputs and UN documentation, as well as through key informant interviews…(More)”.

Operationalizing Digital Self Determination


Paper by Stefaan G. Verhulst: “We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, in both a practical and a psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods to limit asymmetries (e.g., consent), as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems), fall short of adequately addressing the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required.
DSD is based on existing concepts of self-determination, as articulated in sources as varied as Kantian philosophy and the 1966 International Covenant on Economic, Social and Cultural Rights. Updated for the digital age, DSD has several key characteristics: it has both an individual and a collective dimension; it is designed especially to benefit vulnerable and marginalized groups; and it is context-specific (yet also enforceable). Operationalizing DSD in this (and other) contexts so as to maximize the potential of data while limiting its harms requires a number of steps. In particular, a responsible operationalization of DSD would consider four key prongs or categories of action: processes, people and organizations, policies, and products and technologies…(More)”.

The Socio-Legal Lab: An Experiential Approach to Research on Law in Action


Guide by Siddharth Peter de Souza and Lisa Hahn: “…interactive workbook for socio-legal research projects. It employs the idea of a “lab” as a space for interactive and experiential learning. As an introductory book, it addresses researchers of all levels who are beginning to explore interdisciplinary research on law and are looking for guidance on how to do so. Likewise, the book can be used by teachers and peer groups to experiment with teaching and thinking about law in action through lab-based learning…

The book covers themes and questions that may arise during a socio-legal research project. This starts with examining what research and interdisciplinarity mean and in which forms they can be practiced. After an overview of the research process, we will discuss how research in action is often unpredictable and messy. The practical and ethical challenges of doing research will thus be discussed alongside processes of knowledge production and the assumptions we hold as researchers.

Conducting a socio-legal research project further requires an overview of the theoretical landscape. We will introduce general debates about the nature, functions, and effects of law in society. In addition, common dichotomies in socio-legal research, such as “law” and “the social” or “qualitative” and “quantitative” research, will be explored, along with suggested ways to bridge them.

Turning to the application side of socio-legal research, the book delves deeper into questions of data on law and society, where to collect it and how to deal with it in a reflexive manner. It discusses different methods of qualitative socio-legal research and offers ways in which they can be experienced through exercises and simulations. In the research process, generating research results is followed by publishing and communicating them. We will explore different ways to ensure the outreach and impact of one’s research by communicating results through journals, blogs or social media. Finally, the book also discusses academia as a social space and the value of creating and using networks and peer groups for mutual support.

Overall, the workbook is designed to accompany and inspire researchers on their way through a socio-legal research project and to empower readers to think more creatively about their methods, while at the same time demystifying them…(More)”.

Data Analysis for Social Science: A Friendly and Practical Introduction


Book by Elena Llaudet and Kosuke Imai: “…provides a friendly introduction to the statistical concepts and programming skills needed to conduct and evaluate social scientific studies. Using plain language and assuming no prior knowledge of statistics and coding, the book provides a step-by-step guide to analyzing real-world data with the statistical program R for the purpose of answering a wide range of substantive social science questions. It teaches not only how to perform the analyses but also how to interpret results and identify strengths and limitations. This one-of-a-kind textbook includes supplemental materials to accommodate students with minimal knowledge of math and clearly identifies sections with more advanced material so that readers can skip them if they so choose…(More)”.

We could run out of data to train AI language programs 


Article by Tammy Xu: “Large language models are one of the hottest areas of AI research right now, with companies racing to release programs like GPT-3 that can write impressively coherent articles and even computer code. But there’s a problem looming on the horizon, according to a team of AI forecasters: we might run out of data to train them on.

Language models are trained using texts from sources like Wikipedia, news articles, scientific papers, and books. In recent years, the trend has been to train these models on more and more data in the hope that it’ll make them more accurate and versatile.

The trouble is, the types of data typically used for training language models may be used up in the near future—as early as 2026, according to a not-yet-peer-reviewed paper by researchers from Epoch, an AI research and forecasting organization. The issue stems from the fact that, as researchers build more powerful models with greater capabilities, they have to find ever more texts to train them on. Large language model researchers are increasingly concerned that they are going to run out of this sort of data, says Teven Le Scao, a researcher at AI company Hugging Face, who was not involved in Epoch’s work.

The issue stems partly from the fact that language AI researchers filter the data they use to train models into two categories: high quality and low quality. The line between the two categories can be fuzzy, says Pablo Villalobos, a staff researcher at Epoch and the lead author of the paper, but text from the former is viewed as better-written and is often produced by professional writers…(More)”.

How many yottabytes in a quettabyte? Extreme numbers get new names


Article by Elizabeth Gibney: “By the 2030s, the world will generate around a yottabyte of data per year — that’s 10²⁴ bytes, or the amount that would fit on DVDs stacked all the way to Mars. Now, the booming growth of the data sphere has prompted the governors of the metric system to agree on new prefixes beyond that magnitude, to describe the outrageously big and small.

Representatives from governments worldwide, meeting at the General Conference on Weights and Measures (CGPM) outside Paris on 18 November, voted to introduce four new prefixes to the International System of Units (SI) with immediate effect. The prefixes ronna and quetta represent 10²⁷ and 10³⁰, and ronto and quecto signify 10⁻²⁷ and 10⁻³⁰. Earth weighs around one ronnagram, and an electron’s mass is about one quectogram.

This is the first update to the prefix system since 1991, when the organization added zetta (10²¹), zepto (10⁻²¹), yotta (10²⁴) and yocto (10⁻²⁴). In that case, metrologists were adapting to fit the needs of chemists, who wanted a way to express SI units on the scale of Avogadro’s number — the 6 × 10²³ units in a mole, a measure of the quantity of substances. The more familiar prefixes peta and exa were added in 1975 (see ‘Extreme figures’).

Extreme figures

Advances in scientific fields have led to increasing need for prefixes to describe very large and very small numbers.

Factor    Name     Symbol   Adopted
10³⁰      quetta   Q        2022
10²⁷      ronna    R        2022
10²⁴      yotta    Y        1991
10²¹      zetta    Z        1991
10¹⁸      exa      E        1975
10¹⁵      peta     P        1975
10⁻¹⁵     femto    f        1964
10⁻¹⁸     atto     a        1964
10⁻²¹     zepto    z        1991
10⁻²⁴     yocto    y        1991
10⁻²⁷     ronto    r        2022
10⁻³⁰     quecto   q        2022

Prefixes are agreed at the General Conference on Weights and Measures.

Today, the driver is data science, says Richard Brown, a metrologist at the UK National Physical Laboratory in Teddington. He has been working on plans to introduce the latest prefixes for five years, and presented the proposal to the CGPM on 17 November. With the annual volume of data generated globally having already hit zettabytes, informal suggestions for 10²⁷ — including ‘hella’ and ‘bronto’ — were starting to take hold, he says. Google’s unit converter, for example, already tells users that 1,000 yottabytes is 1 hellabyte, and at least one UK government website quotes brontobyte as the correct term….(More)”
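
To make these magnitudes concrete, here is a minimal Python sketch (not from the article) that treats the decimal SI prefixes in the table above as powers of ten and uses them to sanity-check the yottabyte figure; the DVD capacity and thickness are illustrative assumptions, not values given in the piece.

```python
# Minimal sketch: decimal SI prefixes as powers of ten, used to check the
# magnitudes quoted above. Values for a DVD are assumptions for illustration.

SI_PREFIXES = {
    "quetta": 30, "ronna": 27, "yotta": 24, "zetta": 21, "exa": 18, "peta": 15,
    "femto": -15, "atto": -18, "zepto": -21, "yocto": -24, "ronto": -27, "quecto": -30,
}

def to_base_units(value: float, prefix: str) -> float:
    """Convert a value expressed with an SI prefix into base units."""
    return value * 10 ** SI_PREFIXES[prefix]

# One yottabyte expressed in bytes:
yottabyte_in_bytes = to_base_units(1, "yotta")   # 1e24 bytes

# Rough check of the "DVDs stacked to Mars" image, assuming a single-layer DVD
# holds about 4.7 GB and is about 1.2 mm thick (assumed figures):
dvd_capacity_bytes = 4.7e9
dvd_thickness_m = 1.2e-3
stack_height_m = (yottabyte_in_bytes / dvd_capacity_bytes) * dvd_thickness_m
print(f"Stack height: {stack_height_m:.2e} m")   # ~2.6e11 m, roughly the scale of the Earth-Mars distance
```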