Digitalization in Practice


Book edited by Jessamy Perriam and Katrine Meldgaard Kjær: “…shows that as welfare is increasingly digitalized, an investigation of the social implications of this digitalization becomes increasingly pertinent. The book offers chapters on how the state operates, from the day-to-day practices of governance to keeping registers of businesses, from overarching and sometimes contradictory policies to considering how to best include citizens in digitalized processes. Moreover, the book takes a citizen perspective on key issues of access, identification and social harm to consider the social implications of digitalization in the everyday. The diversity of topics in Digitalization in Practice reflects how digitalization as an ongoing process and practice fundamentally impacts and often reshapes the relationship between states and citizens.

  • Provides much needed critical perspectives on digital states in practice.
  • Opens up provocative questions for further studies and research topics in digital states.
  • Showcases empirical studies of situations where digital states are enacted…(More)”.

More Questions Than Flags: Reality Check on DSA’s Trusted Flaggers


Article by Ramsha Jahangir, Elodie Vialle and Dylan Moses: “It’s been 100 days since the Digital Services Act (DSA) came into effect, and many of us are still wondering how the Trusted Flagger mechanism is taking shape, particularly for civil society organizations (CSOs) that could be potential applicants.

With an emphasis on accountability and transparency, the DSA requires national coordinators to appoint Trusted Flaggers, who are designated entities whose requests to flag illegal content must be prioritized. “Notices submitted by Trusted Flaggers acting within their designated area of expertise . . . are given priority and are processed and decided upon without undue delay,” according to the DSA. Trusted Flaggers can include non-governmental organizations, industry associations, private or semi-public bodies, and law enforcement agencies. For instance, a private company that focuses on finding CSAM or terrorist-type content, or tracking groups that traffic in that content, could be eligible for Trusted Flagger status under the DSA. To be appointed, entities need to meet certain criteria, including being independent, accurate, and objective.

Trusted escalation channels are a key mechanism for CSOs supporting vulnerable users, such as human rights defenders and journalists targeted by online attacks on social media, particularly in electoral contexts. However, existing channels could be much more efficient. The DSA is a unique opportunity to redesign these mechanisms for reporting illegal or harmful content at scale. They need to be rethought for CSOs that hope to become Trusted Flaggers. Platforms often require, for instance, content to be translated into English and context to be understood by English-speaking audiences (mainly because the key decision-makers are based in the US), which creates an added burden for resource-strapped CSOs. The lack of transparency in the reporting process can be distressing for the victims for whom those CSOs advocate. The lack of timely response can lead to dramatic consequences for human rights defenders and information integrity. Several CSOs we spoke with were not even aware of these escalation channels – and platforms are not incentivized to promote these mechanisms, given their inability to vet, prioritize, and resolve all potential issues sent to them….(More)”.

The Simple Macroeconomics of AI


Paper by Daron Acemoglu: “This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of new tasks that may have negative social value…(More)”.
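The arithmetic behind these estimates is simple enough to sketch. The snippet below is a minimal illustration of the Hulten-style logic described in the abstract, using purely hypothetical input values rather than figures from the paper: the aggregate TFP gain is approximated by the share of tasks affected by AI multiplied by the average task-level cost savings.

```python
# Minimal sketch of the Hulten-style estimate described above.
# The input values below are hypothetical placeholders, not the paper's figures.

def tfp_gain(task_share_affected: float, avg_cost_savings: float) -> float:
    """Aggregate TFP gain ~= (fraction of tasks impacted) * (average task-level cost savings)."""
    return task_share_affected * avg_cost_savings

# Hypothetical inputs: 5% of economy-wide tasks are profitably automated,
# with an average cost saving of 14% on those tasks.
print(f"Estimated TFP gain: {tfp_gain(0.05, 0.14):.2%}")  # ~0.70%, i.e. modest in aggregate
```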

Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science


Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China. 

While the comparison to CERN is flawed in some respects–see below–the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only help address the immediate challenges, but more generally advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.

Sorting the Self


Article by Christopher Yates: “We are unknown to ourselves, we knowers…and there is good reason for this. We have never looked for ourselves—so how are we ever supposed to find ourselves?” Much has changed since the late nineteenth century, when Nietzsche wrote those words. We now look obsessively for ourselves, and we find ourselves in myriad ways. Then we find more ways of finding ourselves. One involves a tool, around which grew a science, from which bloomed a faith, and from which fell the fruits of dogma. That tool is the questionnaire. The science is psychometrics. And the faith is a devotion to self-codification, of which the revelation of personality is the fruit.

Perhaps, whether on account of psychological evaluation and therapy, compulsory corporate assessments, spiritual direction endeavors, or just a sporting interest, you have had some experience of this phenomenon. Perhaps it has served you well. Or maybe you have puzzled over the strange avidity with which we enable standardized tests and the technicians or portals that administer them to gauge the meaning of our very being. Maybe you have been relieved to discover that, according to the 16 Personality Types assessments, you are an ISFP; or, according to the Enneagram, you are a 3 with a 2 or 4 wing. Or maybe you have been somewhat troubled by how this peculiar term personality, derived as it is from the Latin persona (meaning the masks once worn by players on stage), has become a repository of so many adjectives—one that violates Aristotle’s cardinal metaphysical rule against reducing a substance to its properties.

Either way, the self has never been more securely an object of classification than it is today, thanks to the century-long ascendance of behavioral analysis and scientific psychology, sociometry, taxonomic personology, and personality theory. Add to these the assorted psychodiagnostic instruments drawing on refinements of multiple regression analysis, multivariate and circumplex modeling, trait determination, and battery-based assessments, as well as the ebbs and flows of psychoanalytic theory. Not to be overlooked, of course, is the popularizing power of evidence-based objective and predictive personality profiling inside and outside the laboratory and therapy chambers since Katharine Briggs began envisioning what would become the fabled person-sorting Myers-Briggs Type Indicator (MBTI) in 1919. A handful of phone calls, psychological referrals, job applications, and free or modestly priced hyperlinked platforms will place before you (and the eighty million or more other Americans who take these tests annually) more than two thousand personality assessments promising to crack your code. Their efficacy has become an object of our collective speculation. And by many accounts, their revelations make us not only known but also more empowered to live healthy and fulfilling lives. Nietzsche had many things, but he did not have PersonalityMax.com or PersonalityAssessor.com…(More)”.

When Online Content Disappears


Pew Research: “The internet is an unimaginably vast repository of modern life, with hundreds of billions of indexed webpages. But even as users across the world rely on the web to access books, images, news articles and other resources, this content sometimes disappears from view…

  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:

  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with a high level of site traffic and those with less are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists...(More)”.
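The "digital decay" described above can be approximated with a straightforward link check. The sketch below is a hypothetical illustration, not Pew's methodology or code: it takes a list of URLs and counts how many no longer return a successful response.

```python
# Hypothetical sketch of a broken-link check; not the methodology used in the Pew study.
import requests

def count_inaccessible(urls: list[str], timeout: float = 10.0) -> int:
    """Count URLs that no longer return a successful response."""
    broken = 0
    for url in urls:
        try:
            # Some servers reject HEAD requests; a GET fallback could be added.
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                broken += 1
        except requests.RequestException:
            broken += 1  # unreachable hosts count as inaccessible
    return broken

sample = ["https://example.com/", "https://example.com/a-page-that-may-be-gone"]
print(f"{count_inaccessible(sample)} of {len(sample)} links appear inaccessible")
```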

On the Meaning of Community Consent in a Biorepository Context


Article by Astha Kapoor, Samuel Moore, and Megan Doerr: “Biorepositories, vital for medical research, collect and store human biological samples and associated data for future use. However, our reliance solely on the individual consent of data contributors for biorepository data governance is becoming inadequate. Big data analysis focuses on large-scale behaviors and patterns, shifting focus from singular data points to identifying data “journeys” relevant to a collective. The individual becomes a small part of the analysis, with the harms and benefits emanating from the data occurring at an aggregated level.

Community refers to a particular qualitative aspect of a group of people that is not well captured by quantitative measures in biorepositories. This is not an excuse to dodge the question of how to account for communities in a biorepository context; rather, it shows that a framework is needed for defining different types of community that may be approached from a biorepository perspective. 

Engaging with communities in biorepository governance presents several challenges. Moving away from a purely individualized understanding of governance towards a more collectivizing approach necessitates an appreciation of the messiness of group identity, its ephemerality, and the conflicts entailed therein. So while community implies a certain degree of homogeneity (i.e., that all members of a community share something in common), it is important to understand that people can simultaneously consider themselves a member of a community while disagreeing with many of its members, the values the community holds, or the positions for which it advocates. The complex nature of community participation therefore requires proper treatment for it to be useful in a biorepository governance context…(More)”.

Multiple Streams and Policy Ambiguity


Book by Rob A. DeLeo, Reimut Zohlnhöfer and Nikolaos Zahariadis: “The last decade has seen a proliferation of research bolstering the theoretical and methodological rigor of the Multiple Streams Framework (MSF), one of the most prolific theories of agenda-setting and policy change. This Element sets out to address some of the most prominent criticisms of the theory, including the lack of empirical research and the inconsistent operationalization of key concepts, by developing the first comprehensive guide for conducting MSF research. It begins by introducing the MSF, including key theoretical constructs and hypotheses. It then presents the most important theoretical extensions of the framework and articulates a series of best practices for operationalizing, measuring, and analyzing MSF concepts. It closes by exploring existing gaps in MSF research and articulating fruitful areas of future research…(More)”.

How Open-Source Software Empowers Nonprofits And The Global Communities They Serve


Article by Steve Francis: “One particular area where this challenge is evident is climate. Thousands of nonprofits strive to address the effects of a changing climate and its impact on communities worldwide. Headlines often go to big organizations doing high-profile work (planting trees, for instance) in well-known places. Money goes to large-scale commercial agriculture or new technologies — because that’s where profits are most easily made. But thousands of other communities of small farmers that aren’t as visible or profitable need help too. These communities come together to tackle a number of interrelated problems: climate, soil health and productivity, biodiversity and human health and welfare. They envision a more sustainable future.

The reality is that software is crafted to meet market needs, but these communities don’t represent a profitable market. Every major industry has its own software applications and a network of consultants to tune that software for optimal performance. A farm cooperative in less developed parts of the world seeking to maximize value for sustainably harvested produce faces very different challenges than do any of these business users. Often they need to collect and manipulate data in the field, on whatever mobile device they have, with little or no connectivity. Modern software systems are rarely designed to operate in such an environment; they assume the latest devices and continuous connectivity…(More)”.
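A common pattern for the low-connectivity settings described above is store-and-forward: data is captured locally on whatever device is at hand and synced whenever a connection becomes available. The sketch below is a hypothetical illustration of that idea (the record fields and upload function are made up); a real field-data tool would add a proper local store, retries, and conflict handling.

```python
# Hypothetical store-and-forward sketch for low-connectivity field data collection.
# Records are appended to a local file and uploaded later, when a connection exists.
import json
from pathlib import Path

QUEUE_FILE = Path("pending_records.jsonl")  # local, offline-safe queue

def record_locally(record: dict) -> None:
    """Append a record to the on-device queue; works with no connectivity at all."""
    with QUEUE_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def sync_when_online(upload) -> int:
    """Try to upload queued records; keep anything that fails for the next attempt."""
    if not QUEUE_FILE.exists():
        return 0
    remaining, sent = [], 0
    for line in QUEUE_FILE.read_text(encoding="utf-8").splitlines():
        try:
            upload(json.loads(line))  # e.g., an HTTP POST once connectivity returns
            sent += 1
        except Exception:
            remaining.append(line)  # still offline or server unreachable; keep for later
    QUEUE_FILE.write_text("".join(l + "\n" for l in remaining), encoding="utf-8")
    return sent

# Usage (hypothetical record): record_locally({"plot": "A3", "crop": "sorghum", "soil_ph": 6.4})
```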

May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases


Book by Alex Edmans: “Our lives are minefields of misinformation. It ripples through our social media feeds, our daily headlines, and the pronouncements of politicians, executives, and authors. Stories, statistics, and studies are everywhere, allowing people to find evidence to support whatever position they want. Many of these sources are flawed, yet by playing on our emotions and preying on our biases, they can gain widespread acceptance, warp our views, and distort our decisions.

In this eye-opening book, renowned economist Alex Edmans teaches us how to separate fact from fiction. Using colorful examples—from a wellness guru’s tragic but fabricated backstory to the blunders that led to the Deepwater Horizon disaster to the diet that ensnared millions yet hastened its founder’s death—Edmans highlights the biases that cause us to mistake statements for facts, facts for data, data for evidence, and evidence for proof.

Armed with the knowledge of what to guard against, he then provides a practical guide to combat this tide of misinformation. Going beyond simply checking the facts and explaining individual statistics, Edmans explores the relationships between statistics—the science of cause and effect—ultimately training us to think smarter, sharper, and more critically. May Contain Lies is an essential read for anyone who wants to make better sense of the world and better decisions…(More)”.