A New National Purpose: Harnessing Data for Health


Report by the Tony Blair Institute: “We are at a pivotal moment where the convergence of large health and biomedical data sets, artificial intelligence and advances in biotechnology is set to revolutionise health care, drive economic growth and improve the lives of citizens. And the UK has strengths in all three areas. The immense potential of the UK’s health-data assets, from the NHS to biobanks and genomics initiatives, can unlock new diagnostics and treatments, deliver better and more personalised care, prevent disease and ultimately help people live longer, healthier lives.

However, realising this potential is not without its challenges. The complex and fragmented nature of the current health-data landscape, coupled with legitimate concerns around privacy and public trust, has made for slow progress. The UK has had a tendency to provide short-term funding across multiple initiatives, which has led to an array of individual projects – many of which have struggled to achieve long-term sustainability and deliver tangible benefits to patients.

To overcome these challenges, it will be necessary to be bold and imaginative. We must look for ways to leverage the unique strengths of the NHS, such as its nationwide reach and cradle-to-grave data coverage, to create a health-data ecosystem that is much more than the sum of its many parts. This will require us to think differently about how we collect, manage and utilise health data, and to create new partnerships and models of collaboration that break down traditional silos and barriers. It will mean treating data as a key health resource and managing it accordingly.

One model to do this is the proposed sovereign National Data Trust (NDT) – an endeavour to streamline access to and curation of the UK’s valuable health-data assets…(More)”.

Artificial intelligence, the common good, and the democratic deficit in AI governance


Paper by Mark Coeckelbergh: “There is a broad consensus that artificial intelligence should contribute to the common good, but it is not clear what is meant by that. This paper discusses this issue and uses it as a lens for analysing what it calls the “democracy deficit” in current AI governance, which includes a tendency to deny the inherently political character of the issue and to take a technocratic shortcut. It indicates what we may agree on and what is and should be up to (further) deliberation when it comes to AI ethics and AI governance. Inspired by the republican tradition in political theory, it also argues for a more active role of citizens and (end-)users: not only as participants in deliberation but also in ensuring, creatively and communicatively, that AI contributes to the common good…(More)”.

Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science


Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China. 

While the comparison to CERN is flawed in some respects–see below–the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only help address the immediate challenges, but more generally advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.

Access to Public Information, Open Data, and Personal Data Protection: How do they dialogue with each other?


Report by Open Data Charter and Civic Compass: “In this study, we aim to examine data protection policies in the European Union and Latin America juxtaposed with initiatives concerning open government data and access to public information. To do so, we analyse the regulatory landscape, international rankings and commitments relating to each right in four countries from each region. Additionally, we explore how these institutions interact with one another, considering their respective stances while delving into existing tensions and exploring possibilities for achieving a balanced approach…(More)”.

Middle Tech: Software Work and the Culture of Good Enough


Book by Paula Bialski: “Contrary to much of the popular discourse, not all technology is seamless and awesome; some of it is simply “good enough.” In Middle Tech, Paula Bialski offers an ethnographic study of software developers at a non-flashy, non-start-up corporate tech company. Their stories reveal why software isn’t perfect and how developers communicate, care, and compromise to make software work—or at least work until the next update. Exploring the culture of good enoughness at a technology firm she calls “MiddleTech,” Bialski shows how doing good-enough work is a collectively negotiated resistance to the organizational ideology found in corporate software settings.

The truth, Bialski reminds us, is that technology breaks due to human-related issues: staff cutbacks cause media platforms to crash, in-car GPS systems cause catastrophic incidents, and chatbots can be weird. Developers must often labor to patch and repair legacy systems rather than dream up killer apps. Bialski presents a less sensationalist, more empirical portrait of technology work than the frequently told Silicon Valley narratives of disruption and innovation. She finds that software engineers at MiddleTech regard technology as an ephemeral object that only needs to be good enough to function until its next iteration. As a result, they don’t feel much pressure to make it perfect. Through the deeply personal stories of people and their practices at MiddleTech, Bialski traces the ways that workers create and sustain a complex culture of good enoughness…(More)”

How the war on drunk driving was won


Blog by Nick Cowen: “…Viewed from the 1960s it might have seemed like ending drunk driving would be impossible. Even in the 1980s, the movement seemed unlikely to succeed and many researchers questioned whether it constituted a social problem at all.

Yet things did change: in 1980, 1,450 fatalities were attributed to drunk driving accidents in the UK. In 2020, there were 220. Road deaths in general declined much more slowly, from around 6,000 in 1980 to 1,500 in 2020. Drunk driving fatalities dropped overall and as a percentage of all road deaths.

The same thing happened in the United States, though not to quite the same extent. In 1980, there were around 28,000 drunk driving deaths there, while in 2020, there were 11,654. Despite this progress, drunk driving remains a substantial public threat, comparable in scale to homicide (of which in 2020 there were 594 in Britain and 21,570 in America).

Of course, many things have happened in the last 40 years that contributed to this reduction. Vehicles are better designed to prioritize life preservation in the event of a collision. Emergency hospital care has improved so that people are more likely to survive serious injuries from car accidents. But, above all, driving while drunk has become stigmatized.

This stigma didn’t come from nowhere. Governments across the Western world, along with many civil society organizations, engaged in hard-hitting education campaigns about the risks of drunk driving. And they didn’t just talk. Tens of thousands of people faced criminal sanctions, and many were even put in jail.

Two underappreciated ideas stick out from this experience. First, deterrence works: incentives matter to offenders much more than many scholars found initially plausible. Second, the long-run impact that successful criminal justice interventions have is not primarily in rehabilitation, incapacitation, or even deterrence, but in altering the social norms around acceptable behavior…(More)”.

AI-enabled Peacekeeping Tech for the Digital Age


Springwise: “There are countless organisations and government agencies working to resolve conflicts around the globe, but they often lack the tools to know if they are making the right decisions. Project Didi is developing those technological tools – helping peacemakers plan appropriately and understand the impact of their actions in real time.

Project Didi Co-founder and CCO Gabe Freund explained to Springwise that the project uses machine learning, big data, and AI to analyse conflicts and “establish a new standard for best practice when it comes to decision-making in the world of peacebuilding.”

In essence, the company is attempting to analyse the many factors that are involved in conflict in order to identify a ‘ripe moment’ when both parties will be willing to negotiate for peace. The tools can track the impact and effect of all actors across a conflict. This allows them to identify and create connections between organisations and people who are doing similar work, amplifying their effects…(More)” See also: Project Didi (Kluz Prize)

Sorting the Self


Article by Christopher Yates: “We are unknown to ourselves, we knowers…and there is good reason for this. We have never looked for ourselves—so how are we ever supposed to find ourselves?” Much has changed since the late nineteenth century, when Nietzsche wrote those words. We now look obsessively for ourselves, and we find ourselves in myriad ways. Then we find more ways of finding ourselves. One involves a tool, around which grew a science, from which bloomed a faith, and from which fell the fruits of dogma. That tool is the questionnaire. The science is psychometrics. And the faith is a devotion to self-codification, of which the revelation of personality is the fruit.

Perhaps, whether on account of psychological evaluation and therapy, compulsory corporate assessments, spiritual direction endeavors, or just a sporting interest, you have had some experience of this phenomenon. Perhaps it has served you well. Or maybe you have puzzled over the strange avidity with which we enable standardized tests and the technicians or portals that administer them to gauge the meaning of our very being. Maybe you have been relieved to discover that, according to the 16 Personality Types assessments, you are an ISFP; or, according to the Enneagram, you are a 3 with a 2 or 4 wing. Or maybe you have been somewhat troubled by how this peculiar term personality, derived as it is from the Latin persona (meaning the masks once worn by players on stage), has become a repository of so many adjectives—one that violates Aristotle’s cardinal metaphysical rule against reducing a substance to its properties.

Either way, the self has never been more securely an object of classification than it is today, thanks to the century-long ascendance of behavioral analysis and scientific psychology, sociometry, taxonomic personology, and personality theory. Add to these the assorted psychodiagnostic instruments drawing on refinements of multiple regression analysis, and multivariate and circumplex modeling, trait determination and battery-based assessments, and the ebbs and flows of psychoanalytic theory. Not to be overlooked, of course, is the popularizing power of evidence-based objective and predictive personality profiling inside and outside the laboratory and therapy chambers since Katharine Briggs began envisioning what would become the fabled person-sorting Myers-Briggs Type Indicator (MBTI) in 1919. A handful of phone calls, psychological referrals, job applications, and free or modestly priced hyperlinked platforms will place before you (and the eighty million or more other Americans who take these tests annually) more than two thousand personality assessments promising to crack your code. Their efficacy has become an object of our collective speculation. And by many accounts, their revelations make us not only known but also more empowered to live healthy and fulfilling lives. Nietzsche had many things, but he did not have PersonalityMax.com or PersonalityAssessor.com…(More)”.

When Online Content Disappears


Pew Research: “The internet is an unimaginably vast repository of modern life, with hundreds of billions of indexed webpages. But even as users across the world rely on the web to access books, images, news articles and other resources, this content sometimes disappears from view…

  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:

  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. High-traffic and lower-traffic news sites are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists...(More)”.
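The kind of analysis Pew describes rests on a simple primitive: testing whether a URL still resolves. A minimal sketch of such a broken-link check is below; the HEAD-request approach, status-code threshold, and helper names are illustrative assumptions, not Pew's actual methodology (which involved crawling archived link collections at scale).

```python
import urllib.request
import urllib.error

def status_is_dead(status: int) -> bool:
    """Treat 4xx/5xx responses as a dead link, 2xx/3xx as live."""
    return status >= 400

def is_accessible(url: str, timeout: float = 10.0) -> bool:
    """True if the URL currently resolves without an error status.

    DNS failures, timeouts, and malformed URLs all count as
    inaccessible -- a rough proxy for 'digital decay'.
    """
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return not status_is_dead(resp.status)
    except (urllib.error.URLError, TimeoutError, ValueError):
        return False

def decay_rate(urls: list[str]) -> float:
    """Share of a link collection that no longer resolves."""
    if not urls:
        return 0.0
    return sum(not is_accessible(u) for u in urls) / len(urls)
```

Running `decay_rate` over the outbound links of a page (say, a Wikipedia "References" section) yields the sort of per-page broken-link share reported above; a production crawler would also need redirect handling, retries, and rate limiting.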

Defining AI incidents and related terms


OECD Report: “As AI use grows, so do its benefits and risks. These risks can lead to actual harms (“AI incidents”) or potential dangers (“AI hazards”). Clear definitions are essential for managing and preventing these risks. This report proposes definitions for AI incidents and related terms. These definitions aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address…(More)”.