The Myth of Objective Data


Article by Melanie Feinberg: “The notion that human judgment pollutes scientific attempts to understand natural phenomena as they really are may seem like a stable and uncontroversial value. However, as Lorraine Daston and Peter Galison have established, objectivity is a fairly recent historical development.

In Daston and Galison’s account, which focuses on scientific visualization, objectivity arose in the 19th century, congruent with the development of photography. Before photography, scientific illustration attempted to portray an ideal exemplar rather than an actually existing specimen. In other words, instead of drawing a realistic portrait of an individual fruit fly — which has unique, idiosyncratic characteristics — an 18th-century scientific illustrator drew an ideal fruit fly. This ideal representation would better portray average fruit fly characteristics, even as no actual fruit fly is ever perfectly average.

With the advent of photography, drawings of ideal types began to lose favor. The machinic eye of the lens was seen as enabling nature to speak for itself, providing access to a truer, more objective reality than the human eye of the illustrator. Daston and Galison emphasize, however, that this initial confidence in the pure eye of the machine was swiftly undermined. Scientists soon realized that photographic devices introduce their own distortions into the images that they produce, and that no eye provides an unmediated view onto nature. From the perspective of scientific visualization, the idea that machines allow us to see true has long been outmoded. In everyday discourse, however, there is a continuing tendency to characterize the objective as that which speaks for itself without the interference of human perception, interpretation, judgment, and so on.

This everyday definition of objectivity particularly affects our understanding of data collection. If in our daily lives we tend to overlook the diverse, situationally textured sense-making actions that information seekers, conversation listeners, and other recipients of communicative acts perform to make automated information systems function, we are even less likely to acknowledge and value the interpretive work of data collectors, even as these actions create the conditions of possibility upon which data analysis can operate…(More)”.

What AI Means For Animals


Article by Peter Singer and Tse Yip Fai: “The ethics of artificial intelligence has attracted considerable attention, and for good reason. But the ethical implications of AI for billions of nonhuman animals are not often discussed. Given the severe impacts some AI systems have on huge numbers of animals, this lack of attention is deeply troubling.

As more and more AI systems are deployed, they are beginning to directly impact animals in factory farms, zoos, pet care and through drones that target animals. AI also has indirect impacts on animals, both good and bad — it can be used to replace some animal experiments, for example, or to decode animal “languages.” AI can also propagate speciesist biases — try searching “chicken” on any search engine and see if you get more pictures of living chickens or dead ones. While all of these impacts need ethical assessment, the area in which AI has by far the most significant impact on animals is factory farming. The use of AI in factory farms will, in the long run, increase the already huge number of animals who suffer in terrible conditions.

AI systems in factory farms can monitor animals’ body temperature, weight and growth rates and detect parasites, ulcers and injuries. Machine learning models can be created to see how physical parameters relate to rates of growth, disease, mortality and — the ultimate criterion — profitability. The systems can then prescribe treatments for diseases or vary the quantity of food provided. In some cases, they can use their connected physical components to act directly on the animals, emitting sounds to interact with them, giving them electric shocks (when the grazing animal reaches the boundary of the desired area, for example), marking and tagging their bodies or catching and separating them.

You might be thinking that this would benefit the animals — that it means they will get sick less often, and when they do get sick, the problems will be quickly identified and cured, with less room for human error. But the short-term animal welfare benefits brought about by AI are, in our view, clearly outweighed by other consequences…(More)” See also: AI Ethics: The Case for Including Animals.

The Surveillance Ad Model Is Toxic — Let’s Not Install Something Worse


Article by Elizabeth M. Renieris: “At this stage, law and policy makers, civil society and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as the application of existing laws to challenge ad-targeting practices.

In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with the introduction of its AppTrackingTransparency (ATT) tool, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. The ATT effectively prevents apps from collecting a user’s Identifier for Advertisers, or IDFA, which is a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.
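For readers curious about what that consent gate looks like in practice, here is a minimal sketch of how an iOS app requests ATT authorization before reading the IDFA. This is an illustration rather than Apple’s documentation; the function name and printed messages are placeholders.

```swift
import AppTrackingTransparency
import AdSupport

// Minimal sketch: gate IDFA access behind the ATT prompt.
// Assumes the app's Info.plist includes an NSUserTrackingUsageDescription string.
func requestTrackingConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only after explicit, opt-in consent is the real advertising identifier readable.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorized; IDFA: \(idfa.uuidString)")
        default:
            // Denied, restricted, or not yet determined: the IDFA is returned as all zeros.
            print("Tracking not permitted; IDFA is zeroed out.")
        }
    }
}
```

Until the user explicitly allows tracking in the resulting prompt, the identifier comes back zeroed out, which is what makes ATT an opt-in rather than opt-out mechanism.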

But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal…(More)”.

You Can’t Regulate What You Don’t Understand


Article by Tim O’Reilly: “The world changed on November 30, 2022 as surely as it did on August 12, 1908 when the first Model T left the Ford assembly line. That was the date when OpenAI released ChatGPT, the day that AI emerged from research labs into an unsuspecting world. Within two months, ChatGPT had over a hundred million users—faster adoption than any technology in history.

The hand wringing soon began…

All of these efforts reflect the general consensus that regulations should address issues like data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI’s own AI safety and responsibility guidelines cite those same goals, but in addition call out what many people consider the central, most general question: how do we align AI-based decisions with human values? They write:

“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”

But whose human values? Those of the benevolent idealists that most AI critics aspire to be? Those of a public company bound to put shareholder value ahead of customers, suppliers, and society as a whole? Those of criminals or rogue states bent on causing harm to others? Those of someone well meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on the metrics that explicitly aim for those outcomes and measure the extent to which they have been achieved. That is a crucial first step, and we should take it immediately. These systems are still very much under human control. For now, at least, they do what they are told, and when the results don’t match expectations, their training is quickly improved. What we need to know is what they are being told.

What should be disclosed? There is an important lesson for both companies and regulators in the rules by which corporations—which science-fiction writer Charlie Stross has memorably called “slow AIs”—are regulated. One way we hold companies accountable is by requiring them to share their financial results compliant with Generally Accepted Accounting Principles or the International Financial Reporting Standards. If every company had a different way of reporting its finances, it would be impossible to regulate them…(More)”

Modernizing philanthropy for the 21st century


Essay by Stefaan G. Verhulst, Lisa T. Moretti, Hannah Chafetz and Alex Fischer: “…How can philanthropies move in a more deliberate yet responsible manner toward using data to advance their goals? The purpose of this article is to propose an overview of existing and potential qualitative and quantitative data innovations within the philanthropic sector. In what follows, we examine four areas where there is a need for innovation in how philanthropy works, and eight pathways for the responsible use of data innovations to address existing shortcomings.

Four areas for innovation

In order to identify potential data-led solutions, we need to begin by understanding current shortcomings. Through our research, we identified four areas within philanthropy that are ripe for data-led innovation:

  • First, there is a need for innovation in the identification of shared questions and overlapping priorities among communities, public service, and philanthropy. The philanthropic sector is well placed to enable a new combination of approaches, products, and processes while still enabling communities to prioritize the issues that matter most.
  • Second, there is a need to improve coordination and transparency across the sector. Even when shared priorities are identified, there often remains a large gap between the imperatives of building common agendas and the ability to act on those agendas in a coordinated and strategic way. New ways to collect and generate cross-sector shared intelligence are needed to better design funding strategies and make difficult trade-off choices.
  • Third, reliance on fixed-project-based funding often means that philanthropists must wait for impact reports to assess results. There is a need to enable iteration and adaptive experimentation to help foster a culture of greater flexibility, agility, learning, and continuous improvement.
  • Lastly, innovations for impact assessments and accountability could help philanthropies better understand how their funding and support have impacted the populations they intend to serve.

Needless to say, data alone cannot address all of these shortcomings. For true innovation, qualitative and quantitative data must be combined with a much wider range of human, institutional, and cultural change. Nonetheless, our research indicates that when used responsibly, data-driven methods and tools do offer pathways for success. We examine some of those pathways in the next section.

Eight pathways for data-driven innovations in philanthropy

The sources of data today available to philanthropic organizations are multifarious, enabled by advancements in digital technologies such as low-cost sensors, mobile devices, apps, wearables, and the increasing number of objects connected to the Internet of Things. The ways in which this data can be deployed are similarly varied. In the below, we examine eight pathways in particular for data-led innovation…(More)”.

Recalibrating assumptions on AI


Essay by Arthur Holland Michel: “Many assumptions about artificial intelligence (AI) have become entrenched despite the lack of evidence to support them. Basing policies on these assumptions is likely to increase the risk of negative impacts for certain demographic groups. These dominant assumptions include claims that AI is ‘intelligent’ and ‘ethical’, that more data means better AI, and that AI development is a ‘race’.

The risks of this approach to AI policymaking are often ignored, while the potential positive impacts of AI tend to be overblown. By illustrating how a more evidence-based, inclusive discourse can improve policy outcomes, this paper makes the case for recalibrating the conversation around AI policymaking…(More)”

Institutional review boards need new skills to review data sharing and management plans


Article by Vasiliki Rahimzadeh, Kimberley Serpico & Luke Gelinas: “New federal rules require researchers to submit plans for how to manage and share their scientific data, but institutional ethics boards may be underprepared to review them.

Data sharing is widely considered a conduit to scientific progress, the benefits of which should return to individuals and communities who invested in that science. This is the central premise underpinning changes recently announced by the US Office of Science and Technology Policy (OSTP) on sharing and managing data generated from federally funded research. Researchers will now be required to make publicly accessible any scholarly publications stemming from their federally funded research, as well as supporting data, according to the OSTP announcement. However, the attendant risks to individuals’ privacy-related interests and the increasing threat of community-based harms remain barriers to fostering a trustworthy ecosystem of biomedical data science.

Institutional review boards (IRBs) are responsible for ensuring protections for all human participants engaged in research, but they rarely include members with specialized expertise needed to effectively minimize data privacy and security risks. IRBs must be prepared to meet these review demands given the new data sharing policy changes. They will need additional resources to conduct quality and effective reviews of data management and sharing (DMS) plans. Practical ways forward include expanding IRB membership, proactively consulting with researchers, and creating new research compliance resources. This Comment will focus on data management and sharing oversight by IRBs in the US, but the globalization of data science research underscores the need for enhancing similar review capacities in data privacy, management and security worldwide…(More)”.

The Real Opportunities for Empowering People through Behavioral Science


Essay by Michael Hallsworth: “…There’s much to be gained by broadening out from designing choice architecture with little input from those who use it. But I think we need to change the way we talk about the options available.

Let’s start by noting that attention has focused on three opportunities in particular: nudge plus, self-nudges, and boosts.

Nudge plus is where a prompt to encourage reflection is built into the design and delivery of a nudge (or occurs close to it). People cannot avoid being made aware of the nudge and its purpose, enabling them to decide whether they approve of it or not. While some standard nudges, like commitment devices, already contain an element of self-reflection, a nudge plus must include an “active trigger.”

A self-nudge is where someone designs a nudge to influence their own behavior. In other words, they “structure their own decision environments” to make an outcome they desire more likely. An example might be creating a reminder to store snacks in less obvious and accessible places after they are bought.

Boosts emerge from the perspective that many of the heuristics we use to navigate our lives are useful and can be taught. A boost is when someone is helped to develop a skill, based on behavioral science, that will allow them to exercise their own agency and achieve their goals. Boosts aim at building people’s competences to influence their own behavior, whereas nudges try to alter the surrounding context and leave such competences unchanged.

When these ideas are discussed, there is often an underlying sense of “we need to move away from nudging and towards these approaches.” But to frame things this way neglects the crucial question of how empowerment actually happens.   

Right now, there is often a simplistic division between disempowering nudges on one side and enabling nudge plus/self-nudges/boosts on the other. In fact, these labels disguise two real drivers of empowerment that cut across the categories. They are:

  1. How far a person performing the behavior is involved in shaping the initiative itself. They might not be involved at all, be involved in co-designing the intervention, or initiate and drive the intervention themselves.
  2. The level and nature of any capacity created by the intervention. It may create none (i.e., have no cognitive or motivational effects), it may create awareness (i.e., the ability to reflect on what is happening), or it may build the ability to carry out an action (e.g., a skill).

The figure below shows how the different proposals map against these two drivers.


Source: Hallsworth, M. (2023). A Manifesto for Applying Behavioral Science.

A major point this figure calls attention to is co-design, which uses creative methods “to engage citizens, stakeholders and officials in an iterative process to respond to shared problems.” In other words, the people affected by an issue or change are involved as participants, rather than subjects. This involvement is intended to create more effective, tailored, and appropriate interventions that respond to a broader range of evidence…(More)”.

A case for democracy’s digital playground


Article by Petr Špecián: “Institutions are societies’ building blocks. Their role in shaping and channelling human potential is crucial. Yet the vast space of possible institutional designs remains largely unexplored…In the institutional landscape, there are plenty of alternative designs to explore. Some of them, such as replacing elected representation with sortition, look promising. But if they appear only faintly through the mist of uncertainty, their implementation would be an overly risky endeavour. We need more data to get a better idea of our options.

To explore alternative designs for the institutional landscape, we first need more data. I propose testing new institutional designs in a ‘digital playground’ of democracy

Currently, the multitude of reform proposals overwhelms the modest capacities available for their empirical testing. Only those most prominent — such as deliberative democracy — command enough resources to enable serious examination.

And the stakes are momentous. What if a radical reform of the political institutions proves disastrous? Clever speculations combined with scant experimental evidence cannot dispel reasonable doubts.

This is where my proposal for democracy’s digital playground comes in… Democracy’s digital playground is an artificial world in which institutional mechanisms are tested and compete against each other.

In some ways, it resembles massive multiplayer online games that emulate many of the real world’s crucial features. These games encourage people to work together to overcome challenges, which then motivates them to create political institutions conducive to their efforts. They can also migrate between communities, revealing their preference for alternative modes of governance.

A ‘digital playground’ of democracy emulates real-world features. It encourages people to work together to overcome challenges, thus creating conducive political institutions

That said, digital game-worlds in their current form have limited use for democratic experimentation. Their institution-building tools are crude, since much of the cooperation and conflict resolution happens outside the game environment itself, through forums and chats. Nor do these communities accurately represent the diversity of populations in real-world democracies. Players are predominantly young males with ample free time. And the games’ commercial purpose hinders the researchers’ quest for knowledge, too.

But perhaps these digital worlds can be adapted. Compared with the current methods used to test institutional mechanisms, they offer many advantages. Transparency is one such: a human-designed world is less opaque than the natural world. Easy participation represents another: regardless of location or resources, diverse people may join the community.

However, most important of all is the opportunity to calibrate the digital worlds as an optimum risk environment…(More)”.

Outsourcing Virtue


Essay by L. M. Sacasas: “To take a different class of example, we might think of the preoccupation with technological fixes to what may turn out to be irreducibly social and political problems. In a prescient essay from 2020 about the pandemic response, the science writer Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” There’s no need for good judgment, responsible governance, self-sacrifice or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.

Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Consider the case of NarxCare, a predictive program developed by Appriss Health, as reported in Wired in 2021. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. The details of the story are both fascinating and disturbing, but here’s the pertinent part for my purposes:

Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.

This is an obviously complex and sensitive issue, but it is hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility and cover-your-ass dynamics that have long characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it…(More)”.