Modernizing philanthropy for the 21st century


Essay by Stefaan G. Verhulst, Lisa T. Moretti, Hannah Chafetz and Alex Fischer: “…How can philanthropies move in a more deliberate yet responsible manner toward using data to advance their goals? The purpose of this article is to offer an overview of existing and potential qualitative and quantitative data innovations within the philanthropic sector. In what follows, we examine four areas where there is a need for innovation in how philanthropy works, and eight pathways for the responsible use of data innovations to address existing shortcomings.

Four areas for innovation

In order to identify potential data-led solutions, we need to begin by understanding current shortcomings. Through our research, we identified four areas within philanthropy that are ripe for data-led innovation:

  • First, there is a need for innovation in the identification of shared questions and overlapping priorities among communities, public service, and philanthropy. The philanthropic sector is well placed to foster a new combination of approaches, products, and processes while still enabling communities to prioritize the issues that matter most.
  • Second, there is a need to improve coordination and transparency across the sector. Even when shared priorities are identified, there often remains a large gap between the imperatives of building common agendas and the ability to act on those agendas in a coordinated and strategic way. New ways to collect and generate cross-sector shared intelligence are needed to better design funding strategies and make difficult trade-off choices.
  • Third, reliance on fixed-project-based funding often means that philanthropists must wait for impact reports to assess results. There is a need to enable iteration and adaptive experimentation to help foster a culture of greater flexibility, agility, learning, and continuous improvement.
  • Lastly, innovations for impact assessments and accountability could help philanthropies better understand how their funding and support have impacted the populations they intend to serve.

Needless to say, data alone cannot address all of these shortcomings. For true innovation, qualitative and quantitative data must be combined with a much wider range of human, institutional, and cultural change. Nonetheless, our research indicates that when used responsibly, data-driven methods and tools do offer pathways for success. We examine some of those pathways in the next section.

Eight pathways for data-driven innovations in philanthropy

The sources of data available to philanthropic organizations today are multifarious, enabled by advancements in digital technologies such as low-cost sensors, mobile devices, apps, wearables, and the increasing number of objects connected to the Internet of Things. The ways in which this data can be deployed are similarly varied. Below, we examine eight pathways in particular for data-led innovation…(More)”.

Recalibrating assumptions on AI


Essay by Arthur Holland Michel: “Many assumptions about artificial intelligence (AI) have become entrenched despite the lack of evidence to support them. Basing policies on these assumptions is likely to increase the risk of negative impacts for certain demographic groups. These dominant assumptions include claims that AI is ‘intelligent’ and ‘ethical’, that more data means better AI, and that AI development is a ‘race’.

The risks of this approach to AI policymaking are often ignored, while the potential positive impacts of AI tend to be overblown. By illustrating how a more evidence-based, inclusive discourse can improve policy outcomes, this paper makes the case for recalibrating the conversation around AI policymaking…(More)”

Institutional review boards need new skills to review data sharing and management plans


Article by Vasiliki Rahimzadeh, Kimberley Serpico & Luke Gelinas: “New federal rules require researchers to submit plans for how to manage and share their scientific data, but institutional ethics boards may be underprepared to review them.

Data sharing is widely considered a conduit to scientific progress, the benefits of which should return to individuals and communities who invested in that science. This is the central premise underpinning changes recently announced by the US Office of Science and Technology Policy (OSTP)1 on sharing and managing data generated from federally funded research. Researchers will now be required to make publicly accessible any scholarly publications stemming from their federally funded research, as well as supporting data, according to the OSTP announcement. However, the attendant risks to individuals’ privacy-related interests and the increasing threat of community-based harms remain barriers to fostering a trustworthy ecosystem of biomedical data science.

Institutional review boards (IRBs) are responsible for ensuring protections for all human participants engaged in research, but they rarely include members with specialized expertise needed to effectively minimize data privacy and security risks. IRBs must be prepared to meet these review demands given the new data sharing policy changes. They will need additional resources to conduct quality and effective reviews of data management and sharing (DMS) plans. Practical ways forward include expanding IRB membership, proactively consulting with researchers, and creating new research compliance resources. This Comment will focus on data management and sharing oversight by IRBs in the US, but the globalization of data science research underscores the need for enhancing similar review capacities in data privacy, management and security worldwide…(More)”.

The Real Opportunities for Empowering People through Behavioral Science


Essay by Michael Hallsworth: “…There’s much to be gained by broadening out from designing choice architecture with little input from those who use it. But I think we need to change the way we talk about the options available.

Let’s start by noting that attention has focused on three opportunities in particular: nudge plus, self-nudges, and boosts.

Nudge plus is where a prompt to encourage reflection is built into the design and delivery of a nudge (or occurs close to it). People cannot avoid being made aware of the nudge and its purpose, enabling them to decide whether they approve of it or not. While some standard nudges, like commitment devices, already contain an element of self-reflection, a nudge plus must include an “active trigger.”

A self-nudge is where someone designs a nudge to influence their own behavior. In other words, they “structure their own decision environments” to make an outcome they desire more likely. An example might be creating a reminder to store snacks in less obvious and accessible places after they are bought.

Boosts emerge from the perspective that many of the heuristics we use to navigate our lives are useful and can be taught. A boost is when someone is helped to develop a skill, based on behavioral science, that will allow them to exercise their own agency and achieve their goals. Boosts aim at building people’s competences to influence their own behavior, whereas nudges try to alter the surrounding context and leave such competences unchanged.

When these ideas are discussed, there is often an underlying sense of “we need to move away from nudging and towards these approaches.” But to frame things this way neglects the crucial question of how empowerment actually happens.   

Right now, there is often a simplistic division between disempowering nudges on one side and enabling nudge plus/self-nudges/boosts on the other. In fact, these labels disguise two real drivers of empowerment that cut across the categories. They are:

  1. How far a person performing the behavior is involved in shaping the initiative itself. They might not be involved at all, be involved in co-designing the intervention, or initiate and drive the intervention themselves.
  2. The level and nature of any capacity created by the intervention. It may create none (i.e., have no cognitive or motivational effects), it may create awareness (i.e., the ability to reflect on what is happening), or it may build the ability to carry out an action (e.g., a skill).

The figure below shows how the different proposals map against these two drivers.


Source: Hallsworth, M. (2023). A Manifesto for Applying Behavioral Science.
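
As a minimal sketch of that mapping (the level labels and each proposal's placement below are illustrative assumptions drawn from the descriptions above, not a reproduction of Hallsworth's figure), the two drivers can be expressed as a small data structure:

```python
from dataclasses import dataclass

# The two drivers, each with illustrative levels taken from the text above.
INVOLVEMENT = ("none", "co-designed", "self-initiated")   # driver 1
CAPACITY = ("none", "awareness", "skill")                 # driver 2

@dataclass
class Proposal:
    name: str
    involvement: str  # how far the person shapes the initiative
    capacity: str     # what capacity the intervention creates

# One plausible placement of each proposal against the drivers.
proposals = [
    Proposal("standard nudge", "none", "none"),
    Proposal("nudge plus", "none", "awareness"),
    Proposal("self-nudge", "self-initiated", "none"),
    Proposal("boost", "none", "skill"),
]

for p in proposals:
    assert p.involvement in INVOLVEMENT and p.capacity in CAPACITY
    print(f"{p.name:15s} involvement={p.involvement:15s} capacity={p.capacity}")
```

On this reading, what separates a boost from a nudge plus is the capacity it creates (a skill rather than awareness), while a self-nudge differs mainly on the involvement axis.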

A major point this figure calls attention to is co-design, which uses creative methods “to engage citizens, stakeholders and officials in an iterative process to respond to shared problems.” In other words, the people affected by an issue or change are involved as participants, rather than subjects. This involvement is intended to create more effective, tailored, and appropriate interventions that respond to a broader range of evidence…(More)”.

A case for democracy’s digital playground


Article by Petr Špecián: “Institutions are societies’ building blocks. Their role in shaping and channelling human potential is crucial. Yet the vast space of possible institutional designs remains largely unexplored… In the institutional landscape, there are plenty of alternative designs to explore. Some of them, such as replacing elected representation with sortition, look promising. But if they appear only faintly through the mist of uncertainty, their implementation would be an overly risky endeavour. We need more data to get a better idea of our options.


Currently, the multitude of reform proposals overwhelms the modest capacities available for their empirical testing. Only those most prominent — such as deliberative democracy — command enough resources to enable serious examination.

And the stakes are momentous. What if a radical reform of the political institutions proves disastrous? Clever speculations combined with scant experimental evidence cannot dispel reasonable doubts.

This is where my proposal for democracy’s digital playground comes in… Democracy’s digital playground is an artificial world in which institutional mechanisms are tested and compete against each other.

In some ways, it resembles massive multiplayer online games that emulate many of the real world’s crucial features. These games encourage people to work together to overcome challenges, which then motivates them to create political institutions conducive to their efforts. They can also migrate between communities, revealing their preference for alternative modes of governance.


That said, digital game-worlds in their current form have limited use for democratic experimentation. Their institution-building tools are crude, since much of the cooperation and conflict resolution happens outside the game environment itself, through forums and chats. Nor do these communities accurately represent the diversity of populations in real-world democracies. Players are predominantly young males with ample free time. And the games’ commercial purpose hinders the researchers’ quest for knowledge, too.

But perhaps these digital worlds can be adapted. Compared with the current methods used to test institutional mechanisms, they offer many advantages. Transparency is one such advantage: a human-designed world is less opaque than the natural world. Easy participation is another: regardless of location or resources, diverse people may join the community.

However, most important of all is the opportunity to calibrate the digital worlds as an optimum risk environment…(More)”.

Outsourcing Virtue


Essay by L. M. Sacasas: “To take a different class of example, we might think of the preoccupation with technological fixes to what may turn out to be irreducibly social and political problems. In a prescient essay from 2020 about the pandemic response, the science writer Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” There’s no need for good judgment, responsible governance, self-sacrifice or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.

Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Consider the case of NarxCare, a predictive program developed by Appriss Health, as reported in Wired in 2021. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. The details of the story are both fascinating and disturbing, but here’s the pertinent part for my purposes:

Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.

This is an obviously complex and sensitive issue, but it is hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility and cover-your-ass dynamics that have long characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it…(More)”.

Data Reboot: 10 Reasons why we need to change how we approach data in today’s society


Article by Stefaan Verhulst and Julia Stamm: “…Below, we consider 10 reasons why we need to reboot the data conversation and change our approach to data governance…

1. Data is not the new oil: This phrase, sometimes attributed to Clive Humby in 2006, has become a staple of media and other commentaries. In fact, the analogy is flawed in many ways. As Mathias Risse, from the Carr Center for Human Rights Policy at Harvard, points out, oil is scarce, fungible, and rivalrous (can be used and owned by a single entity). Data, by contrast, possesses none of these properties. In particular, as we explain further below, data is shareable (i.e., non-rivalrous); its societal and economic value also greatly increases through sharing. The data-as-oil analogy should thus be discarded, both because it is inaccurate and because it artificially inhibits the potential of data.

2. Not all data is equal: Assessing the value of data can be challenging, leading many organizations to treat (e.g., collect and store) all data equally. The value of data varies widely, however, depending on context, use case, and the underlying properties of the data (the information it contains, its quality, etc.). Establishing metrics or processes to accurately value data is therefore essential. This is particularly true as the amount of data continues to explode, potentially exceeding stakeholders’ ability to store or process all generated data.
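
As a toy illustration of what such a valuation process might look like (the dimensions and weights below are assumptions made for the sketch, not an established metric):

```python
# Toy sketch of a data-valuation rubric. The dimensions and weights are
# illustrative assumptions, not an established standard.
WEIGHTS = {"quality": 0.4, "relevance_to_use_case": 0.4, "uniqueness": 0.2}

def data_value_score(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# A high-quality dataset with limited relevance to the use case at hand:
print(data_value_score({"quality": 0.9, "relevance_to_use_case": 0.3, "uniqueness": 0.5}))
# -> 0.58
```

Even a crude rubric like this makes the trade-off explicit: an organization can rank datasets before deciding what to collect and store, rather than treating all data equally.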

3. Weighing risks and benefits of data use: Following a string of high-profile privacy violations in recent years, public and regulatory attention has largely focused on the risks associated with data, and steps required to minimize those risks. Such concerns are, of course, valid and important. At the same time, a sole focus on preventing harms has led to artificial limits on maximizing the potential benefits of data — or, put another way, on the risks of not using data. It is time to apply a more balanced approach, one that weighs risks against benefits. By freeing up large amounts of currently siloed and unused data, such a responsible data framework could unleash huge amounts of social innovation and public benefit….

7. From individual consent to a social license: Social license refers to the informal demands or expectations set by society on how data may be used, reused, and shared. The notion, which originates in the field of environmental resource management, recognizes that social license may not overlap perfectly with legal or regulatory license. In some cases, it may exceed formal approvals for how data can be used, and in others, it may be more limited. Either way, public trust is as essential as legal compliance — a thriving data ecology can only exist if data holders and other stakeholders operate within the boundaries of community norms and expectations.

8. From data ownership to data stewardship: Many of the above propositions add up to an implicit recognition that we need to move beyond notions of ownership when it comes to data. As a non-rivalrous asset, data offers massive potential for the public good and social transformation. That potential varies by context and use case; sharing and collaboration are essential to ensuring that the right data is brought to bear on the most relevant social problems. A notion of stewardship — which recognizes that data is held in public trust, available to be shared in a responsible manner — is thus more helpful (and socially beneficial) than outdated notions of ownership. A number of tools and mechanisms exist to encourage stewardship and sharing. As we have written elsewhere, data collaboratives are among the most promising.

9. Data asymmetries: Data, it was often proclaimed, would be a harbinger of greater societal prosperity and well-being. The era of big data was to usher in a new tide of innovation and economic growth that would lift all boats. The reality has been somewhat different. The era of big data has instead been characterized by persistent, and in many ways worsening, asymmetries. These manifest in inequalities in access to data itself, and, more problematically, inequalities in the way the social and economic fruits of data are being distributed. We thus need to reconceptualize our approach to data, ensuring that its benefits are more equitably spread, and that it does not in fact end up exacerbating the widespread and systematic inequalities that characterize our times.

10. Reconceptualizing self-determination…(More)” (First published as Data Reboot: 10 Gründe, warum wir unseren Umgang mit Daten ändern müssen at 1E9).

The Case for Including Data Stewardship in ESG


Article by Stefaan Verhulst: “Amid all the attention to environmental, social, and governance factors in investing, better known as ESG, there has been relatively little emphasis on governance, and even less on data governance. This is a significant oversight that needs to be addressed, as data governance has a crucial role to play in achieving environmental and social goals. 

Data stewardship in particular should be considered an important ESG practice. Making data accessible for reuse in the public interest can promote social and environmental goals while boosting a company’s efficiency and profitability. And investing in companies with data-stewardship capabilities makes good sense. But first, we need to move beyond current debates on data and ESG.

Several initiatives have begun to focus on data as it relates to ESG. For example, a recent McKinsey report on ESG governance within the banking sector argues that banks “will need to adjust their data architecture, define a data collection strategy, and reorganize their data governance model to successfully manage and report ESG data.” Deloitte recognizes the need for “a robust ESG data strategy.” PepsiCo likewise highlights its ESG Data Governance Program, and Maersk emphasizes data ethics as a key component in its ESG priorities.

These efforts are meaningful, but they are largely geared toward using data to measure compliance with environmental and social commitments. They don’t do much to help us understand how companies are leveraging data as an asset to achieve environmental and social goals. In particular, as I’ve written elsewhere, data stewardship, by which privately held data is reused for public-interest purposes, is an important new component of corporate social responsibility, as well as a key tool in data governance. Too many data-governance efforts are focused simply on using data to measure compliance or impact. We need to move beyond that mindset. Instead, we should adopt a data stewardship approach, where data is made accessible for the public good. There are promising signs of change in this direction…(More)”.

We need a much more sophisticated debate about AI


Article by Jamie Susskind: “Twentieth-century ways of thinking will not help us deal with the huge regulatory challenges the technology poses…The public debate around artificial intelligence sometimes seems to be playing out in two alternate realities.

In one, AI is regarded as a remarkable but potentially dangerous step forward in human affairs, necessitating new and careful forms of governance. This is the view of more than a thousand eminent individuals from academia, politics, and the tech industry who this week used an open letter to call for a six-month moratorium on the training of certain AI systems. AI labs, they claimed, are “locked in an out-of-control race to develop and deploy ever more powerful digital minds”. Such systems could “pose profound risks to society and humanity”. 

On the same day as the open letter, but in a parallel universe, the UK government decided that the country’s principal aim should be to turbocharge innovation. The white paper on AI governance had little to say about mitigating existential risk, but lots to say about economic growth. It proposed the lightest of regulatory touches and warned against “unnecessary burdens that could stifle innovation”. In short: you can’t spell “laissez-faire” without “AI”. 

The difference between these perspectives is profound. If the open letter is taken at face value, the UK government’s approach is not just wrong, but irresponsible. And yet both viewpoints are held by reasonable people who know their onions. They reflect an abiding political disagreement which is rising to the top of the agenda.

But despite this divergence there are four ways of thinking about AI that ought to be acceptable to both sides.

First, it is usually unhelpful to debate the merits of regulation by reference to a particular crisis (Cambridge Analytica), technology (GPT-4), person (Musk), or company (Meta). Each carries its own problems and passions. A sound regulatory system will be built on assumptions that are sufficiently general in scope that they will not immediately be superseded by the next big thing. Look at the signal, not the noise…(More)”.

Can A.I. and Democracy Fix Each Other?


Peter Coy at The New York Times: “Democracy isn’t working very well these days, and artificial intelligence is scaring the daylights out of people. Some creative people are looking at those two problems and envisioning a solution: Democracy fixes A.I., and A.I. fixes democracy.

Attitudes about A.I. are polarized, with some focusing on its promise to amplify human potential and others dwelling on what could go wrong (and what has already gone wrong). We need to find a way out of the impasse, and leaving it to the tech bros isn’t the answer. Democracy — giving everyone a voice on policy — is clearly the way to go.

Democracy can be taken hostage by partisans, though. That’s where artificial intelligence has a role to play. It can make democracy work better by surfacing ideas from everyone, not just the loudest. It can find surprising points of agreement among seeming antagonists and summarize and digest public opinion in a way that’s useful to government officials. Assisting democracy is a more socially valuable function for large language models than, say, writing commercials for Spam in iambic pentameter. The goal, according to the people I spoke to, is to make A.I. part of the solution, not just part of the problem…(More)” (See also: Where and when AI and CI meet: exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern…)”.
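
As a minimal sketch of how that consensus-surfacing step might work (the group-and-threshold logic below is loosely inspired by deliberation platforms such as Polis; every statement, group, and number is invented for illustration):

```python
# Minimal sketch of consensus-surfacing across opinion groups, loosely
# inspired by deliberation platforms such as Polis. All statements,
# groups, and vote fractions are invented for illustration.

# votes[group][statement] = fraction of that group agreeing with the statement
votes = {
    "group_a": {"fund parks": 0.90, "raise tolls": 0.80, "expand transit": 0.70},
    "group_b": {"fund parks": 0.85, "raise tolls": 0.20, "expand transit": 0.75},
}

def consensus_statements(votes: dict, threshold: float = 0.6) -> list[str]:
    """Return statements that clear the agreement threshold in every group."""
    statements = next(iter(votes.values())).keys()
    return [
        s for s in statements
        if all(group[s] >= threshold for group in votes.values())
    ]

print(consensus_statements(votes))  # -> ['fund parks', 'expand transit']
```

Statements that clear the bar in every opinion group — here, parks funding and transit — are exactly the “surprising points of agreement” the excerpt describes; a large language model's role would be upstream, turning free-text public comments into the structured statements and votes this step consumes.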