

Article by Gianluca Sgueo: “The underlying tenet of so-called “human-centred design” is a public administration capable of delivering a satisfactory (even gratifying) digital experience to every user. Public services, however, are still marked by severe qualitative asymmetries, both nationally and supranationally. In this article we discuss the key shortcomings of digital public spaces, and we explore three approaches to re-designing such spaces with the aim of narrowing the existing gaps separating the ideal from the actual rendering of human-centred digital government…(More)”.

Three approaches to re-design digital public spaces 

Article by Beth Noveck: “In the first four months of the Covid-19 pandemic, government leaders paid $100 million to management consultants at McKinsey to model the spread of the coronavirus and build online dashboards to project hospital capacity.

It’s unsurprising that leaders turned to McKinsey for help, given the notorious backwardness of government technology. Our everyday experience with online shopping and search only highlights the stark contrast between user-friendly interfaces and the frustrating inefficiencies of government websites—or worse yet, the ongoing need to visit a government office to submit forms in person. The 2016 animated movie Zootopia depicts literal sloths running the DMV, a scene that was guaranteed to get laughs given our low expectations of government responsiveness.

More seriously, these doubts are reflected in the plummeting levels of public trust in government. From early Healthcare.gov failures to the more recent implosions of state unemployment websites, policymaking without attention to the technology that puts the policy into practice has led to disastrous consequences.

The root of the problem is that the government, the largest employer in the US, does not keep its employees up to date on the latest tools and technologies. When I served in the Obama White House as the nation’s first deputy chief technology officer, I had to learn constitutional basics and watch annual training videos on sexual harassment and cybersecurity. But I was never required to take a course on how to use technology to serve citizens and solve problems. In fact, the last significant legislation about what public professionals need to know was the Government Employee Training Act, from 1958, well before the internet was invented.

In the United States, public sector awareness of how to use data or human-centered design is very low. Out of 400-plus public servants surveyed in 2020, less than 25 percent received training in these more tech-enabled ways of working, though 70 percent said they wanted such training…(More)”.

Better Government Tech Is Possible

Article by Hélène Landemore, Andrew Sorota, and Audrey Tang: “Testifying before Congress last month about the risks of artificial intelligence, Sam Altman, the OpenAI CEO behind the massively popular large language model (LLM) ChatGPT, and Gary Marcus, a psychology professor at NYU famous for his positions against A.I. utopianism, both agreed on one point: They called for the creation of a government agency comparable to the FDA to regulate A.I. Marcus also suggested scientific experts should be given early access to new A.I. prototypes to be able to test them before they are released to the public.

Strikingly, however, neither of them mentioned the public, namely the billions of ordinary citizens around the world that the A.I. revolution, in all its uncertainty, is sure to affect. Don’t they also deserve to be included in decisions about the future of this technology?

We believe a global, democratic approach–not an exclusively technocratic one–is the only adequate answer to what is a global political and ethical challenge. Sam Altman himself stated in an earlier interview that in his “dream scenario,” a global deliberation involving all humans would be used to figure out how to govern A.I.

There are already proofs of concept for the various elements that a global, large-scale deliberative process would require in practice. By drawing on these diverse and complementary examples, we can turn this dream into a reality.

Deliberations based on random selection have grown in popularity on the local and national levels, with close to 600 cases documented by the OECD in the last 20 years. Their appeal lies in capturing a unique array of voices and lived experiences, thereby generating policy recommendations that better track the preferences of the larger population and are more likely to be accepted. Famous examples include the 2012 and 2016 Irish citizens’ assemblies on marriage equality and abortion, which led to successful referendums and constitutional change, as well as the 2019 and 2022 French citizens’ conventions on climate justice and end-of-life issues.

Taiwan has successfully experimented with mass consultations through digital platforms like Pol.is, which employs machine learning to identify consensus among vast numbers of participants. Digitally engaged participation has helped aggregate public opinion on hundreds of polarizing issues in Taiwan–such as regulating Uber–involving half of its 23.5 million people. Digital participation can also augment other smaller-scale forms of citizen deliberations, such as those taking place in person or based on random selection…(More)”.
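
Purely as an illustration of the kind of machine-assisted consensus finding described above, the sketch below clusters participants by their agree/disagree votes on short statements and flags statements that every opinion cluster tends to agree with. It is a toy approximation assuming numpy and scikit-learn, with invented data; it is not Pol.is’s actual algorithm or code.

    # Illustrative sketch (not Pol.is's implementation): cluster participants by
    # their agree/disagree votes, then surface statements that draw agreement
    # across all clusters. Assumes numpy and scikit-learn are installed.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Toy vote matrix: 200 participants x 12 statements, values in {-1, 0, +1}
    # (disagree, pass, agree). A real platform would work with sparse, incomplete votes.
    votes = rng.choice([-1, 0, 1], size=(200, 12))

    # Project participants into a low-dimensional "opinion space".
    coords = PCA(n_components=2).fit_transform(votes)

    # Group participants into opinion clusters.
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

    # Call a statement "consensus" if every cluster agrees with it on average --
    # a simplified stand-in for the group-aware consensus metrics such platforms report.
    # (With purely random toy votes, this list may well be empty.)
    consensus = [
        s for s in range(votes.shape[1])
        if all(votes[labels == c, s].mean() > 0 for c in range(4))
    ]
    print("candidate consensus statements:", consensus)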

Why picking citizens at random could be the best way to govern the A.I. revolution

Paper by Ayan Mukhopadhyay: “Emergency response management (ERM) is a challenge faced by communities across the globe. First responders must respond quickly to incidents such as fires, traffic accidents, and medical emergencies to minimize the risk to human life. Consequently, considerable attention has been devoted to studying emergency incidents and response over the last several decades. In particular, data-driven models help reduce human and financial loss and improve design codes, traffic regulations, and safety measures. This tutorial paper explores four sub-problems within emergency response: incident prediction, incident detection, resource allocation, and resource dispatch. We present mathematical formulations and broad solution frameworks for each of these problems. We also share open-source (synthetic) data from a large metropolitan area in the USA to support future work on data-driven emergency response…(More)”.
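
To make the flavour of these sub-problems concrete, here is a deliberately simple sketch, assuming numpy and entirely made-up data, of incident prediction (a per-region Poisson rate estimate) feeding a proportional resource allocation. It is an illustration only, not the formulations or frameworks presented in the paper.

    # Minimal illustration (not the paper's models): estimate per-region incident
    # rates from historical counts, then allocate a fixed fleet of responders
    # roughly in proportion to expected demand. Assumes numpy; data are invented.
    import numpy as np

    # Hypothetical daily incident counts for four regions over 30 days.
    rng = np.random.default_rng(1)
    history = rng.poisson(lam=[2.0, 5.0, 1.0, 3.5], size=(30, 4))

    # Simplest possible "prediction": the maximum-likelihood Poisson rate per region.
    rates = history.mean(axis=0)

    # Allocate 12 units proportionally to predicted demand (largest-remainder rounding).
    fleet = 12
    raw = rates / rates.sum() * fleet
    alloc = np.floor(raw).astype(int)
    remainder_order = np.argsort(raw - alloc)[::-1]
    for i in remainder_order[: fleet - alloc.sum()]:
        alloc[i] += 1

    for region, (rate, units) in enumerate(zip(rates, alloc)):
        print(f"region {region}: predicted {rate:.2f} incidents/day -> {units} units")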

Artificial Intelligence for Emergency Response

Article by Jack Gisby, Anna Kiknadze, Thomas Mitterling, and Isabell Roitner-Fransecky: “If you have ever used a smartwatch or other wearable tech to track your steps, heart rate, or sleep, you are part of the “quantified self” movement. You are voluntarily submitting millions of intimate data points for collection and analysis. The Economist highlighted the benefits of good-quality personal health and wellness data—increased physical activity, more efficient healthcare, and constant monitoring of chronic conditions. However, not everyone is enthusiastic about this trend. Many fear corporations will use the data to discriminate against the poor and vulnerable. For example, insurance firms could exclude patients based on pre-existing conditions revealed through personal data sharing.

Can we strike a balance between protecting the privacy of individuals and gathering valuable information? This blog post explores the application of a synthetic-population approach in New York City, a city with an established reputation for using big data approaches to support urban management, including for welfare provisions and targeted policy interventions.

To better understand poverty rates at the census tract level, World Data Lab, with the support of the Sloan Foundation, generated a synthetic population based on the borough of Brooklyn. Synthetic populations rely on a combination of microdata and summary statistics:

  • Microdata consists of personal information at the individual level. In the U.S., such data is available at the Public Use Microdata Area (PUMA) level. PUMAs are geographic areas that partition each state, containing no fewer than 100,000 people apiece; due to privacy concerns, microdata is not released at the more granular census tract level. Microdata covers both household- and individual-level information, including last year’s household income, household size, the number of rooms, and the age, sex, and educational attainment of each person living in the household.
  • Summary statistics are based on populations rather than individuals and are available at the census tract level, given that there are fewer privacy concerns. Census tracts are small statistical subdivisions of a county, averaging about 4,000 inhabitants; in New York City, a census tract roughly corresponds to a city block. Like microdata, summary statistics are available for both individuals and households. At the census tract level, we know the total population, the corresponding demographic breakdown, the number of households within different income brackets, the number of households by number of rooms, and other similar variables…(More)”.
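
The post does not detail World Data Lab’s method, but one common way to combine microdata with tract-level summary statistics is survey reweighting, for example iterative proportional fitting (raking). The sketch below, assuming numpy and using entirely hypothetical variables and targets, shows the basic idea rather than the actual pipeline.

    # Illustrative raking (iterative proportional fitting) -- one common way to
    # reweight PUMA-level microdata so its margins match census-tract summary
    # statistics. Not World Data Lab's method; data and variables are hypothetical.
    import numpy as np

    # Toy microdata: each record has an income bracket (0-2) and a household-size bin (0-1).
    income = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])
    hh_size = np.array([0, 1, 0, 1, 1, 0, 1, 0, 0, 1])
    weights = np.ones(len(income))

    # Tract-level targets: households per income bracket and per size bin.
    income_targets = np.array([120.0, 200.0, 80.0])
    size_targets = np.array([180.0, 220.0])

    for _ in range(50):  # iterate until weighted margins match both sets of targets
        for bracket, target in enumerate(income_targets):
            mask = income == bracket
            weights[mask] *= target / weights[mask].sum()
        for size_bin, target in enumerate(size_targets):
            mask = hh_size == size_bin
            weights[mask] *= target / weights[mask].sum()

    # Each microdata record now stands in for `weight` synthetic households in the tract.
    print(np.round(weights, 1))
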
Fighting poverty with synthetic data

Article by Heidi Reed: “When the COVID-19 pandemic first hit, businesses were faced with difficult decisions where making the ‘right choice’ just wasn’t possible. For example, if a business chose to shut down, it might protect employees from catching COVID, but at the same time, it would leave them without a paycheck. This was particularly true in the U.S. where the government played a more limited role in regulating business behavior, leaving managers and owners to make hard choices.

In this way, the pandemic is a societal paradox in which the social objectives of public health and economic prosperity are both interdependent and contradictory. How, then, does the public judge businesses when they make decisions favoring one social objective over another? To answer this question, I qualitatively surveyed the American public at the start of the COVID-19 crisis about what they considered to be responsible and irresponsible business behavior in response to the pandemic. Analyzing their answers led me to create the 4R Model of Moral Sensemaking of Competing Social Problems.

The 4R Model relies on two dimensions: the extent to which people prioritize one social problem over another and the extent to which they exhibit psychological discomfort (i.e. cognitive dissonance). In the first mode, Reconcile, people view the problems as compatible. There is then no need to prioritize and no resulting dissonance. These people think, “Businesses can just convert to making masks to help the cause and still make a profit.”

The second mode, Resign, similarly does not prioritize one problem over another; however, the problems are seen as competing, suggesting a high level of cognitive dissonance. These people might say, “It’s dangerous to stay open, but if the business closes, people will lose their jobs. Both decisions are bad.”

In the third mode, Ranking, people use prioritizing to reduce cognitive dissonance. These people say things like, “I understand people will be fired, but it’s more important to stop the virus.”

In the fourth and final mode, Rectify, people start by ranking but show signs of lingering dissonance as they acknowledge the harm created by prioritizing one problem over another. Unlike with the Resign mode, they try to find ways to reduce this harm. A common response in this mode would be, “Businesses should shut down, but they should also try to help employees file for unemployment.”

The 4R model has strong implications for other grand challenges where there may be competing social objectives such as in addressing climate change. To this end, the typology helps corporate social responsibility (CSR) decision-makers understand how they may be judged when businesses are forced to re- or de-prioritize CSR dimensions. In other words, it helps us understand how people make moral sense of business behavior when the right thing to do is paradoxically also the wrong thing…(More)”

When What’s Right Is Also Wrong: The Pandemic As A Corporate Social Responsibility Paradox

Paper by Anthony Cintron Roman, Kevin Xu, Arfon Smith, Jehu Torres Vega, Caleb Robinson, Juan M Lavista Ferres: “GitHub is the world’s largest platform for collaborative software development, with over 100 million users. GitHub is also used extensively for open data collaboration, hosting more than 800 million open data files, totaling 142 terabytes of data. This study highlights the potential of open data on GitHub and demonstrates how it can accelerate AI research. We analyze the existing landscape of open data on GitHub and the patterns of how users share datasets. Our findings show that GitHub is one of the largest hosts of open data in the world and has experienced an accelerated growth of open data assets over the past four years. By examining the open data landscape on GitHub, we aim to empower users and organizations to leverage existing open datasets and improve their discoverability — ultimately contributing to the ongoing AI revolution to help address complex societal issues. We release the three datasets that we have collected to support this analysis as open datasets at this https URL…(More)”
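
As a rough way to start exploring the open-data landscape the paper describes, the sketch below queries GitHub’s public repository-search API for projects tagged with the open-data topic. The endpoint and fields are real, but the query, error handling, and token setup are simplified assumptions; this is not the authors’ methodology, and it assumes the requests library is installed.

    # Rough illustration of exploring open data hosted on GitHub via the public
    # REST search API (not the authors' methodology). Unauthenticated requests are
    # heavily rate-limited; set GITHUB_TOKEN to a personal access token for real use.
    import os
    import requests

    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")  # optional, but raises rate limits
    if token:
        headers["Authorization"] = f"Bearer {token}"

    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": "topic:open-data", "sort": "stars", "per_page": 5},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()

    print(f"repositories tagged open-data: {payload['total_count']}")
    for repo in payload["items"]:
        print(f"{repo['full_name']}: {repo['stargazers_count']} stars")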

Open Data on GitHub: Unlocking the Potential of AI

Paper by Marc Cheong, Raula Gaikovina Kula, and Christoph Treude: “A key drawback to using an Open Source third-party library is the risk of introducing malicious attacks. In recent times, these threats have taken a new form, as maintainers turn their Open Source libraries into protestware. This is defined as software containing political messages delivered through these libraries, which can either be malicious or benign. Since developers are willing to freely open up their software to these libraries, much trust and responsibility are placed on the maintainers to ensure that the library does what it promises to do. This paper looks into the possible scenarios where developers might consider turning their Open Source Software into protestware, using an ethico-philosophical lens. Using different frameworks commonly used in AI ethics, we explore the different dilemmas that may result in protestware. Additionally, we illustrate how an open-source maintainer’s decision to protest is influenced by different stakeholders (viz., their membership in the OSS community, their personal views, financial motivations, social status, and moral viewpoints), making protestware a multifaceted and intricate matter…(More)”

Ethical Considerations Towards Protestware

Updated edition of book by Martin Campbell-Kelly, William Aspray, Nathan Ensmenger, Jeffrey R. Yost: “…traces the history of the computer and shows how business and government were the first to explore its unlimited, information-processing potential. Old-fashioned entrepreneurship combined with scientific know-how inspired now-famous computer engineers to create the technology that became IBM. Wartime needs drove the development of the giant ENIAC, the first fully electronic computer. Later, the PC enabled modes of computing that liberated people from room-sized mainframe computers.

This third edition provides updated analysis on software and computer networking, including new material on the programming profession, social networking, and mobile computing. It expands its focus on the IT industry with fresh discussion on the rise of Google and Facebook as well as how powerful applications are changing the way we work, consume, learn, and socialize. Computer is an insightful look at the pace of technological advancement and the seamless way computers are integrated into the modern world. Through comprehensive history and accessible writing, Computer is perfect for courses on computer history, technology history, and information and society, as well as a range of courses in the fields of computer science, communications, sociology, and management…(More)”.

Computer: A History of the Information Machine

Report by Suzie Dunn, Tracy Vaillancourt and Heather Brittain: “Various forms of digital technology are being used to inflict significant harms online. This is a pervasive issue in online interactions, in particular with regard to technology-facilitated gender-based violence (TFGBV) and technology-facilitated violence (TFV) against LGBTQ+ people. This modern form of violence perpetuates gender inequality and discrimination against LGBTQ+ people and has significant impacts on its targets.

As part of a multi-year research project, Supporting a Safer Internet (in partnership with the International Development Research Centre), exploring the prevalence and impacts of TFGBV experienced by women, transgender, gender non-conforming and gender-diverse people, as well as TFV against LGBTQ+ individuals, an international survey was conducted by Ipsos on behalf of the Centre for International Governance Innovation (CIGI). The survey examined the influence of gender and sexual orientation on people’s experiences with online harms, with a focus on countries in the Global South. Data was collected from 18,149 people of all genders in 18 countries.

The special report provides background information on TFGBV and TFV against LGBTQ+ people by summarizing some of the existing research on the topic. It then presents the quantitative data collected on people’s experiences with, and opinions on, online harms. A list of recommendations is provided for governments, technology companies, academics, researchers and civil society organizations on how they can contribute to addressing and ending TFV…(More)”

(Read the Supporting Safer Digital Spaces: Highlights here; read the French translation of the Highlights here.)

Supporting Safer Digital Spaces
