
Stefaan Verhulst

The Administrative Conference of the United States: “Artificial intelligence (AI) promises to transform how government agencies do their work. Rapid developments in AI have the potential to reduce the cost of core governance functions, improve the quality of decisions, and unleash the power of administrative data, thereby making government performance more efficient and effective. Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, their own capacity to learn over time using AI, and whether the use of AI is even permitted.

These are important issues for public debate and academic inquiry. Yet little is known about how agencies are currently using AI systems beyond a few headline-grabbing examples or surface-level descriptions. Moreover, even amidst growing public and scholarly discussion about how society might regulate government use of AI, little attention has been devoted to how agencies acquire such tools in the first place or oversee their use. In an effort to fill these gaps, the Administrative Conference of the United States (ACUS) commissioned this report from researchers at Stanford University and New York University. The research team included a diverse set of lawyers, law students, computer scientists, and social scientists with the capacity to analyze these cutting-edge issues from technical, legal, and policy angles. The resulting report offers three cuts at federal agency use of AI:

  • a rigorous canvass of AI use at the 142 most significant federal departments, agencies, and sub-agencies (Part I);
  • a series of in-depth but accessible case studies of specific AI applications at seven leading agencies covering a range of governance tasks (Part II); and
  • a set of cross-cutting analyses of the institutional, legal, and policy challenges raised by agency use of AI (Part III)….(More)”
Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies

Editorial at Nature: “Everyone’s talking about reproducibility — or at least they are in the biomedical and social sciences. The past decade has seen a growing recognition that results must be independently replicated before they can be accepted as true.

A focus on reproducibility is necessary in the physical sciences, too — an issue explored in this month’s Nature Physics, in which two metrologists argue that reproducibility should be viewed through a different lens. When results in the science of measurement cannot be reproduced, argue Martin Milton and Antonio Possolo, it’s a sign of the scientific method at work — and an opportunity to promote public awareness of the research process (M. J. T. Milton and A. Possolo Nature Phys. 16, 117–119; 2020)….

However, despite numerous experiments spanning three centuries, the precise value of G, the Newtonian constant of gravitation, remains uncertain. The root of the uncertainty is not fully understood: it could be due to undiscovered errors in how the value is being measured, or it could indicate the need for new physics. One scenario being explored is that G could even vary over time, in which case scientists might have to revise their view that it has a fixed value.

If that were to happen — although physicists think it unlikely — it would be a good example of non-reproduced data being subjected to the scientific process: experimental results questioning a long-held theory, or pointing to the existence of another theory altogether.

Questions in biomedicine and in the social sciences do not reduce so cleanly to the determination of a fundamental constant of nature. Compared with metrology, experiments to reproduce results in fields such as cancer biology are likely to include many more sources of variability, which are fiendishly hard to control for.

But metrology reminds us that when researchers attempt to reproduce the results of experiments, they do so using a set of agreed — and highly precise — experimental standards, known in the measurement field as metrological traceability. It is this aspect, the authors contend, that helps to build trust and confidence in the research process….(More)”.

Irreproducibility is not a sign of failure, but an inspiration for fresh ideas

Book edited by Yoshiki Yamagata and Perry P.J. Yang: “…shows how to design, model and monitor smart communities using a distinctive IoT-based urban systems approach. Focusing on the essential dimensions that constitute smart communities (energy, transport, urban form, and human comfort), this helpful guide explores how IoT-based sharing platforms can achieve greater community health and well-being based on relationship building, trust, and resilience. Uncovering the achievements of the most recent research on the potential of IoT and big data, this book shows how to identify, structure, measure and monitor multi-dimensional urban sustainability standards and progress.

This thorough book demonstrates how to select a project, which technologies are most cost-effective, and their cost-benefit considerations. The book also illustrates the financial, institutional, policy and technological needs for the successful transition to smart cities, and concludes by discussing both the conventional and innovative regulatory instruments needed for a fast and smooth transition to smart, sustainable communities….(More)”.

Urban Systems Design: Creating Sustainable Smart Cities in the Internet of Things Era

Rana Foroohar at the Financial Times: “…A report by a Swedish research group called V-Dem found Taiwan was subject to more disinformation than nearly any other country, much of it coming from mainland China. Yet the popularity of pro-independence politicians is growing there, something Ms Tang views as a circular phenomenon.

When politicians enable more direct participation, the public begins to have more trust in government. Rather than social media creating “a false sense of us versus them,” she notes, decentralised technologies have “enabled a sense of shared reality” in Taiwan.

The same seems to be true in a number of other countries, including Israel, where Green party leader and former Occupy activist Stav Shaffir crowdsourced technology expertise to develop a bespoke data analysis app that allowed her to make previously opaque Treasury data transparent. She’s now heading an OECD transparency group to teach other politicians how to do the same. Part of the power of decentralised technologies is that they allow, at scale, the sort of public input on a wide range of complex issues that would have been impossible in the analogue era.

Consider “quadratic voting”, a concept that has been popularised by economist Glen Weyl, co-author of Radical Markets: Uprooting Capitalism and Democracy for a Just Society. Mr Weyl is the founder of the RadicalxChange movement, which aims to empower a more participatory democracy. Unlike a binary “yes” or “no” vote for or against one thing, quadratic voting allows a large group of people to use a digital platform to express the strength of their desire on a variety of issues.

For example, when he headed the appropriations committee in the Colorado House of Representatives, Chris Hansen used quadratic voting to help his party quickly sort through how much of their $40m budget should be allocated to more than 100 proposals….(More)”.
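To make the mechanics concrete, here is a minimal sketch of a quadratic voting tally in Python. It is illustrative only, not the platform Colorado actually used; the credit budget, proposal names, and ballot format are all assumptions. The key property is that casting v votes on a single issue costs v² credits, so voters can express intensity of preference, but at a steeply rising price.

```python
# Minimal sketch of quadratic voting (illustrative; not the actual tool
# used by the Colorado House). Casting v votes on one issue costs v**2
# credits, so strong preferences are disproportionately expensive.

from collections import defaultdict

CREDITS_PER_VOTER = 100  # hypothetical credit budget per voter

def vote_cost(votes: int) -> int:
    """Quadratic cost rule: v votes on one issue cost v**2 credits."""
    return votes ** 2

def tally(ballots):
    """Sum votes per proposal, rejecting any ballot that exceeds its budget."""
    totals = defaultdict(int)
    for ballot in ballots:
        spent = sum(vote_cost(v) for v in ballot.values())
        if spent > CREDITS_PER_VOTER:
            raise ValueError(f"ballot spends {spent} credits, over budget")
        for proposal, votes in ballot.items():
            totals[proposal] += votes
    return dict(totals)

# Two hypothetical voters allocate credits across three proposals.
# Voter 1: 9 votes on A (81 credits), 4 on B (16), 1 on C (1) -> 98 spent.
# Voter 2: 2 votes on A (4), 7 on B (49), 5 on C (25) -> 78 spent.
ballots = [{"A": 9, "B": 4, "C": 1}, {"A": 2, "B": 7, "C": 5}]
print(tally(ballots))  # {'A': 11, 'B': 11, 'C': 6}
```

Note how the first voter signals one strong priority (nine votes costing 81 of 100 credits) while still weighing in on the other proposals, which a one-person-one-vote ballot cannot capture.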

Digital tools can be a useful bolster to democracy

Essay by Syon P. Bhanot and Elizabeth Linos: “The last decade has seen remarkable growth in the field of behavioral public administration, both in practice and in academia. In both domains, applications of behavioral science to policy problems have moved forward at breakneck speed; researchers are increasingly pursuing randomized behavioral interventions in public administration contexts, editors of peer‐reviewed academic journals are showing greater interest in publishing this work, and policy makers at all levels are creating new initiatives to bring behavioral science into the public sector.

However, because the expansion of the field has been so rapid, there has been relatively little time to step back and reflect on the work that has been done and to assess where the field is going in the future. It is high time for such reflection: where is the field currently on track, and where might it need course correction?…(More)”.

Behavioral Public Administration: Past, Present, and Future

Book by David J. Hand: “In the era of big data, it is easy to imagine that we have all the information we need to make good decisions. But in fact the data we have are never complete, and may be only the tip of the iceberg. Just as much of the universe is composed of dark matter, invisible to us but nonetheless present, the universe of information is full of dark data that we overlook at our peril. In Dark Data, data expert David Hand takes us on a fascinating and enlightening journey into the world of the data we don’t see.

Dark Data explores the many ways in which we can be blind to missing data and how that can lead us to conclusions and actions that are mistaken, dangerous, or even disastrous. Examining a wealth of real-life examples, from the Challenger shuttle explosion to complex financial frauds, Hand gives us a practical taxonomy of the types of dark data that exist and the situations in which they can arise, so that we can learn to recognize and control for them. In doing so, he teaches us not only to be alert to the problems presented by the things we don’t know, but also shows how dark data can be used to our advantage, leading to greater understanding and better decisions.

Today, we all make decisions using data. Dark Data shows us all how to reduce the risk of making bad ones….(More)”.

Dark Data: Why What You Don’t Know Matters

Report by Aleksandra Berditchevskaia and Peter Baek: “When it comes to artificial intelligence (AI), the dominant media narratives often end up taking one of two opposing stances: AI is the saviour or the villain. Whether it is presented as the technology responsible for killer robots and mass job displacement or the one curing all disease and halting the climate crisis, it seems clear that AI will be a defining feature of our future society. However, these visions leave little room for nuance and informed public debate. They also help propel the typical trajectory followed by emerging technologies; with inevitable regularity we observe the ascent of new technologies to the peak of inflated expectations they will not be able to fulfil, before dooming them to a period languishing in the trough of disillusionment.[1]

There is an alternative vision for the future of AI development. By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way. Clearly defining the problems we want to address and focusing on solutions that result in the most collective benefit can lead us towards a better relationship between machine and human intelligence. By considering AI in the context of large-scale participatory projects across areas such as citizen science, crowdsourcing and participatory digital democracy, we can both amplify what it is possible to achieve through collective effort and shape the future trajectory of machine intelligence. We call this 21st-century collective intelligence (CI).

In The Future of Minds and Machines we introduce an emerging framework for thinking about how groups of people interface with AI and map out the different ways that AI can add value to collective human intelligence and vice versa. The framework has, in large part, been developed through analysis of inspiring projects and organisations that are testing out opportunities for combining AI & CI in areas ranging from farming to monitoring human rights violations. Bringing together these two fields is not easy. The design tensions identified through our research highlight the challenges of navigating this opportunity and selecting the criteria that public sector decision-makers should consider in order to make the most of solving problems with both minds and machines….(More)”.

The Future of Minds and Machines

Louise Lief at Stanford Social Innovation Review: “Today, data governs almost every aspect of our lives, shaping the opportunities we have, how we perceive reality and understand problems, and even what we believe to be possible. Philanthropy is particularly data driven, relying on it to inform decision-making, define problems, and measure impact. But what happens when data design and collection methods are flawed, lack context, or contain critical omissions and misdirected questions? With bad data, data-driven strategies can misdiagnose problems and worsen inequities with interventions that don’t reflect what is needed.

Data justice begins by asking who controls the narrative. Who decides what data is collected and for which purpose? Who interprets what it means for a community? Who governs it? In recent years, affected communities, social justice philanthropists, and academics have all begun looking deeper into the relationship between data and social justice in our increasingly data-driven world. But philanthropy can play a game-changing role in developing practices of data justice to more accurately reflect the lived experience of communities being studied. Simply incorporating data justice principles into everyday foundation practice—and requiring it of grantees—would be transformative: It would not only revitalize research, strengthen communities, influence policy, and accelerate social change, it would also help address deficiencies in current government data sets.

When Data Is Flawed

Some of the most pioneering work on data justice has been done by Native American communities, who have suffered more than most from problems with bad data. A 2017 analysis of American Indian data challenges—funded by the W.K. Kellogg Foundation and the Morris K. Udall and Stewart L. Udall Foundation—documented how much data on Native American communities is of poor quality, inaccurate, inadequate, inconsistent, irrelevant, and/or inaccessible. The National Congress of American Indians even described American Native communities as “The Asterisk Nation,” because in many government data sets they are represented only by an asterisk denoting sampling errors instead of data points.

Where it concerns Native Americans, data is often not standardized, and different government databases identify tribal members in at least seven different ways using different criteria; federal and state statistics often misclassify race and ethnicity; and some data collection methods don’t allow tribes to count tribal citizens living off the reservation. For over a decade the Department of the Interior’s Bureau of Indian Affairs has struggled to capture the data it needs for a crucial labor force report it is legally required to produce; methodology errors and reporting problems have been so extensive that at times they prevented the report from even being published. But when the Department of the Interior changed several reporting requirements in 2014 and combined data submitted by tribes with US Census data, it only compounded the problem, making historical comparisons more difficult. Moreover, Native Americans have charged that the Census Bureau significantly undercounts both the American Indian population and key indicators like joblessness….(More)”.

How Philanthropy Can Help Lead on Data Justice

Chaya Nayak at Facebook: “In 2018, Facebook began an initiative to support independent academic research on social media’s role in elections and democracy. This first-of-its-kind project seeks to provide researchers access to privacy-preserving data sets in order to support research on these important topics.

Today, we are announcing that we have substantially increased the amount of data we’re providing to 60 academic researchers across 17 labs and 30 universities around the world. This release delivers on the commitment we made in July 2018 to share a data set that enables researchers to study information and misinformation on Facebook, while also ensuring that we protect the privacy of our users.

This new data release supplants data we released in the fall of 2019. That 2019 data set consisted of links that had been shared publicly on Facebook by at least 100 unique Facebook users. It included information about share counts, ratings by Facebook’s third-party fact-checkers, and user reporting on spam, hate speech, and false news associated with those links. We have expanded the data set to now include more than 38 million unique links with new aggregated information to help academic researchers analyze how many people saw these links on Facebook and how they interacted with that content – including views, clicks, shares, likes, and other reactions. We’ve also aggregated these shares by age, gender, country, and month. And, we have expanded the time frame covered by the data from January 2017 – February 2019 to January 2017 – August 2019.

With this data, researchers will be able to understand important aspects of how social media shapes our world. They’ll be able to make progress on the research questions they proposed, such as “how to characterize mainstream and non-mainstream online news sources in social media” and “studying polarization, misinformation, and manipulation across multiple platforms and the larger information ecosystem.”

In addition to the data set of URLs, researchers will continue to have access to CrowdTangle and Facebook’s Ad Library API to augment their analyses. Per the original plan for this project, outside of a limited review to ensure that no confidential or user data is inadvertently released, these researchers will be able to publish their findings without approval from Facebook.

We are sharing this data with researchers while continuing to prioritize the privacy of people who use our services. This new data set, like the data we released before it, is protected by a method known as differential privacy. Researchers have access to data tables from which they can learn about aggregated groups, but where they cannot identify any individual user. As Harvard University’s Privacy Tools project puts it:

“The guarantee of a differentially private algorithm is that its behavior hardly changes when a single individual joins or leaves the dataset — anything the algorithm might output on a database containing some individual’s information is almost as likely to have come from a database without that individual’s information. … This gives a formal guarantee that individual-level information about participants in the database is not leaked.” …(More)”
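For readers curious what this guarantee looks like in code, here is a minimal sketch of the Laplace mechanism, one textbook way to achieve differential privacy for counting queries. It is a simplified illustration, not Facebook’s actual implementation; the epsilon value and share count are hypothetical.

```python
# Minimal sketch of the Laplace mechanism, a standard way to achieve
# differential privacy for counting queries. Illustrative only; this is
# not Facebook's actual implementation.

import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a count satisfying epsilon-differential privacy.

    For a counting query, one person joining or leaving the data set
    changes the true count by at most `sensitivity` (here, 1), so adding
    noise drawn from Laplace(0, sensitivity / epsilon) makes the output
    almost as likely with or without any single individual's data.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical example: how many users shared a given URL.
true_shares = 12_345
print(dp_count(true_shares, epsilon=0.5))  # noisy value near 12,345; varies per call
```

Smaller epsilon values mean more noise and stronger privacy, which is the trade-off researchers accept in exchange for access to aggregate tables.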

New privacy-protected Facebook data for independent research on social media’s impact on democracy

Rebecca Ruiz at Mashable: “Since its founding in 2013, the free mental health support service Crisis Text Line has focused on using data and technology to better aid those who reach out for help. 

Unlike helplines that offer assistance based on the order in which users dialed, texted, or messaged, Crisis Text Line has an algorithm that determines who is in most urgent need of counseling. The nonprofit is particularly interested in learning which emoji and words texters use when their suicide risk is high, so as to quickly connect them with a counselor. Crisis Text Line just released new insights about those patterns. 

Based on its analysis of 129 million messages processed between 2013 and the end of 2019, the nonprofit found that the pill emoji, or 💊, was 4.4 times more likely to end in a life-threatening situation than the word “suicide”.

Other words that indicate imminent danger include 800mg, acetaminophen, excedrin, and antifreeze; those are two to three times more likely than the word “suicide” to involve an active rescue of the texter. The loudly crying emoji face, or 😭, is similarly high-risk. In general, the words that trigger the greatest alarm suggest the texter has a method or plan to attempt suicide or may be in the process of taking their own life. …(More)”.
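As an illustration of the kind of comparison behind a figure like “4.4 times more likely”, here is a minimal relative-risk sketch. The counts are hypothetical and the function is not Crisis Text Line’s actual pipeline; it simply divides the rate of life-threatening outcomes for messages containing a term by the rate for a baseline term.

```python
# Minimal relative-risk sketch (hypothetical counts; not Crisis Text
# Line's actual pipeline or data).

def risk_ratio(term_crises, term_total, baseline_crises, baseline_total):
    """Rate of life-threatening outcomes for conversations containing a
    term, divided by the rate for a baseline term such as "suicide"."""
    return (term_crises / term_total) / (baseline_crises / baseline_total)

# Hypothetical: of 1,000 conversations containing the pill emoji, 220
# ended in an active rescue, versus 500 of 10,000 containing "suicide".
print(round(risk_ratio(220, 1_000, 500, 10_000), 1))  # 4.4
```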

This emoji could mean your suicide risk is high, according to AI
