Building trust in AI systems is essential


Editorial Board of the Financial Times: “…Most of the biggest tech companies, which have been at the forefront of the AI revolution, are well aware of the risks of deploying flawed systems at scale. Tech companies publicly acknowledge the need for societal acceptance if their systems are to be trusted. Although historically allergic to government intervention, some industry bosses are even calling for stricter regulation in areas such as privacy and facial recognition technology.

A parallel is often drawn between two conferences held in Asilomar, California, in 1975 and 2017. At the first, a group of biologists, lawyers and doctors created a set of ethical guidelines around research into recombinant DNA. This opened an era of responsible and fruitful biomedical research that has helped us deal with the Covid-19 pandemic today. Inspired by the example, a group of AI experts repeated the exercise 42 years later and came up with an impressive set of guidelines for the beneficial use of the technology. 

Translating such high principles into everyday practice is hard, especially when so much money is at stake. But three rules should always apply. First, teams that develop AI systems must be as diverse as possible to reduce the risk of bias. Second, complex AI systems should never be deployed in any field unless they offer a demonstrable improvement on what already exists. Third, algorithms that companies and governments deploy in sensitive areas such as healthcare, education, policing, justice and workplace monitoring should be subject to audit and comprehension by outside experts. 

The US Congress has been considering an Algorithmic Accountability Act, which would compel companies to assess the probable real-world impact of automated decision-making systems. There is even a case for creating the algorithmic equivalent of the US Food and Drug Administration to preapprove the use of AI in sensitive areas. Criminal liability for those who deploy irresponsible AI systems might also help concentrate minds.

The AI industry has talked a good game about AI ethics. But if some of the most sophisticated companies in this field cannot even convince their own employees of their good intentions, they will struggle to convince anyone else. That could result in a fierce public backlash against companies using AI. Worse, it may yet impede the real benefits of using AI for societal good in areas such as healthcare. The tech sector has to restore credibility for all our sakes….(More)”

Improving Governance by Asking Questions that Matter


Fiona Cece, Nicola Nixon and Stefaan Verhulst at the Open Government Partnership:

“You can tell whether a man is clever by his answers. You can tell whether a man is wise by his questions” – Naguib Mahfouz

Data is at the heart of every dimension of the COVID-19 challenge. It’s been vital in the monitoring of daily rates, track and trace technologies, doctors’ appointments, and the vaccine roll-out. Yet our daily diet of brightly-coloured graphed global trends masks the maelstrom of inaccuracies, gaps and guesswork that underlies the ramshackle numbers on which they are so often based. Governments are unable to address their citizens’ needs in an informed way when the data itself is partial, incomplete or simply biased. And citizens, in turn, are unable to contribute to collective decision-making that impacts their lives when the channels for doing so in meaningful ways are largely non-existent.

There is an irony here. We live in an era in which there are an unprecedented number of methods for collecting data. Even in the poorest countries with weak or largely non-existent government systems, anyone with a mobile phone or who accesses the internet is using and producing data. Yet a chasm exists between the potential of data to contribute to better governance and what it is actually collected and used for.

Even where data accuracy can be relied upon, the practice of effective, efficient and equitable data governance requires much more than its collection and dissemination.

And although governments will play a vital role, combatting the pandemic and its associated socio-economic challenges will require the combined efforts of non-government organizations (NGOs), civil society organizations (CSOs), citizens’ associations, healthcare companies and providers, universities, think tanks and so many others. Collaboration is key.

There is a need to collectively move beyond solution-driven thinking. One initiative working toward this end is The 100 Questions Initiative by The Governance Lab (The GovLab) at the NYU Tandon School of Engineering. In partnership with The Asia Foundation, the Centre for Strategic and International Studies in Indonesia, and the BRAC Institute of Governance and Development, the Initiative is launching a Governance domain. Collectively we will draw on the expertise of over 100 “bilinguals” – experts in both data science and governance – to identify the 10 most pressing questions on a variety of issues that can be addressed using data and data science. The cohort for this domain is multi-sectoral and geographically varied, and will provide diverse input on these governance challenges.

Once the questions have been identified and prioritized, and we have engaged with a broader public through a voting campaign, the ultimate goal is to establish one or more data collaboratives that can generate answers to the questions at hand. Data collaboratives are an emerging structure that allows the pooling of data and expertise across sectors, often resulting in new insights and public sector innovations. Data collaboratives are fundamentally about sharing and cross-sectoral engagement. They have been deployed across countries and sectoral contexts, and their relative success shows that in the twenty-first century no single actor can solve vexing public problems. The route to success lies through broad-based collaboration.

Multi-sectoral and geographically diverse insight is needed to address the governance challenges we are living through, especially during the time of COVID-19. The pandemic has exposed weak governance practices globally, and collectively we need to craft a better response. As an open governance and data-for-development community, we have not yet leveraged the best insight available to inform an effective, evidence-based response to the pandemic. It is time we leverage more data and technology to enable citizen-centrism in our service delivery and decision-making processes, to contribute to overcoming the pandemic and to building our governance systems, institutions and structures back better. Together with over 130 ‘Bilinguals’ – experts in both governance and data – we have set about identifying the priority questions that data can answer to improve governance. Join us on this journey. Stay tuned for our public voting campaign in a couple of months’ time when we will crowdsource your views on which of the questions they pose really matter….(More)”.

The Landscape of Big Data and Gender


Report by Data2X: “This report draws out six observations about trends in big data and gender:

– The current environment (COVID-19 and the global economic recession) is stimulating groundbreaking gender research.

– Where we’re progressing, where we’re lagging: Some gendered topics—especially mobility, health, and social norms—are increasingly well-studied through the combination of big data and traditional data. However, worrying gaps remain, especially around the subjects of economic opportunity, human security, and public participation.

– Capturing gender-representative samples using big data continues to be a challenge, but progress is being made.

– Large technology firms generate an immense volume of gender data critical for policymaking, and researchers are finding ways to reuse this data safely.

– Data collaboratives that bring private sector data-holders, researchers, and public policymakers together in a formal, enduring relationship can help big data make a practical difference in the lives of women and girls….(More)”

COVID vaccination studies: plan now to pool data, or be bogged down in confusion


Natalie Dean at Nature: “More and more COVID-19 vaccines are rolling out safely around the world; just last month, the United States authorized one produced by Johnson & Johnson. But there is still much to be learnt. How long does protection last? How much does it vary by age? How well do vaccines work against various circulating variants, and how well will they work against future ones? Do vaccinated people transmit less of the virus?

Answers to these questions will help regulators to set the best policies. Now is the time to make sure that those answers are as reliable as possible, and I worry that we are not laying the essential groundwork. Our current trajectory has us on course for confusion: we must plan ahead to pool data.

Many questions remain after vaccines are approved. Randomized trials generate the best evidence to answer targeted questions, such as how effective booster doses are. But for others, randomized trials will become too difficult as more and more people are vaccinated. To fill in our knowledge gaps, observational studies of the millions of vaccinated people worldwide will be essential….

Perhaps most importantly, we must coordinate now on plans to combine data. We must take measures to counter the long-standing siloed approach to research. Investigators should be discouraged from setting up single-site studies and encouraged to contribute to a larger effort. Funding agencies should favour studies with plans for collaborating or for sharing de-identified individual-level data.

Even when studies do not officially pool data, they should make their designs compatible with others. That means up-front discussions about standardization and data-quality thresholds. Ideally, this will lead to a minimum common set of variables to be collected, which the WHO has already hammered out for COVID-19 clinical outcomes. Categories include clinical severity (such as all infections, symptomatic disease or critical/fatal disease) and patient characteristics, such as comorbidities. This will help researchers to conduct meta-analyses of even narrow subgroups. Efforts are under way to develop reporting guidelines for test-negative studies, but these will be most successful when there is broad engagement.
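
To make the idea of a minimum common variable set more concrete, here is a minimal Python sketch of what a harmonized study record and a simple pooling step might look like. The field names and outcome categories are illustrative assumptions, not the WHO’s actual specification.

```python
from dataclasses import dataclass
from typing import List

# Illustrative harmonized record for pooling observational vaccine studies.
# Field names and categories are hypothetical, not the WHO's actual variable set.
@dataclass
class StudyRecord:
    study_id: str        # contributing site or study
    age_group: str       # e.g. "18-49", "50-64", "65+"
    comorbidities: bool  # any relevant comorbidity reported
    vaccinated: bool
    outcome: str         # "infection", "symptomatic", or "critical_fatal"

ALLOWED_OUTCOMES = {"infection", "symptomatic", "critical_fatal"}

def pool_records(studies: List[List[StudyRecord]]) -> List[StudyRecord]:
    """Combine de-identified records from several studies, keeping only
    records that use the agreed outcome categories."""
    return [r for study in studies for r in study if r.outcome in ALLOWED_OUTCOMES]
```

Agreeing on a shared structure of this kind up front is what makes later meta-analyses of even narrow subgroups feasible.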

There are many important questions that will be addressed only by observational studies, and data that can be combined are much more powerful than lone results. We need to plan these studies with as much care and intentionality as we would for randomized trials….(More)”.

How governments use evidence to make transport policy


Report by Alistair Baldwin and Kelly Shuttleworth: “The government’s ambitious transport plans will falter unless policy makers – ministers, civil servants and other public officials – improve the way they identify and use evidence to inform their decisions.

This report compares the use of evidence in the UK, the Netherlands, Sweden, Germany and New Zealand, and finds that England is an outlier in not having a coordinated transport strategy. This damages both scrutiny and coordination of transport policy.

The government has plans to reform bus services, support cycling, review rail franchising, and invest more than £60 billion in transport projects over the next five years. But these plans are not integrated. The Department for Transport should develop a new strategy integrating different modes of transport, rather than mode by mode, to improve political understanding of trade-offs and scrutiny of policy decisions.

The DfT is a well-resourced department, with significant expertise, responsibilities and a wide array of analysts. But its reliance on economic evidence means other forms of evidence can appear neglected in transport decision making – including social research, evaluation or engineering. Decision makers are often too attached to the importance of the Benefit-Cost Ratio at the expense of other forms of evidence.
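
As a rough illustration of the metric in question, the sketch below computes a Benefit-Cost Ratio from discounted streams of benefits and costs. The figures, the discount rate and the simplified discounting are invented for illustration and are not taken from any actual DfT appraisal.

```python
# Rough sketch of a Benefit-Cost Ratio calculation; all figures are invented,
# not drawn from any real DfT appraisal.

def present_value(flows, discount_rate=0.035):
    """Discount a stream of annual values (year 0, 1, 2, ...) to present value."""
    return sum(v / (1 + discount_rate) ** t for t, v in enumerate(flows))

annual_benefits = [0, 40, 45, 50, 55, 60]    # e.g. time savings, reliability (in £m)
annual_costs    = [120, 15, 10, 10, 10, 10]  # construction, then maintenance (in £m)

bcr = present_value(annual_benefits) / present_value(annual_costs)
print(f"Benefit-Cost Ratio: {bcr:.2f}")  # a ratio above 1 implies benefits exceed costs
```

A single headline ratio like this is exactly the kind of number the report warns can crowd out social research, evaluation and engineering evidence.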

The government needs to improve its attitude to evaluation of past projects. There are successes – like the evaluation of the Cycle City Ambition Fund – but they are outnumbered by failures – like the evaluation of projects in the Local Growth Fund.  For example, good practice from Highways England should be common across the transport sector, helped by providing dedicated funding to local authorities to properly evaluate projects….(More)”.

Public Policy Analytics: Code and Context for Data Science in Government


Book by Ken Steif: “…teaches readers how to address complex public policy problems with data and analytics using reproducible methods in R. Each of the eight chapters provides a detailed case study, showing readers: how to develop exploratory indicators; how to understand ‘spatial process’ and develop spatial analytics; how to develop ‘useful’ predictive analytics; how to convey these outputs to non-technical decision-makers through the medium of data visualization; and why, ultimately, data science and ‘Planning’ are one and the same. A graduate-level introduction to data science, this book will appeal to researchers and data scientists at the intersection of data analytics and public policy, as well as readers who wish to understand how algorithms will affect the future of government….(More)”.
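
The book itself works in R; purely as an illustration of the kind of workflow it describes, an exploratory indicator followed by a simple predictive step, here is a short Python sketch on made-up neighborhood data. The dataset and column names are hypothetical and are not drawn from the book.

```python
# Illustrative only: the book works in R, but this Python sketch shows the kind
# of workflow it describes - an exploratory indicator, then a simple prediction.
# The data and column names are made up.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "neighborhood": ["A", "B", "C", "D"],
    "median_income": [32_000, 54_000, 41_000, 76_000],
    "transit_stops": [12, 4, 9, 2],
    "home_price": [180_000, 310_000, 220_000, 450_000],
})

# Exploratory indicator: each neighborhood's price relative to the citywide median
df["price_index"] = df["home_price"] / df["home_price"].median()

# Simple predictive step: home price as a function of income and transit access
model = LinearRegression().fit(df[["median_income", "transit_stops"]], df["home_price"])
print(df[["neighborhood", "price_index"]])
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```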

Do conversations end when people want them to?


Paper by Adam M. Mastroianni et al.: “Do conversations end when people want them to? Surprisingly, behavioral science provides no answer to this fundamental question about the most ubiquitous of all human social activities. In two studies of 932 conversations, we asked conversants to report when they had wanted a conversation to end and to estimate when their partner (who was an intimate in Study 1 and a stranger in Study 2) had wanted it to end. Results showed that conversations almost never ended when both conversants wanted them to and rarely ended when even one conversant wanted them to, and that the average discrepancy between desired and actual durations was roughly half the duration of the conversation. Conversants had little idea when their partners wanted to end and underestimated how discrepant their partners’ desires were from their own. These studies suggest that ending conversations is a classic “coordination problem” that humans are unable to solve because doing so requires information that they normally keep from each other. As a result, most conversations appear to end when no one wants them to….(More)”.
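
The discrepancy the authors report is simply the gap between when a conversant wanted the conversation to end and when it actually ended. A minimal sketch of that measure, computed on invented durations rather than the study’s data:

```python
# Minimal sketch of the discrepancy measure described above, on invented
# conversation durations (minutes); not the study's actual data.
conversations = [
    {"actual": 30, "desired": 12},
    {"actual": 10, "desired": 18},   # this one ended sooner than desired
    {"actual": 45, "desired": 20},
]

discrepancies = [abs(c["actual"] - c["desired"]) for c in conversations]
avg_discrepancy = sum(discrepancies) / len(discrepancies)
avg_duration = sum(c["actual"] for c in conversations) / len(conversations)

print(f"average discrepancy: {avg_discrepancy:.1f} min "
      f"({avg_discrepancy / avg_duration:.0%} of the average duration)")
```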

Theories of Choice: The Social Science and the Law of Decision Making


Book by Stefan Grundmann and Philipp Hacker: “Choice is a key concept of our time. It is a foundational mechanism for every legal order in societies that are, politically, constituted as democracies and, economically, built on the market mechanism. Thus, choice can be understood as an atomic structure that grounds core societal processes. In recent years, however, the debate over the right way to theorise choice—for example, as a rational or a behavioural type of decision making—has intensified. This collection therefore provides an in-depth discussion of the promises and perils of specific types of theories of choice. It shows how the selection of a specific theory of choice can make a difference for concrete legal questions, in particular in the regulation of the digital economy or in choosing between market, firm, or network.

In its first part, the volume provides an accessible overview of the current debates about rational versus behavioural approaches to theories of choice. The remainder of the book structures the vast landscape of theories of choice along three main types: individual, collective, and organisational decision making. As theories of choice proliferate and become ever more sophisticated, however, the process of choosing an adequate theory of choice becomes increasingly intricate, too. This volume addresses this selection problem for the various legal arenas in which individual, organisational, and collective decisions matter. By drawing on economic, technological, political, and legal points of view, the volume shows which theories of choice are at the disposal of the legally relevant decision maker, and how they can be implemented for the solution of concrete legal problems….(More)

How Humans Judge Machines


Open Access Book by César A. Hidalgo et al.: “How would you feel about losing your job to a machine? How about a tsunami alert system that fails? Would you react differently to acts of discrimination depending on whether they were carried out by a machine or by a human? What about public surveillance? How Humans Judge Machines compares people’s reactions to actions performed by humans and machines. Using data collected in dozens of experiments, this book reveals the biases that permeate human-machine interactions. Are there conditions in which we judge machines unfairly?

Is our judgment of machines affected by the moral dimensions of a scenario? Is our judgment of machines correlated with demographic factors such as education or gender? César Hidalgo and colleagues use hard science to take on these pressing technological questions. Using randomized experiments, they create revealing counterfactuals and build statistical models to explain how people judge artificial intelligence and whether they do it fairly. Through original research, How Humans Judge Machines brings us one step closer to understanding the ethical consequences of AI…(More)”.
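
As a sketch of the kind of randomized comparison the book relies on, the example below attributes the same scenario at random to a human or a machine actor and compares average judgments across the two conditions. The ratings are simulated under an assumed effect and are not the book’s data.

```python
# Sketch of a randomized human-vs-machine judgment comparison. The ratings are
# simulated under an assumed effect; they are not the book's actual data.
import random
import statistics

random.seed(0)

def run_experiment(n_participants=200):
    judgments = {"human": [], "machine": []}
    for _ in range(n_participants):
        condition = random.choice(["human", "machine"])   # random assignment
        # Hypothetical 1-7 "how wrong was this action?" rating; we assume the
        # machine is judged slightly more harshly in this scenario.
        base = 4.0 if condition == "human" else 4.6
        rating = min(7.0, max(1.0, random.gauss(base, 1.2)))
        judgments[condition].append(rating)
    return judgments

judgments = run_experiment()
for condition, ratings in judgments.items():
    print(condition, round(statistics.mean(ratings), 2), "n =", len(ratings))
```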

How One State Managed to Actually Write Rules on Facial Recognition


Kashmir Hill at The New York Times: “Though police have been using facial recognition technology for the last two decades to try to identify unknown people in their investigations, the practice of putting the majority of Americans into a perpetual photo lineup has gotten surprisingly little attention from lawmakers and regulators. Until now.

Lawmakers, civil liberties advocates and police chiefs have debated whether and how to use the technology because of concerns about both privacy and accuracy. But figuring out how to regulate it is tricky. So far, that has meant an all-or-nothing approach. City Councils in Oakland, Portland, San Francisco, Minneapolis and elsewhere have banned police use of the technology, largely because of bias in how it works. Studies in recent years by MIT researchers and the federal government found that many facial recognition algorithms are most accurate for white men, but less so for everyone else.

At the same time, automated facial recognition has become a powerful investigative tool, helping to identify child molesters and, in a recent high-profile example, people who participated in the Jan. 6 riot at the Capitol. Law enforcement officials in Vermont want the state’s ban lifted because there “could be hundreds of kids waiting to be saved.”

That’s why a new law in Massachusetts is so interesting: It’s not all or nothing. The state managed to strike a balance on regulating the technology, allowing law enforcement to harness the benefits of the tool, while building in protections that might prevent the false arrests that have happened before….(More)”.