Analytical modelling and UK Government policy


Paper by Marie Oldfield & Ella Haig: “In the last decade, the UK Government has attempted to implement improved processes and procedures in modelling and analysis in response to the Laidlaw report of 2012 and the Macpherson review of 2013. The Laidlaw report was commissioned after failings during the Intercity West Coast Rail (ICWC) Franchise procurement exercise by the Department for Transport (DfT) that led to a legal challenge of the analytical models used within the exercise. The Macpherson review looked into the quality assurance of Government analytical models in the context of the experience with the Intercity West Coast franchise competition. This paper examines what progress has been made in model building and best practice in government in the 8 years since the Laidlaw report and proposes several recommendations for ways forward. It also discusses the Lords Science and Technology Committee inquiry of June 2020, which analysed the failings in the modelling of COVID-19. Despite the models going on to influence policy, many of the same issues raised in the Laidlaw and Macpherson reports were also present in the Lords Science and Technology Committee inquiry. We examine the technical and organisational challenges to progress in this area and make recommendations for a way forward….(More)”.

Government algorithms are out of control and ruin lives



Nani Jansen Reventlow at Open Democracy: “Government services are increasingly being automated, and technology is relied on more and more to make crucial decisions about our lives and livelihoods. This includes decisions about what type of support we can access in times of need: welfare benefits and other government services.

Technology has the potential to not only reproduce but amplify structural inequalities in our societies. If you combine this drive for automation with a broader context of criminalising poverty and systemic racism, this can have disastrous effects.

A recent example is the ‘child benefits scandal’ that brought down the Dutch government at the start of 2021. In the Netherlands, working parents are eligible for a government contribution toward the costs of daycare. This can run up to 90% of the actual costs for those with a low income. While contributions are often directly paid to childcare providers, parents are responsible for them. This means that, if the tax authorities determine that any allowance was wrongfully paid out, parents are liable for repaying them.

To detect cases of fraud, the Dutch tax authorities used a system that was outright discriminatory. An investigation by the Dutch Data Protection Authority last year showed that parents were singled out for special scrutiny because of their ethnic origin or dual nationality.  “The whole system was organised in a discriminatory manner and was also used as such,” it stated.

The fallout of these ‘fraud detection’ efforts was enormous. It is currently estimated that 46,000 parents were wrongly accused of having fraudulently claimed child care allowances. Families were forced to repay tens of thousands of euros, leading to financial hardship, loss of livelihood, homes, and in one case, even loss of life – one parent died by suicide. While we can still hope that justice for these families won’t be denied, it will certainly be delayed: this weekend, it became clear that it could take up to ten years to handle all claims. An unacceptable timeline, given how precarious the situation will be for many of those affected….(More)”.

Transparency’s AI Problem


Paper by Hannah Bloch-Wehba: “A consensus seems to be emerging that algorithmic governance is too opaque and ought to be made more accountable and transparent. But algorithmic governance underscores the limited capacity of transparency law—the Freedom of Information Act and its state equivalents—to promote accountability. Drawing on the critical literature on “open government,” this Essay shows that algorithmic governance reflects and amplifies systemic weaknesses in the transparency regime, including privatization, secrecy, private sector cooptation, and reactive disclosure. These deficiencies highlight the urgent need to reorient transparency and accountability law toward meaningful public engagement in ongoing oversight. This shift requires rethinking FOIA’s core commitment to public disclosure of agency records, exploring instead alternative ways to empower the public and to shed light on decisionmaking. The Essay argues that new approaches to transparency and accountability for algorithmic governance should be independent of private vendors, and ought to adequately represent the interests of affected individuals and communities. These considerations, of vital importance for the oversight of automated systems, also hold broader lessons for efforts to recraft open government obligations in the public interest….(More)”

Facial Recognition Technology: Federal Law Enforcement Agencies Should Better Assess Privacy and Other Risks


Report by the U.S. Government Accountability Office: “GAO surveyed 42 federal agencies that employ law enforcement officers about their use of facial recognition technology. Twenty reported owning systems with facial recognition technology or using systems owned by other entities, such as other federal, state, local, and non-government entities (see figure).

Figure: Ownership and Use of Facial Recognition Technology Reported by Federal Agencies that Employ Law Enforcement Officers (for more details, see figure 2 in GAO-21-518).

Agencies reported using the technology to support several activities (e.g., criminal investigations) and in response to COVID-19 (e.g., verify an individual’s identity remotely). Six agencies reported using the technology on images of the unrest, riots, or protests following the death of George Floyd in May 2020. Three agencies reported using it on images of the events at the U.S. Capitol on January 6, 2021. Agencies said the searches used images of suspected criminal activity.

All fourteen agencies that reported using the technology to support criminal investigations also reported using systems owned by non-federal entities. However, only one was aware of which non-federal systems its employees used. By having a mechanism to track what non-federal systems are used by employees and assessing related risks (e.g., privacy and accuracy-related risks), agencies can better mitigate risks to themselves and the public….GAO is making two recommendations to each of 13 federal agencies: implement a mechanism to track what non-federal systems are used by employees, and assess the risks of using these systems. Twelve agencies concurred with both recommendations. The U.S. Postal Service concurred with one and partially concurred with the other. GAO continues to believe the recommendation is valid, as described in the report….(More)”.

Ethics and governance of artificial intelligence for health


The WHO guidance “…on Ethics & Governance of Artificial Intelligence for Health is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development, and to support governments in carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment and use.

The report identifies the ethical challenges and risks associated with the use of artificial intelligence in health, and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations that can ensure the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and the communities and individuals whose health will be affected by their use…(More)”

National strategies on Artificial Intelligence: A European perspective


Report by European Commission’s Joint Research Centre (JRC) and the OECD’s Science Technology and Innovation Directorate: “Artificial intelligence (AI) is transforming the world in many aspects. It is essential for Europe to consider how to make the most of the opportunities from this transformation and to address its challenges. In 2018 the European Commission adopted the Coordinated Plan on Artificial Intelligence that was developed together with the Member States to maximise the impact of investments at European Union (EU) and national levels, and to encourage synergies and cooperation across the EU.

One of the key actions towards these aims was an encouragement for the Member States to develop their national AI strategies. The review of national strategies is one of the tasks of AI Watch, launched by the European Commission to support the implementation of the Coordinated Plan on Artificial Intelligence.

Building on the 2020 AI Watch review of national strategies, this report presents an updated review of national AI strategies from the EU Member States, Norway and Switzerland. By June 2021, 20 Member States and Norway had published national AI strategies, while 7 Member States were in the final drafting phase. Since the 2020 release of the AI Watch report, additional Member States – i.e. Bulgaria, Hungary, Poland, Slovenia, and Spain – have published strategies, while Cyprus, Finland and Germany have revised their initial strategies.

This report provides an overview of national AI policies according to the following policy areas: Human capital, From the lab to the market, Networking, Regulation, and Infrastructure. These policy areas are consistent with the actions proposed in the Coordinated Plan on Artificial Intelligence and with the policy recommendations to governments contained in the OECD Recommendation on AI. The report also includes a section on AI policies to address societal challenges of the COVID-19 pandemic and climate change….(More)”.

To regulate AI, try playing in a sandbox


Article by Dan McCarthy: “For an increasing number of regulators, researchers, and tech developers, the word “sandbox” is just as likely to evoke rulemaking and compliance as it is to conjure images of children digging, playing, and building. Which is kinda the point.

That’s thanks to the rise of regulatory sandboxes, which allow organizations to develop and test new technologies in a low-stakes, monitored environment before rolling them out to the general public. 

Supporters, from both the regulatory and the business sides, say sandboxes can strike the right balance of reining in potentially harmful technologies without kneecapping technological progress. They can also help regulators build technological competency and clarify how they’ll enforce laws that apply to tech. And while regulatory sandboxes originated in financial services, there’s growing interest in using them to police artificial intelligence—an urgent task as AI is expanding its reach while remaining largely unregulated. 

Even for all of its promise, experts told us, the approach should be viewed not as a silver bullet for AI regulation, but instead as a potential step in the right direction. 

Rashida Richardson, an AI researcher and visiting scholar at Rutgers Law School, is generally critical of AI regulatory sandboxes, but still said “it’s worth testing out ideas like this, because there is not going to be any universal model to AI regulation, and to figure out the right configuration of policy, you need to see theoretical ideas in practice.” 

But waiting for the theoretical to become concrete will take time. For example, in April, the European Union proposed AI regulation that would establish regulatory sandboxes to help the EU achieve its aim of responsible AI innovation, mentioning the word “sandbox” 38 times, compared to related terms like “impact assessment” (13 mentions) and “audit” (four). But it will likely take years for the EU’s proposal to become law. 

In the US, some well-known AI experts are working on an AI sandbox prototype, but regulators are not yet in the picture. However, the world’s first and (so far) only AI-specific regulatory sandbox did roll out in Norway this March, as a way to help companies comply with AI-specific provisions of the EU’s General Data Protection Regulation (GDPR). The project provides an early window into how the approach can work in practice.

“It’s a place for mutual learning—if you can learn earlier in the [product development] process, that is not only good for your compliance risk, but it’s really great for building a great product,” according to Erlend Andreas Gjære, CEO and cofounder of Secure Practice, an information security (“infosec”) startup that is one of four participants in Norway’s new AI regulatory sandbox….(More)”

How Does Artificial Intelligence Work?


BuiltIn: “Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: “Can machines think?” 

Turing’s paper “Computing Machinery and Intelligence” (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.   

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.

The expansive goal of artificial intelligence has given rise to many questions and debates. So much so that no single definition of the field is universally accepted.

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what artificial intelligence is. What makes a machine intelligent?

In their groundbreaking textbook Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the question by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions” (Russell and Norvig, viii).

Norvig and Russell go on to explore four different approaches that have historically defined the field of AI: 

  1. Thinking humanly
  2. Thinking rationally
  3. Acting humanly 
  4. Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that “all the skills needed for the Turing Test also allow an agent to act rationally” (Russell and Norvig, 4).

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as  “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”…(More)”.
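
To make the Russell and Norvig definition above concrete, here is a minimal percept-to-action sketch. It is purely illustrative: the Agent interface, the Thermostat example and its rule are assumptions made for this note, not code from the textbook or the BuiltIn article.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Russell and Norvig's framing: an agent maps percepts to actions."""

    @abstractmethod
    def act(self, percept):
        """Return an action in response to the latest percept."""

class Thermostat(Agent):
    """A toy 'rational' agent: it acts to keep the temperature near a target."""

    def __init__(self, target: float):
        self.target = target

    def act(self, percept: float) -> str:
        # Choose the action expected to achieve the best outcome.
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "idle"

# Percept-action loop: the agent repeatedly perceives its environment and acts.
agent = Thermostat(target=21.0)
for reading in [18.5, 20.7, 23.2]:
    print(f"{reading} C -> {agent.act(reading)}")
```

The loop at the bottom is the whole idea in miniature: the agent receives a percept from its environment (here, a temperature reading) and returns the action it expects to produce the best outcome.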

Tasks, Automation, and the Rise in US Wage Inequality


Paper by Daron Acemoglu & Pascual Restrepo: “We document that between 50% and 70% of changes in the US wage structure over the last four decades are accounted for by the relative wage declines of worker groups specialized in routine tasks in industries experiencing rapid automation. We develop a conceptual framework where tasks across a number of industries are allocated to different types of labor and capital. Automation technologies expand the set of tasks performed by capital, displacing certain worker groups from employment opportunities for which they have comparative advantage. This framework yields a simple equation linking wage changes of a demographic group to the task displacement it experiences.

We report robust evidence in favor of this relationship and show that regression models incorporating task displacement explain much of the changes in education differentials between 1980 and 2016. Our task displacement variable captures the effects of automation technologies (and to a lesser degree offshoring) rather than those of rising market power, markups or deunionization, which themselves do not appear to play a major role in US wage inequality. We also propose a methodology for evaluating the full general equilibrium effects of task displacement (which include induced changes in industry composition and ripple effects as tasks are reallocated across different groups). Our quantitative evaluation based on this methodology explains how major changes in wage inequality can go hand-in-hand with modest productivity gains….(More)”.
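
The paper derives its own equation; as a rough, hedged sketch of the kind of reduced-form relationship the abstract describes (the notation below is assumed for illustration and is not the authors’), a group-level regression might look like:

$$ \Delta \ln w_{g} = \beta \cdot \mathrm{TaskDisplacement}_{g} + \gamma' X_{g} + \varepsilon_{g} $$

Here $w_{g}$ is the real wage of demographic group $g$, $\mathrm{TaskDisplacement}_{g}$ measures the share of the group’s routine tasks taken over by capital, $X_{g}$ collects controls such as exposure to rising markups or deunionization, and $\beta$ is predicted to be negative.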

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems


Figure: NIST’s new publication proposes a list of nine factors that contribute to a human’s potential trust in an AI system. A person may weigh the nine factors differently depending on both the task itself and the risk involved in trusting the AI’s decision. As an example, two different AI programs — a music selection algorithm and an AI that assists with cancer diagnosis — may score the same on all nine criteria. Users, however, might be inclined to trust the music selection algorithm but not the medical assistant, which is performing a far riskier task. (Credit: N. Hanacek/NIST)

National Institute of Standards and Technology (NIST): “Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations? 

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. 

The report contributes to the broader NIST effort to help advance trustworthy AI systems. The focus of this latest publication is to understand how humans experience trust as they use or are affected by AI systems….(More)”.
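
As a purely hypothetical illustration of the weighting idea described above: NISTIR 8332 lists factors that contribute to trust and observes that users weigh them differently as the risk of the task changes, but it does not prescribe a scoring formula. The factor names, weights and thresholds in the sketch below are placeholders invented here, not NIST’s.

```python
# Hypothetical sketch: NISTIR 8332 does not define this formula. The factor
# names, weights and risk thresholds are illustrative placeholders only.

FACTORS = ["accuracy", "reliability", "explainability", "security", "privacy",
           "safety", "objectivity", "resiliency", "accountability"]

def weighted_trust(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-factor scores in [0, 1]."""
    total = sum(weights[f] for f in FACTORS)
    return sum(scores[f] * weights[f] for f in FACTORS) / total

def willing_to_rely(trust_score: float, risk_threshold: float) -> bool:
    """A user relies on the system only if trust clears a risk-dependent bar."""
    return trust_score >= risk_threshold

# Both systems score identically on every factor...
scores = {f: 0.8 for f in FACTORS}
weights = {f: 1.0 for f in FACTORS}
trust = weighted_trust(scores, weights)  # 0.8 for both systems

# ...but the bar a user sets rises with the stakes of the task.
print(willing_to_rely(trust, risk_threshold=0.70))  # music selection: True
print(willing_to_rely(trust, risk_threshold=0.95))  # cancer-diagnosis aid: False
```

This mirrors the report’s example: identical scores on every factor can still lead to different reliance decisions once the risk of the task raises the bar.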