Blame the politicians, not the technology, for A-level fiasco


The Editorial Board at the Financial Times: “The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure. The algorithm used to “moderate” teacher-assessed grades had no agency and delivered exactly what it was designed to do.

It is politicians and educational officials who are responsible for the government’s latest fiasco and should be the target of students’ criticism….

Sensibly designed, computer algorithms could have been used to moderate teacher assessments in a constructive way. Using past school performance data, they could have highlighted anomalies in the distribution of predicted grades between and within schools. That could have led to a dialogue between Ofqual, the exam regulator, and anomalous schools to come up with more realistic assessments….
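To make the editorial's suggestion concrete, here is a minimal sketch of such a moderation check: it flags schools whose teacher-assessed average departs sharply from that school's own historical record, as a trigger for dialogue rather than an automatic downgrade. The data, grade scale, and threshold are all hypothetical, not drawn from Ofqual's actual model.

```python
# A minimal sketch (hypothetical data and threshold) of the kind of
# constructive moderation the editorial describes: flag anomalous schools
# for dialogue instead of overriding teacher-assessed grades outright.
from statistics import mean, stdev

def flag_anomalous_schools(historical, predicted, z_threshold=2.0):
    """Return schools whose mean predicted grade deviates from their history.

    historical: dict mapping school -> list of mean grades from past years
    predicted:  dict mapping school -> mean teacher-assessed grade this year
    """
    flagged = []
    for school, past in historical.items():
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue  # no variation on record; needs human review instead
        z = (predicted[school] - mu) / sigma
        if abs(z) > z_threshold:
            flagged.append((school, round(z, 2)))
    return flagged

history = {"School A": [5.1, 5.3, 5.0], "School B": [4.2, 4.0, 4.1]}
teacher_assessed = {"School A": 5.2, "School B": 5.6}  # B looks optimistic
print(flag_anomalous_schools(history, teacher_assessed))  # [('School B', 15.0)]
```

The point of the design is that the algorithm's output is a conversation-starter, not a grade: School B's anomaly score prompts a query to the school, and a human decides what happens next.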

There are broader lessons to be drawn from the government’s algo fiasco about the dangers of automated decision-making systems. The inappropriate use of such systems to assess immigration status, policing policies and prison sentencing decisions is a live danger. In the private sector, incomplete and partial data sets can also significantly disadvantage under-represented groups when it comes to hiring decisions and performance measures.

Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts. The Royal Statistical Society and The Alan Turing Institute certainly have the expertise to give a Kitemark of approval or flag concerns.

As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes. We often talk about a deficit of trust in our societies. But we should also be aware of the dangers of over-trusting technology. That may be a good essay subject for next year’s philosophy A-level….(More)”.

Landlord Tech Watch


About: “Landlord Tech—what the real estate industry describes as residential property technology—is leading to new forms of housing injustice. Property technology, or “proptech,” has grown dramatically since 2008, and applies to residential, commercial, and industrial buildings, effectively merging the real estate, technology, and finance industries. By employing digital surveillance, data collection, data accumulation, artificial intelligence, dashboards, and platform real estate in tenant housing and neighborhoods, Landlord Tech increases the power of landlords while disempowering tenants and those seeking shelter.

There are few laws and regulations governing the collection and use of data in the context of Landlord Tech. Because it is generally sold to landlords and property managers, not tenants, Landlord Tech is often installed without notifying or discussing potential harms with tenants and community members. These harms include the possibility that sensitive and personal data can be handed over to the police, ICE, or other law enforcement and government agencies. Landlord Tech can also be used to automate evictions, racial profiling, and tenant harassment. In addition, Landlord Tech is used to abet real estate speculation and gentrification, making buildings more desirable to whiter and wealthier tenants, while feeding real estate and tech companies with property – be that data or real estate. Landlord Tech tracking platforms have increasingly been marketed to landlords as solutions to Covid-19, leading to new forms of residential surveillance….(More)”.

An Introduction to Ethics in Robotics and AI


Open access book by Christoph Bartneck, Christoph Lütge, Alan Wagner and Sean Welsh: “This book provides an introduction into the ethics of robots and artificial intelligence. The book was written with university students, policy makers, and professionals in mind but should be accessible for most adults. The book is meant to provide balanced and, at times, conflicting viewpoints as to the benefits and deficits of AI through the lens of ethics. As discussed in the chapters that follow, ethical questions are often not cut and dry. Nations, communities, and individuals may have unique and important perspectives on these topics that should be heard and considered. While the voices that compose this book are our own, we have attempted to represent the views of the broader AI, robotics, and ethics communities….(More)”.

Prediction paradigm: the human price of instrumentalism


Editorial by Karamjit S. Gill at AI&Society: “Reflecting on the rise of instrumentalism, we learn how it has travelled across the academic boundary to the high-tech culture of Silicon Valley. At its core lies the prediction paradigm. Under the cloak of the inevitability of technology, we are being offered the prediction paradigm as the technological dream of public safety, national security, fraud detection, and even disease control and diagnosis. For example, there are offers of facial recognition systems for predicting the behaviour of citizens, offers of surveillance drones for ‘biometric readings’, and offers of ‘Predictive Policing’ as an effective tool to predict and reduce crime rates. A recent critical review of the prediction technology (Coalition for Critical Technology 2020) brings to our notice the discriminatory consequences of predicting “criminality” using biometric and/or criminal legal data.

The review outlines the specific ways crime prediction technology reproduces, naturalizes and amplifies discriminatory outcomes, and why exclusively technical criteria are insufficient for evaluating their risks. We learn that neither prediction architectures nor machine learning programs are neutral; they often uncritically inherit, accept and incorporate dominant cultural and belief systems, which are then normalised. For example, “predictions” based on finding correlations between facial features and criminality are accepted as valid, interpreted as the product of intelligent and “objective” technical assessments. Furthermore, the data from predictive outcomes and recommendations are fed back into the system, thereby reproducing and confirming biased correlations. The consequence of this feedback loop, especially in facial recognition architectures, combined with a belief in “evidence based” diagnosis, is that it leads to ‘widespread mischaracterizations of criminal justice data’ that ‘justifies the exclusion and repression of marginalized populations through the construction of “risky” or “deviant” profiles’…(More)”.
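The feedback loop described above can be made concrete with a toy model (our own illustration, not taken from the review): where a system directs scrutiny determines what gets recorded, and the record then appears to validate the original allocation even when the underlying rates are identical. All numbers below are hypothetical.

```python
# An illustrative toy model (not from the review) of a predictive-policing
# feedback loop: recorded incidents scale with where patrols look, and the
# reallocation rule then "confirms" the biased prior using data the system
# generated itself. All numbers are hypothetical.

TRUE_RATE = 0.10                                  # identical in both districts
patrols = {"district_a": 60, "district_b": 40}    # biased initial allocation

for step in range(3):
    # Recorded incidents mirror the allocation, not any true rate gap
    # (there is none): more scrutiny finds proportionally more incidents.
    recorded = {d: patrols[d] * TRUE_RATE for d in patrols}
    total = sum(recorded.values())
    # Reallocating on recorded data reproduces the original 60/40 split.
    patrols = {d: round(100 * recorded[d] / total) for d in patrols}
    print(f"step {step}: recorded={recorded} -> patrols={patrols}")
```

Even in this deterministic version, the 60/40 split is a self-confirming fixed point: the system never collects the evidence that would reveal the two districts are alike.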

Four Principles for Integrating AI & Good Governance


Oxford Commission on AI and Good Governance: “Many governments, public agencies and institutions already employ AI in providing public services, the distribution of resources and the delivery of governance goods. In the public sector, AI-enabled governance may afford new efficiencies that have the potential to transform a wide array of public service tasks. But short-sighted design and use of AI can create new problems, entrench existing inequalities, and calcify and ultimately undermine government organizations.

Frameworks for the procurement and implementation of AI in public service have largely remained undeveloped. Frequently, existing regulations and national laws are no longer fit for purpose to ensure good behaviour (of either AI or private suppliers) and are ill-equipped to provide guidance on the democratic use of AI. As technology evolves rapidly, we need rules to guide the use of AI in ways that safeguard democratic values. Under what conditions can AI be put into service for good governance?

We offer a framework for integrating AI with good governance. We believe that with dedicated attention and evidence-based policy research, it should be possible to overcome the combined technical and organizational challenges of successfully integrating AI with good governance. Doing so requires working towards:


Inclusive Design: issues around discrimination and bias of AI in relation to inadequate data sets, exclusion of minorities and under-represented groups, and the lack of diversity in design.
Informed Procurement: issues around the acquisition and development in relation to due diligence, design and usability specifications and the assessment of risks and benefits.
Purposeful Implementation: issues around the use of AI in relation to interoperability, training needs for public servants, and integration with decision-making processes.
Persistent Accountability: issues around the accountability and transparency of AI in relation to ‘black box’ algorithms, the interpretability and explainability of systems, monitoring and auditing…(More)”

Indigenous Protocol and Artificial Intelligence


Indigenous Protocol and Artificial Intelligence Working Group: “This position paper on Indigenous Protocol (IP) and Artificial Intelligence (AI) is a starting place for those who want to design and create AI from an ethical position that centers Indigenous concerns. Each Indigenous community will have its own particular approach to the questions we raise in what follows. What we have written here is not a substitute for establishing and maintaining relationships of reciprocal care and support with specific Indigenous communities. Rather, this document offers a range of ideas to take into consideration when entering into conversations which prioritize Indigenous perspectives in the development of artificial intelligence.

The position paper is an attempt to capture multiple layers of a discussion that happened over 20 months, across 20 time zones, during two workshops, and between Indigenous people (and a few non-Indigenous folks) from diverse communities in Aotearoa, Australia, North America, and the Pacific.

Our aim, however, is not to provide a unified voice. Indigenous ways of knowing are rooted in distinct, sovereign territories across the planet. These extremely diverse landscapes and histories have influenced different communities and their discrete cultural protocols over time. A single ‘Indigenous perspective’ does not exist, as epistemologies are motivated and shaped by the grounding of specific communities in particular territories. Historically, scholarly traditions that homogenize diverse Indigenous cultural practices have resulted in ontological and epistemological violence, and a flattening of the rich texture and variability of Indigenous thought….(More)”.

Turning Point: Policymaking in the Era of Artificial Intelligence


Book by Darrell M. West and John R. Allen: “Until recently, “artificial intelligence” sounded like something out of science fiction. But the technology of artificial intelligence, AI, is becoming increasingly common, from self-driving cars to e-commerce algorithms that seem to know what you want to buy before you do. Throughout the economy and many aspects of daily life, artificial intelligence has become the transformative technology of our time.

Despite its current and potential benefits, AI is little understood by the larger public and widely feared. The rapid growth of artificial intelligence has given rise to concerns that hidden technology will create a dystopian world of increased income inequality, a total lack of privacy, and perhaps a broad threat to humanity itself.

In their compelling and readable book, two experts at Brookings discuss both the opportunities and risks posed by artificial intelligence—and how near-term policy decisions could determine whether the technology leads to utopia or dystopia.

Drawing on in-depth studies of major uses of AI, the authors detail how the technology actually works. They outline a policy and governance blueprint for gaining the benefits of artificial intelligence while minimizing its potential downsides.

The book offers major recommendations for actions that governments, businesses, and individuals can take to promote trustworthy and responsible artificial intelligence. Their recommendations include: creating ethical principles, strengthening government oversight, defining corporate culpability, establishing advisory boards at federal agencies, using third-party audits to reduce biases inherent in algorithms, tightening personal privacy requirements, using insurance to mitigate exposure to AI risks, broadening decision-making about AI uses and procedures, penalizing malicious uses of new technologies, and taking proactive steps to address how artificial intelligence affects the workforce….(More)”.

Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research


Report by the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence: “The aim of this report is to offer a broad roadmap for work on the ethical and societal implications of algorithms, data, and AI (ADA) in the coming years. It is aimed at those involved in planning, funding, and pursuing research and policy work related to these technologies. We use the term ‘ADA-based technologies’ to capture a broad range of ethically and societally relevant technologies based on algorithms, data, and AI, recognising that these three concepts are not totally separable from one another and will often overlap.

A shared set of key concepts and concerns is emerging, with widespread agreement on some of the core issues (such as bias) and values (such as fairness) that an ethics of algorithms, data, and AI should focus on. Over the last two years, these have begun to be codified in various codes and sets of ‘principles’. Agreeing on these issues, values and high-level principles is an important step for ensuring that ADA-based technologies are developed and used for the benefit of society.

However, we see three main gaps in this existing work: (i) a lack of clarity or consensus around the meaning of central ethical concepts and how they apply in specific situations; (ii) insufficient attention given to tensions between ideals and values; (iii) insufficient evidence on both (a) key technological capabilities and impacts, and (b) the perspectives of different publics….(More)”.

New mathematical idea reins in AI bias towards making unethical and costly commercial choices


The University of Warwick: “Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and business manage and police Artificial Intelligence systems’ biases towards making unethical, and potentially very costly and damaging commercial choices—an ethical eye on AI.

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.

The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential economic penalty: if stakeholders find that such a strategy has been used, regulators may levy fines of billions of dollars, pounds or euros, customers may boycott the company, or both.

In an environment in which decisions are increasingly made without human intervention, there is therefore a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.

Mathematicians and statisticians from the University of Warwick, Imperial, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and providing a simple formula to estimate its impact. They have laid out the full details in a paper titled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1st July 2020….(More)”.
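The formula itself is in the paper; the simulation below is only our qualitative illustration of the principle under assumed Gaussian returns, with made-up parameters. The effect it shows: when even a small fraction of strategies is unethical and slightly more profitable on average, a return-maximizing optimizer lands on an unethical strategy far more often than its share of the strategy space would suggest.

```python
# An illustrative Monte Carlo sketch of the "Unethical Optimization
# Principle" -- NOT the authors' formula. Returns are assumed Gaussian and
# all parameters are hypothetical.
import random

random.seed(42)
N_STRATEGIES = 1000     # strategies available to the optimizer
P_UNETHICAL = 0.02      # 2% of them are unethical
EDGE = 1.0              # assumed extra mean return of unethical strategies
TRIALS = 2000

picked_unethical = 0
for _ in range(TRIALS):
    best_return, best_is_unethical = float("-inf"), False
    for _ in range(N_STRATEGIES):
        unethical = random.random() < P_UNETHICAL
        ret = random.gauss(EDGE if unethical else 0.0, 1.0)
        if ret > best_return:  # the optimizer simply takes the max return
            best_return, best_is_unethical = ret, unethical
    picked_unethical += best_is_unethical

print(f"share of strategies that are unethical: {P_UNETHICAL:.0%}")
print(f"share of optimizer picks that are unethical: {picked_unethical / TRIALS:.0%}")
```

With these hypothetical numbers, the optimizer's pick is unethical at a rate well above the 2% base rate, which is exactly the regulatory exposure the researchers want firms to be able to estimate and design against.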

Regulating Electronic Means to Fight the Spread of COVID-19


In Custodia Legis Library of Congress: “It appears that COVID-19 will not go away any time soon. As there is currently no known cure or vaccine against it, countries have to find other ways to prevent and mitigate the spread of this infectious disease. Many countries have turned to electronic measures to provide general information and advice on COVID-19, allow people to check symptoms, trace contacts and alert people who have been in proximity to an infected person, identify “hot spots,” and track compliance with confinement measures and stay-at-home orders.

The Global Legal Research Directorate (GLRD) of the Law Library of Congress recently completed research on the kind of electronic measures countries around the globe are employing to fight the spread of COVID-19 and their potential privacy and data protection implications. We are excited to share with you the report that resulted from this research, Regulating Electronic Means to Fight the Spread of COVID-19. The report covers 23 selected jurisdictions, namely Argentina, Australia, Brazil, China, England, France, Iceland, India, Iran, Israel, Italy, Japan, Mexico, Norway, Portugal, the Russian Federation, South Africa, South Korea, Spain, Taiwan, Turkey, the United Arab Emirates, and the European Union (EU).

The surveys found that dedicated coronavirus apps that are downloaded to an individual’s mobile phone (particularly contact tracing apps), the use of anonymized mobility data, and creating electronic databases were the most common electronic measures. Whereas the EU recommends the use of voluntary apps because of the “high degree of intrusiveness” of mandatory apps, some countries take a different approach and require installing an app for people who enter the country from abroad, people who return to work, or people who are ordered to quarantine.
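For readers unfamiliar with how the decentralized, Bluetooth-based apps work, here is a heavily simplified sketch (our own, not taken from the report): phones broadcast short-lived random tokens, remember tokens they hear nearby, and later check them on-device against tokens voluntarily published by users who test positive, so neither identities nor locations need to leave the phone.

```python
# A heavily simplified sketch of a decentralized contact tracing scheme.
# Real protocols add token rotation schedules, cryptographic derivation,
# time windows, and signal-strength thresholds omitted here.
import secrets

class Phone:
    def __init__(self):
        self.my_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard nearby (kept on-device)

    def broadcast(self):
        token = secrets.token_hex(8)  # short-lived random identifier
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)

    def check_exposure(self, published_positive_tokens):
        # Matching happens locally against the published token list.
        return bool(self.heard_tokens & set(published_positive_tokens))

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())    # Bob was near Alice
carol.hear(bob.broadcast())    # Carol was near Bob, not Alice

# Alice tests positive and uploads only her random tokens.
positives = alice.my_tokens
print("Bob exposed:", bob.check_exposure(positives))      # True
print("Carol exposed:", carol.check_exposure(positives))  # False
```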

However, these electronic measures also raise privacy and data protection concerns, in particular as they relate to sensitive health data. The surveys discuss the different approaches countries have taken to ensure compliance with privacy and data protection regulations, such as conducting rights impact assessments before the measures were deployed or having data protection agencies conduct an assessment after deployment.

The map below shows which jurisdictions have adopted COVID-19 contact tracing apps and the technologies they use.

[Map caption: COVID-19 contact tracing apps in selected jurisdictions. Created by Susan Taylor, Law Library of Congress, based on surveys in “Regulating Electronic Means to Fight the Spread of COVID-19” (Law Library of Congress, June 2020). This map does not cover other COVID-19 apps that use GPS/geolocation.]…(More)”.