New mathematical idea reins in AI bias towards making unethical and costly commercial choices


The University of Warwick: “Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and business manage and police Artificial Intelligence systems’ biases towards making unethical, and potentially very costly and damaging commercial choices—an ethical eye on AI.

Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.

The AI has a vast number of potential strategies to choose from, but some are unethical: they carry not only a moral cost but also a significant potential economic penalty, because stakeholders who discover that such a strategy has been used will punish it. Regulators may levy fines running to billions of dollars, pounds or euros, customers may boycott the company, or both.

In an environment in which decisions are increasingly made without human intervention, there is therefore a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely where possible.

Mathematicians and statisticians from the University of Warwick, Imperial, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and providing a simple formula to estimate its impact. They lay out the full details in a paper entitled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1st July 2020….(More)”.
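The paper derives its estimate from extreme-value arguments; the sketch below is not the authors' formula, just a minimal Monte Carlo illustration of the idea under assumed (normal) return distributions: when even a small fraction of candidate strategies carries a slight edge in expected return, a pure return-maximizer picks an unethical strategy far more often than that fraction would suggest, and pricing an expected penalty into the objective pulls the probability back down.

```python
import numpy as np

# Illustrative Monte Carlo sketch only -- not the estimator from the paper.
# Assumption: strategy returns are normally distributed, and unethical
# strategies enjoy a small edge in expected return.
rng = np.random.default_rng(0)

n_strategies = 10_000      # candidate strategies available to the AI
frac_unethical = 0.01      # only 1% of strategies are unethical
unethical_edge = 0.5       # their extra expected return
n_trials = 2_000

def prob_best_is_unethical(penalty: float = 0.0) -> float:
    """Estimate how often the return-maximizing strategy is unethical,
    optionally charging an expected penalty against unethical returns."""
    n_bad = int(n_strategies * frac_unethical)
    hits = 0
    for _ in range(n_trials):
        ethical = rng.normal(0.0, 1.0, n_strategies - n_bad)
        unethical = rng.normal(unethical_edge, 1.0, n_bad) - penalty
        hits += unethical.max() > ethical.max()
    return hits / n_trials

print("P(best strategy is unethical), no penalty:  ", prob_best_is_unethical())
print("P(best strategy is unethical), with penalty:", prob_best_is_unethical(penalty=2.0))
```

In this toy setup only 1% of strategies are unethical, yet the optimizer selects one far more often than 1% of the time; charging the objective an expected regulatory penalty is what restores the balance, which is the spirit of the principle.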

Regulating Electronic Means to Fight the Spread of COVID-19


In Custodia Legis Library of Congress: “It appears that COVID-19 will not go away any time soon. As there is currently no known cure or vaccine against it, countries have to find other ways to prevent and mitigate the spread of this infectious disease. Many countries have turned to electronic measures to provide general information and advice on COVID-19, allow people to check symptoms, trace contacts and alert people who have been in proximity to an infected person, identify “hot spots,” and track compliance with confinement measures and stay-at-home orders.

The Global Legal Research Directorate (GLRD) of the Law Library of Congress recently completed research on the kind of electronic measures countries around the globe are employing to fight the spread of COVID-19 and their potential privacy and data protection implications. We are excited to share with you the report that resulted from this research, Regulating Electronic Means to Fight the Spread of COVID-19. The report covers 23 selected jurisdictions, namely Argentina, Australia, Brazil, China, England, France, Iceland, India, Iran, Israel, Italy, Japan, Mexico, Norway, Portugal, the Russian Federation, South Africa, South Korea, Spain, Taiwan, Turkey, the United Arab Emirates, and the European Union (EU).

The surveys found that dedicated coronavirus apps downloaded to an individual’s mobile phone (particularly contact tracing apps), the use of anonymized mobility data, and the creation of electronic databases were the most common electronic measures. Whereas the EU recommends voluntary apps because of the “high degree of intrusiveness” of mandatory ones, some countries take a different approach and require people who enter the country from abroad, return to work, or are ordered to quarantine to install an app.

However, these electronic measures also raise privacy and data protection concerns, in particular as they relate to sensitive health data. The surveys discuss the different approaches countries have taken to ensure compliance with privacy and data protection regulations, such as conducting rights impact assessments before the measures were deployed or having data protection agencies conduct an assessment after deployment.

The map below shows which jurisdictions have adopted COVID-19 contact tracing apps and the technologies they use.

Map shows COVID-19 contact tracing apps in selected jurisdictions. Created by Susan Taylor, Law Library of Congress, based on surveys in “Regulating Electronic Means to Fight the Spread of COVID-19” (Law Library of Congress, June 2020). This map does not cover other COVID-19 apps that use GPS/geolocation….(More)”.

AI+1: Shaping Our Integrated Future


Report edited by the Rockefeller Foundation: “As we speak—and browse, and post photos, and move about—artificial intelligence is transforming the fabric of our lives. It is making life easier, better informed, healthier, more convenient. It also threatens to crimp our freedoms, worsen social disparities, and give inordinate powers to unseen forces.

Both AI’s virtues and risks have been on vivid display during this moment of global turmoil, forcing a deeper conversation around its responsible use and, more importantly, the rules and regulations needed to harness its power for good.

This is a vastly complex subject, with no easy conclusions. With no roadmap, however, we risk creating more problems instead of solving meaningful ones.

Last fall The Rockefeller Foundation convened a unique group of thinkers and doers at its Bellagio Center in Italy to weigh one of the great challenges of our time: How to harness the powers of machine learning for social good and minimize its harms. The resulting AI + 1 report includes diverse perspectives from top technologists, philosophers, economists, and artists at a critical moment during the current Covid-19 pandemic.

The report’s authors present a mix of skepticism and hope centered on three themes:

  1. AI is more than a technology. It reflects the values in its system, suggesting that any ethical lapses simply mirror our own deficiencies. And yet, there’s hope: AI can also inspire us, augment us, and make us go deeper.
  2. AI’s goals need to be society’s goals. In contrast to the market-driven, profit-making goals that dominate its use today, applying AI responsibly means using it to support systems that serve human goals.
  3. We need a new rule-making system to guide its responsible development. Self-regulation simply isn’t enough. Cross-sector oversight must start with transparency and access to meaningful information, as well as an ability to expose harm.

AI itself is a slippery force, hard to pin down and define, much less regulate. We describe it using imprecise metaphors and deepen our understanding of it through nuanced conversation. This collection of essays provokes the kind of thoughtful consideration that will help us wrestle with AI’s complexity, develop a common language, create bridges between sectors and communities, and build practical solutions. We hope that you join us….(More)”.

The AI Powered State: What can we learn from China’s approach to public sector innovation?


Essay collection edited by Nesta: “China is striding ahead of the rest of the world in terms of its investment in artificial intelligence (AI), rate of experimentation and adoption, and breadth of applications. In 2017, China announced its aim of becoming the world leader in AI technology by 2030. AI innovation is now a key national priority, with central and local government spending on AI estimated to be in the tens of billions of dollars.

While Europe and the US are also following AI strategies designed to transform the public sector, there has been surprisingly little analysis of what practical lessons can be learnt from China’s use of AI in public services. Given China’s rapid progress in this area, it is important for the rest of the world to pay attention to developments in China if it wants to keep pace.

This essay collection finds that examining China’s experience of public sector innovation offers valuable insights for policymakers. Not everything is applicable to a western context – there are social, political and ethical concerns that arise from China’s use of new technologies in public services and governance – but there is still much that can be learned from its experience while also acknowledging what should be criticized and avoided….(More)”.

Opportunities of Artificial Intelligence


Report for the European Parliament: “A vast range of AI applications are being implemented by European industry, which can be broadly grouped into two categories: i) applications that enhance the performance and efficiency of processes through mechanisms such as intelligent monitoring, optimisation and control; and ii) applications that enhance human-machine collaboration.

At present, such applications are being implemented across a broad range of European industrial sectors. However, some sectors (e.g. automotive, telecommunications, healthcare) are more advanced in AI deployment than others (e.g. paper and pulp, pumps, chemicals). The types of AI applications implemented also differ across industries. In less digitally mature sectors, clear barriers to adoption have been identified, including both internal (e.g. cultural resistance, lack of skills, financial considerations) and external (e.g. lack of venture capital) barriers. For the most part, and especially for SMEs, barriers to the adoption of AI are similar to those hindering digitalisation.

The adoption of such AI applications is anticipated to deliver a wide range of positive impacts for individual firms and across value chains, as well as at the societal and macroeconomic levels. AI applications can bring efficiency, environmental and economic benefits related to increased production output and quality, reduced maintenance costs, improved energy efficiency, better use of raw materials and reduced waste. In addition, AI applications can add value through product personalisation, improve customer service and contribute to the development of new product classes, business models and even sectors. Workforce benefits (e.g. improved workplace safety) are also being delivered by AI applications.

Alongside these firm-level benefits and opportunities, significant positive societal and economy-wide impacts are envisaged. More specifically, substantial increases in productivity, innovation, growth and job creation have been forecast. For example, one estimate anticipates labour productivity increases of 11-37% by 2035. In addition, AI is expected to contribute positively to the UN Sustainable Development Goals, and the capabilities of AI and machine learning to address major health challenges, such as the current COVID-19 pandemic, are also noteworthy. For instance, AI systems have the potential to shorten the lead times for the development of vaccines and drugs.

However, AI adoption brings a range of challenges…(More)”.

Wrongfully Accused by an Algorithm


Kashmir Hill at the New York Times: “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit….

The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police, according to their report.

Five months later, in March 2019, Jennifer Coulson, a digital image examiner for the Michigan State Police, uploaded a “probe image” — a still from the video, showing the man in the Cardinals cap — to the state’s facial recognition database. The system would have mapped the man’s face and searched for similar ones in a collection of 49 million photos.
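The article does not spell out the vendor’s algorithm; generically, systems like this embed each photo as a vector and rank the gallery by similarity to the probe. The minimal sketch below shows only that ranking step, with random vectors standing in for a real face-embedding model and a smaller gallery standing in for the 49 million photos, to illustrate why such a search always returns “candidates”, however poor the probe image:

```python
import numpy as np

# Generic sketch of a face search -- not DataWorks Plus's actual pipeline.
# Assumption: a face-embedding model maps each photo to a fixed-length vector;
# random vectors stand in for real embeddings here, just to show the ranking step.
rng = np.random.default_rng(1)

gallery = rng.normal(size=(100_000, 128))     # stand-in for 49 million mug-shot embeddings
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

probe = rng.normal(size=128)                  # stand-in for the low-quality probe image
probe /= np.linalg.norm(probe)

scores = gallery @ probe                      # cosine similarity against every photo
top10 = np.argsort(scores)[::-1][:10]         # the ten most similar faces

for rank, idx in enumerate(top10, start=1):
    print(f"rank {rank}: photo {idx}, similarity {scores[idx]:.3f}")

# The search always produces a 'most similar' face, whether or not the person is
# actually in the gallery; a human examiner still has to decide if it is a true match.
```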

The state’s technology is supplied for $5.5 million by a company called DataWorks Plus. Founded in South Carolina in 2000, the company first offered mug shot management software, said Todd Pastorini, a general manager. In 2005, the firm began to expand the product, adding face recognition tools developed by outside vendors.

When one of these subcontractors develops an algorithm for recognizing faces, DataWorks attempts to judge its effectiveness by running searches using low-quality images of individuals it knows are present in a system. “We’ve tested a lot of garbage out there,” Mr. Pastorini said. These checks, he added, are not “scientific” — DataWorks does not formally measure the systems’ accuracy or bias.

“We’ve become a pseudo-expert in the technology,” Mr. Pastorini said.

In Michigan, the DataWorks software used by the state police incorporates components developed by the Japanese tech giant NEC and by Rank One Computing, based in Colorado, according to Mr. Pastorini and a state police spokeswoman. In 2019, algorithms from both companies were included in a federal study of over 100 facial recognition systems that found they were biased, falsely identifying African-American and Asian faces 10 to 100 times more often than Caucasian faces….(More)“.

Panopticon Reborn: Social Credit as Regulation for the Age of AI


Paper by Kevin Werbach: “Technology scholars, policy-makers, and executives in Europe and the United States disagree violently about what the digitally connected world should look like. They agree on what it shouldn’t: the Orwellian panopticon of China’s Social Credit System (SCS). SCS is a government-led initiative to promote data-driven compliance with law and social values, using databases, analytics, blacklists, and software applications. In the West, it is widely viewed as a diabolical effort to crush any spark of resistance to the dictates of the Chinese Communist Party (CCP) and its corporate emissaries. This picture is, if not wholly incorrect, decidedly incomplete. SCS is the world’s most advanced prototype of a regime of algorithmic regulation. It is a sophisticated and comprehensive effort not only to expand algorithmic control, but also to restrain it. Understanding China’s system is crucial for resolving the great challenges we face in the emerging era of relentless data aggregation, ubiquitous analytics, and algorithmic control….(More)”.

The Bigot in the Machine: Bias in Algorithmic Systems


Article by Barbara Fister: “We are living in an “age of algorithms.” Vast quantities of information are collected, sorted, shared, combined, and acted on by proprietary black boxes. These systems use machine learning to build models and make predictions from data sets that may be out of date, incomplete, and biased. We will explore the ways bias creeps into information systems, take a look at how “big data,” artificial intelligence and machine learning often amplify bias unwittingly, and consider how these systems can be deliberately exploited by actors for whom bias is a feature, not a bug. Finally, we’ll discuss ways we can work with our communities to create a more fair and just information environment….(More)”.

Scraping Court Records Data to Find Dirty Cops


Article by Lawsuit.org: “In the 2002 dystopian sci-fi film “Minority Report,” law enforcement can manage crime by “predicting” illegal behavior before it happens. While fiction, the plot is intriguing and contributes to the conversation on advanced crime-fighting technology. However, today’s world may not be far off.

Data’s growing role in our lives and greater access to artificial intelligence are changing the way we approach topics such as research, real estate, and law enforcement. In fact, recent investigative reporting has shown that “dozens of [American] cities” are now experimenting with predictive policing technology.

Despite the current controversy surrounding predictive policing, it seems to be a growing trend that has been met with little real resistance. We may be closer to policing that mirrors the frightening depictions in “Minority Report” than we ever thought possible. 

Fighting Fire With Fire

In its current state, predictive policing is defined as:

“The usage of mathematical, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators’ identities, and methods for predicting victims of crime.”

While it might not be possible to prevent predictive policing from being employed by the criminal justice system, perhaps there are ways we can create a more level playing field: one where the powers of big data analysis aren’t just used to predict crime, but are also used to police law enforcement itself.

Below, we’ve provided a detailed breakdown of what this potential reality could look like when applied to one South Florida county’s public databases, along with information on how citizens and communities can use public data to better understand the behaviors of local law enforcement and even individual police officers….(More)”.
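The article’s actual breakdown sits behind the link above; purely as an illustration of the general approach, the sketch below queries a hypothetical public court-records search page (the URL, query parameters, CSS selectors and field names are invented stand-ins, not a real county clerk’s site) to pull an officer’s cases and compute a crude dismissal rate.

```python
import requests
from bs4 import BeautifulSoup
from collections import Counter

# Minimal sketch only; the URL, query parameters and CSS selectors below are
# hypothetical stand-ins, not a real court-records site or schema.
BASE_URL = "https://courtrecords.example.gov/search"

def fetch_cases(officer_name: str, page: int = 1) -> list[dict]:
    """Fetch one page of public case listings mentioning an officer (hypothetical schema)."""
    resp = requests.get(BASE_URL, params={"q": officer_name, "page": page}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    cases = []
    for row in soup.select("table#results tr.case"):   # hypothetical selector
        cases.append({
            "case_no": row.select_one("td.case-no").get_text(strip=True),
            "charge": row.select_one("td.charge").get_text(strip=True),
            "outcome": row.select_one("td.outcome").get_text(strip=True),
        })
    return cases

def dismissal_rate(officer_name: str) -> float:
    """Share of an officer's cases that ended in dismissal -- a crude red-flag signal."""
    cases = fetch_cases(officer_name)
    if not cases:
        return 0.0
    outcomes = Counter(c["outcome"].lower() for c in cases)
    return outcomes.get("dismissed", 0) / len(cases)
```

Aggregated across officers and joined with complaint or use-of-force records, tallies like this are the kind of public-data signal the article has in mind.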

IBM quits facial recognition, joins call for police reforms


AP Article by Matt O’Brien: “IBM is getting out of the facial recognition business, saying it’s concerned about how the technology can be used for mass surveillance and racial profiling.

Ongoing protests responding to the death of George Floyd have sparked a broader reckoning over racial injustice and a closer look at the use of police technology to track demonstrators and monitor American neighborhoods.

IBM is one of several big tech firms that had earlier sought to improve the accuracy of their face-scanning software after research found racial and gender disparities. But its new CEO is now questioning whether it should be used by police at all.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote CEO Arvind Krishna in a letter sent Monday to U.S. lawmakers.

IBM’s decision to stop building and selling facial recognition software is unlikely to affect its bottom line, since the tech giant is increasingly focused on cloud computing while an array of lesser-known firms have cornered the market for government facial recognition contracts.

“But the symbolic nature of this is important,” said Mutale Nkonde, a research fellow at Harvard and Stanford universities who directs the nonprofit AI For the People.

Nkonde said IBM shutting down a business “under the guise of advancing anti-racist business practices” shows that it can be done and makes it “socially unacceptable for companies who tweet Black Lives Matter to do so while contracting with the police.”…(More)”.