AI+1: Shaping Our Integrated Future


Report edited by the Rockefeller Foundation: “As we speak—and browse, and post photos, and move about—artificial intelligence is transforming the fabric of our lives. It is making life easier, better informed, healthier, more convenient. It also threatens to crimp our freedoms, worsen social disparities, and give inordinate powers to unseen forces.

Both AI’s virtues and risks have been on vivid display during this moment of global turmoil, forcing a deeper conversation around its responsible use and, more importantly, the rules and regulations needed to harness its power for good.

This is a vastly complex subject, with no easy conclusions. With no roadmap, however, we risk creating more problems instead of solving meaningful ones.

Last fall, The Rockefeller Foundation convened a unique group of thinkers and doers at its Bellagio Center in Italy to weigh one of the great challenges of our time: how to harness the powers of machine learning for social good and minimize its harms. The resulting AI + 1 report includes diverse perspectives from top technologists, philosophers, economists, and artists at a critical moment during the current Covid-19 pandemic.

The report’s authors present a mix of skepticism and hope centered on three themes:

  1. AI is more than a technology. It reflects the values built into its systems, meaning that any ethical lapses simply mirror our own deficiencies. And yet there’s hope: AI can also inspire us, augment us, and push us to go deeper.
  2. AI’s goals need to be society’s goals. In contrast to the market-driven, profit-making aims that dominate its use today, applying AI responsibly means using it to support systems that serve human goals.
  3. We need a new rule-making system to guide AI’s responsible development. Self-regulation simply isn’t enough. Cross-sector oversight must start with transparency and access to meaningful information, as well as the ability to expose harm.

AI itself is a slippery force, hard to pin down and define, much less regulate. We describe it using imprecise metaphors and deepen our understanding of it through nuanced conversation. This collection of essays provokes the kind of thoughtful consideration that will help us wrestle with AI’s complexity, develop a common language, create bridges between sectors and communities, and build practical solutions. We hope that you join us….(More)”.

The AI Powered State: What can we learn from China’s approach to public sector innovation?


Essay collection edited by Nesta: “China is striding ahead of the rest of the world in terms of its investment in artificial intelligence (AI), rate of experimentation and adoption, and breadth of applications. In 2017, China announced its aim of becoming the world leader in AI technology by 2030. AI innovation is now a key national priority, with central and local government spending on AI estimated to be in the tens of billions of dollars.

While Europe and the US are also following AI strategies designed to transform the public sector, there has been surprisingly little analysis of what practical lessons can be learnt from China’s use of AI in public services. Given China’s rapid progress in this area, it is important for the rest of the world to pay attention to developments in China if it wants to keep pace.

This essay collection finds that examining China’s experience of public sector innovation offers valuable insights for policymakers. Not everything is applicable to a western context – there are social, political and ethical concerns that arise from China’s use of new technologies in public services and governance – but there is still much that can be learned from its experience while also acknowledging what should be criticized and avoided….(More)”.

Opportunities of Artificial Intelligence


Report for the European Parliament: “A vast range of AI applications are being implemented by European industry, which can be broadly grouped into two categories: i) applications that enhance the performance and efficiency of processes through mechanisms such as intelligent monitoring, optimisation and control; and ii) applications that enhance human-machine collaboration.

At present, such applications are being implemented across a broad range of European industrial sectors. However, some sectors (e.g. automotive, telecommunications, healthcare) are more advanced in AI deployment than others (e.g. paper and pulp, pumps, chemicals). The types of AI applications implemented also differ across industries. In less digitally mature sectors, clear barriers to adoption have been identified, including both internal (e.g. cultural resistance, lack of skills, financial considerations) and external (e.g. lack of venture capital) barriers. For the most part, and especially for SMEs, barriers to the adoption of AI are similar to those hindering digitalisation.

The adoption of such AI applications is anticipated to deliver a wide range of positive impacts for individual firms, across value chains, as well as at the societal and macroeconomic levels. AI applications can bring efficiency, environmental and economic benefits related to increased production output and quality, reduced maintenance costs, improved energy efficiency, better use of raw materials and reduced waste. In addition, AI applications can add value through product personalisation, improve customer service and contribute to the development of new product classes, business models and even sectors. Workforce benefits (e.g. improved workplace safety) are also being delivered by AI applications.

Alongside these firm-level benefits and opportunities, significant positive societal and economy-wide impacts are envisaged. More specifically, substantial increases in productivity, innovation, growth and job creation have been forecast. For example, one estimate anticipates labour productivity increases of 11-37% by 2035. In addition, AI is expected to contribute positively to the UN Sustainable Development Goals, and the capabilities of AI and machine learning to address major health challenges, such as the COVID-19 pandemic, are also noteworthy. For instance, AI systems have the potential to accelerate lead times for the development of vaccines and drugs.

However, AI adoption brings a range of challenges…(More)”.

Wrongfully Accused by an Algorithm


Kashmir Hill at the New York Times: “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit….

The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police, according to their report.

Five months later, in March 2019, Jennifer Coulson, a digital image examiner for the Michigan State Police, uploaded a “probe image” — a still from the video, showing the man in the Cardinals cap — to the state’s facial recognition database. The system would have mapped the man’s face and searched for similar ones in a collection of 49 million photos.
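The article does not spell out the mechanics, but face search systems of this kind typically reduce each face to a numeric embedding and then rank the gallery by similarity to the probe. Here is a minimal, hypothetical sketch in Python; the embeddings, their dimension, and the gallery size are invented for illustration, not DataWorks’ actual pipeline:

```python
# Hypothetical sketch of an embedding-based face search, not DataWorks'
# actual system: rank gallery faces by cosine similarity to the probe.
import numpy as np

def top_matches(probe: np.ndarray, gallery: np.ndarray, k: int = 10):
    """Return indices and scores of the k gallery faces most like the probe."""
    probe_n = probe / np.linalg.norm(probe)
    gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gallery_n @ probe_n            # cosine similarity to every face
    best = np.argsort(sims)[::-1][:k]     # highest similarity first
    return best, sims[best]

# Toy stand-ins for the 49 million mug-shot embeddings described above.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 128))  # 128-dim embeddings (invented)
probe = rng.normal(size=128)              # embedding of the "probe image"

indices, scores = top_matches(probe, gallery)
print(indices[:3], scores[:3])
```

The key point for this story is that such a system returns a ranked list of candidates, not a positive identification, and a grainy surveillance still can easily put the wrong person at the top of that list.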

The state’s technology is supplied for $5.5 million by a company called DataWorks Plus. Founded in South Carolina in 2000, the company first offered mug shot management software, said Todd Pastorini, a general manager. In 2005, the firm began to expand the product, adding face recognition tools developed by outside vendors.

When one of these subcontractors develops an algorithm for recognizing faces, DataWorks attempts to judge its effectiveness by running searches using low-quality images of individuals it knows are present in a system. “We’ve tested a lot of garbage out there,” Mr. Pastorini said. These checks, he added, are not “scientific” — DataWorks does not formally measure the systems’ accuracy or bias.

“We’ve become a pseudo-expert in the technology,” Mr. Pastorini said.

In Michigan, the DataWorks software used by the state police incorporates components developed by the Japanese tech giant NEC and by Rank One Computing, based in Colorado, according to Mr. Pastorini and a state police spokeswoman. In 2019, algorithms from both companies were included in a federal study of over 100 facial recognition systems that found they were biased, falsely identifying African-American and Asian faces 10 to 100 times more often than Caucasian faces….(More)”.

Panopticon Reborn: Social Credit as Regulation for the Age of AI


Paper by Kevin Werbach: “Technology scholars, policy-makers, and executives in Europe and the United States disagree violently about what the digitally connected world should look like. They agree on what it shouldn’t: the Orwellian panopticon of China’s Social Credit System (SCS). SCS is a government-led initiative to promote data-driven compliance with law and social values, using databases, analytics, blacklists, and software applications. In the West, it is widely viewed as a diabolical effort to crush any spark of resistance to the dictates of the Chinese Communist Party (CCP) and its corporate emissaries. This picture is, if not wholly incorrect, decidedly incomplete. SCS is the world’s most advanced prototype of a regime of algorithmic regulation. It is a sophisticated and comprehensive effort not only to expand algorithmic control, but also to restrain it. Understanding China’s system is crucial for resolving the great challenges we face in the emerging era of relentless data aggregation, ubiquitous analytics, and algorithmic control….(More)”.

The Bigot in the Machine: Bias in Algorithmic Systems


Article by Barbara Fister: “We are living in an “age of algorithms.” Vast quantities of information are collected, sorted, shared, combined, and acted on by proprietary black boxes. These systems use machine learning to build models and make predictions from data sets that may be out of date, incomplete, and biased. We will explore the ways bias creeps into information systems, take a look at how “big data,” artificial intelligence and machine learning often amplify bias unwittingly, and consider how these systems can be deliberately exploited by actors for whom bias is a feature, not a bug. Finally, we’ll discuss ways we can work with our communities to create a more fair and just information environment….(More)”.
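To make the point concrete, here is a minimal, synthetic Python sketch (ours, not the author’s): a model trained on historical labels that under-record positive outcomes for one group learns to score that group lower, even though the true behavior is identical across groups.

```python
# Synthetic illustration (not from the article): a model trained on
# biased historical labels reproduces the bias in its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)            # true signal, identical across groups

# True outcomes depend only on the signal -- same rate for both groups.
true_label = (skill + rng.normal(0, 0.5, n) > 0).astype(int)

# Historical records under-count positives for group B (label bias).
observed = true_label.copy()
observed[(group == 1) & (rng.random(n) < 0.4)] = 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, observed)
scores = model.predict_proba(X)[:, 1]

print("mean predicted score, group A:", round(scores[group == 0].mean(), 3))
print("mean predicted score, group B:", round(scores[group == 1].mean(), 3))
# Group B is scored lower despite identical true rates: the model has
# faithfully learned the bias in the data, not the world.
```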

Scraping Court Records Data to Find Dirty Cops


Article by Lawsuit.org: “In the 2002 dystopian sci-fi film “Minority Report,” law enforcement can manage crime by “predicting” illegal behavior before it happens. While fictional, the plot is intriguing and contributes to the conversation on advanced crime-fighting technology. However, today’s world may not be far off.

Data’s role in our lives and greater access to artificial intelligence are changing the way we approach topics such as research, real estate, and law enforcement. In fact, recent investigative reporting has shown that “dozens of [American] cities” are now experimenting with predictive policing technology.

Despite the current controversy surrounding predictive policing, it seems to be a growing trend that has been met with little real resistance. We may be closer to policing that mirrors the frightening depictions in “Minority Report” than we ever thought possible. 

Fighting Fire With Fire

In its current state, predictive policing is defined as:

“The usage of mathematical, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators’ identities, and methods for predicting victims of crime.”
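As a concrete example of the first category in that definition (“methods for predicting crimes”), a common approach scores map cells by recency-weighted historical incident counts. The toy Python sketch below is illustrative only; the grid size, decay rate, and incident data are all invented:

```python
# Toy sketch of count-based crime "hotspot" prediction (invented data).
import numpy as np

GRID = (20, 20)          # city divided into 20x20 cells (hypothetical)
DECAY = 0.9              # older weeks count for less

rng = np.random.default_rng(2)
weeks = [rng.poisson(0.3, GRID) for _ in range(8)]   # fake incident counts

score = np.zeros(GRID)
for age, counts in enumerate(reversed(weeks)):       # newest week first
    score += (DECAY ** age) * counts

# "Hotspots" = top 5% of cells by weighted history; patrols are then
# directed there -- which is also how feedback loops arise, since more
# patrols generate more recorded incidents in the same cells.
hot = score >= np.quantile(score, 0.95)
print(f"{hot.sum()} predicted hotspot cells out of {score.size}")
```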

While it might not be possible to prevent predictive policing from being employed by the criminal justice system, perhaps there are ways we can create a more level playing field: one where the powers of big data analysis aren’t just used to predict crime, but also to police law enforcement itself.

Below, we’ve provided a detailed breakdown of what this potential reality could look like when applied to one South Florida county’s public databases, along with information on how citizens and communities can use public data to better understand the behaviors of local law enforcement and even individual police officers….(More)”.
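As a hedged illustration of the general technique the article applies, the Python sketch below queries a public docket search and tallies cases naming a given officer. The URL, query parameters, field names, and HTML selectors are placeholders, not the actual Lawsuit.org workflow or the real county site:

```python
# Hypothetical sketch: pulling case records from a public court search
# portal. The URL, parameters, and HTML structure are placeholders.
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://courts.example-county.gov/public/search"  # placeholder

def fetch_cases_for_officer(officer_name: str) -> list[dict]:
    """Search the public docket for cases naming a given officer."""
    resp = requests.get(BASE_URL, params={"party": officer_name}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    cases = []
    for row in soup.select("table.results tr")[1:]:   # skip header row
        cells = [td.get_text(strip=True) for td in row.select("td")]
        if len(cells) >= 3:
            cases.append({"case_no": cells[0],
                          "filed": cells[1],
                          "charge": cells[2]})
    return cases

# Aggregating across officers can surface outliers, e.g. unusually many
# dismissed cases or suppression motions tied to one name.
for officer in ["J. Doe", "R. Roe"]:   # hypothetical names
    print(officer, len(fetch_cases_for_officer(officer)), "cases found")
```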

IBM quits facial recognition, joins call for police reforms


AP Article by Matt O’Brien: “IBM is getting out of the facial recognition business, saying it’s concerned about how the technology can be used for mass surveillance and racial profiling.

Ongoing protests responding to the death of George Floyd have sparked a broader reckoning over racial injustice and a closer look at the use of police technology to track demonstrators and monitor American neighborhoods.

IBM is one of several big tech firms that had earlier sought to improve the accuracy of their face-scanning software after research found racial and gender disparities. But its new CEO is now questioning whether it should be used by police at all.

“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” wrote CEO Arvind Krishna in a letter sent Monday to U.S. lawmakers.

IBM’s decision to stop building and selling facial recognition software is unlikely to affect its bottom line, since the tech giant is increasingly focused on cloud computing while an array of lesser-known firms have cornered the market for government facial recognition contracts.

“But the symbolic nature of this is important,” said Mutale Nkonde, a research fellow at Harvard and Stanford universities who directs the nonprofit AI For the People.

Nkonde said IBM shutting down a business “under the guise of advancing anti-racist business practices” shows that it can be done and makes it “socially unacceptable for companies who tweet Black Lives Matter to do so while contracting with the police.”…(More)”.

AI Procurement in a Box


Toolbox by the World Economic Forum: “AI Procurement in a Box is a practical guide that helps governments rethink the procurement of artificial intelligence (AI) with a focus on innovation, efficiency and ethics. Developing a new approach to the acquisition of emerging technologies such as AI will not only accelerate the adoption of AI in public administration, but also drive the development of ethical standards in AI development and deployment. Innovative procurement approaches have the potential to foster innovation, create competitive markets for AI systems and uphold public trust in the public-sector adoption of AI.

AI has the potential to vastly improve government operations and meet the needs of citizens in new ways, ranging from intelligently automating administrative processes to generating insights for public policy development and improving public service delivery, for example through personalized healthcare. Many public institutions are lagging behind in harnessing this powerful technology because of challenges related to data, skills and ethical deployment.

Public procurement can be an important driver of government adoption of AI. This means not only ensuring that AI-driven technologies offering the best value for money are purchased, but also driving the ethical development and deployment of innovative AI systems….(More)”.

Using Algorithms to Address Trade-Offs Inherent in Predicting Recidivism


Paper by Jennifer L. Skeem and Christopher Lowenkamp: “Although risk assessment has increasingly been used as a tool to help reform the criminal justice system, some stakeholders are adamantly opposed to using algorithms. The principal concern is that any benefits achieved by safely reducing rates of incarceration will be offset by costs to racial justice claimed to be inherent in the algorithms themselves. But fairness tradeoffs are inherent to the task of predicting recidivism, whether the prediction is made by an algorithm or a human.

Based on a matched sample of 67,784 Black and White federal supervisees assessed with the Post Conviction Risk Assessment (PCRA), we compare how three alternative strategies for “debiasing” algorithms affect these tradeoffs, using arrest for a violent crime as the criterion. These candidate algorithms all strongly predict violent re-offending (AUCs = .71-.72), but vary in their association with race (r = .00-.21) and shift the tradeoff between balance in positive predictive value and false positive rates. Providing algorithms with access to race (rather than omitting race or ‘blinding’ its effects) can maximize calibration and minimize imbalanced error rates. Implications for policymakers with value preferences for efficiency vs. equity are discussed…(More)”.
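For readers unfamiliar with the tradeoff the abstract describes, here is an illustrative Python sketch (synthetic data, not the PCRA sample or the authors’ code) computing the two metrics in tension, positive predictive value (PPV) and false positive rate (FPR), by group. When two groups have different base rates and the classifier is imperfect, no single threshold can equalize both metrics at once:

```python
# Illustrative sketch: PPV and FPR by group at a fixed risk-score cutoff.
# Synthetic data only; equal predictive signal, different base rates.
import numpy as np

def ppv_fpr(scores, outcomes, threshold):
    """PPV and FPR for binary outcomes at a given risk-score cutoff."""
    flagged = scores >= threshold
    tp = np.sum(flagged & (outcomes == 1))
    fp = np.sum(flagged & (outcomes == 0))
    ppv = tp / max(tp + fp, 1)
    fpr = fp / max(np.sum(outcomes == 0), 1)
    return ppv, fpr

rng = np.random.default_rng(1)
for name, base_rate in [("group A", 0.30), ("group B", 0.45)]:
    outcomes = (rng.random(50_000) < base_rate).astype(int)
    # Same score distribution given the outcome for both groups.
    scores = np.clip(outcomes * 0.25 + rng.normal(0.4, 0.2, 50_000), 0, 1)
    ppv, fpr = ppv_fpr(scores, outcomes, threshold=0.6)
    print(f"{name}: PPV={ppv:.2f}, FPR={fpr:.2f}")
# The groups end up with equal FPR but unequal PPV: with unequal base
# rates, balancing one metric unbalances the other.
```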