Seeing Like a Finite State Machine


Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, for example, particular groups that are regarded as problematic for particular police attention, leading them to be more liable to be arrested, and so on), the bias may feed upon itself.

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens democracy up to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.

These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency toward bad decision making and further reducing the possibility of the negative feedback that could help correct errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighurs today. The second will involve more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason….(More)”
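The self-reinforcing loop Farrell describes can be made concrete with a toy simulation. The sketch below is an illustration added here, not from the post: it assumes two groups that behave identically, a historical record that over-counts one of them, and a simple "predictive" allocation rule that keeps sending attention wherever the record says risk is highest.

```python
# Toy sketch (illustrative, not from Farrell's post) of a self-reinforcing bias loop:
# two groups offend at the same true rate, but the historical record over-counts
# group B, and a "predictive" allocation rule sends most attention to whichever
# group the record flags as higher risk.
import random

random.seed(0)

TRUE_RATE = {"A": 0.05, "B": 0.05}   # identical underlying behaviour
arrests = {"A": 60, "B": 90}         # biased starting record
TOTAL_PATROLS = 2000

for year in range(1, 11):
    # the model flags the group with more recorded arrests as "high risk"
    high = "A" if arrests["A"] > arrests["B"] else "B"
    low = "B" if high == "A" else "A"
    patrols = {high: int(0.8 * TOTAL_PATROLS), low: int(0.2 * TOTAL_PATROLS)}

    for group in ("A", "B"):
        # arrests can only be recorded where patrols actually look
        new = sum(1 for _ in range(patrols[group])
                  if random.random() < TRUE_RATE[group])
        arrests[group] += new

    share_b = arrests["B"] / (arrests["A"] + arrests["B"])
    print(f"year {year:2d}: recorded share of arrests for group B = {share_b:.2f}")

# The recorded share drifts from 0.60 toward roughly 0.80: the data keeps
# "confirming" the initial bias, and nothing in the record itself can reveal
# that the two groups behave identically. That is the feedback loop, and the
# absence of any corrective signal, that the excerpt describes.
```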

Principles alone cannot guarantee ethical AI


Paper by Brent Mittelstadt: “Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement….(More)”.

Mayor de Blasio Signs Executive Order to Establish Algorithms Management and Policy Officer


Press release: “Mayor Bill de Blasio today signed an Executive Order to establish an Algorithms Management and Policy Officer within the Mayor’s Office of Operations. The Officer will serve as a centralized resource on algorithm policy and develop guidelines and best practices to assist City agencies in their use of algorithms to make decisions. The new Officer will ensure relevant algorithms used by the City to deliver services promote equity, fairness and accountability. The creation of the position follows review of the recommendations from the Automated Decision Systems (ADS) Task Force Report required by Local Law 49 of 2018, published here.

“Fairness and equity are central to improving the lives of New Yorkers,” said Mayor Bill de Blasio. “With every new technology comes added responsibility, and I look forward to welcoming an Algorithms Management and Policy Officer to my team to ensure the tools we use to make decisions are fair and transparent.”…

The Algorithms Management and Policy Officer will develop guidelines and best practices to assist City agencies in their use of tools or systems that rely on algorithms and related technologies to support decision-making. As part of that effort, the Officer and their personnel support will develop processes for agency reporting and provide resources that will help the public learn more about how New York City government uses algorithms to make decisions and deliver services….(More)”.

AI For Good Is Often Bad


Mark Latonero at Wired: “….Within the last few years, a number of tech companies, from Google to Huawei, have launched their own programs under the AI for Good banner. They deploy technologies like machine-learning algorithms to address critical issues like crime, poverty, hunger, and disease. In May, French president Emmanuel Macron invited about 60 leaders of AI-driven companies, like Facebook’s Mark Zuckerberg, to a Tech for Good Summit in Paris. The same month, the United Nations in Geneva hosted its third annual AI for Good Global Summit sponsored by XPrize. (Disclosure: I have spoken at it twice.) A recent McKinsey report on AI for Social Good provides an analysis of 160 current cases claiming to use AI to address the world’s most pressing and intractable problems.

While AI for good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need. In fact, the fanfare around these projects smacks of tech solutionism, which can mask root causes and the risks of experimenting with AI on vulnerable people without appropriate safeguards.

Tech companies that set out to develop a tool for the common good, not only their self-interest, soon face a dilemma: They lack the expertise in the intractable social and humanitarian issues facing much of the world. That’s why companies like Intel have partnered with National Geographic and the Leonardo DiCaprio Foundation on wildlife trafficking. And why Facebook partnered with the Red Cross to find missing people after disasters. IBM’s social-good program alone boasts 19 partnerships with NGOs and government agencies. Partnerships are smart. The last thing society needs is for engineers in enclaves like Silicon Valley to deploy AI tools for global problems they know little about….(More)”.

Decision-making in the Age of the Algorithm


Paper by Thea Snow: “Frontline practitioners in the public sector – from social workers to police to custody officers – make important decisions every day about people’s lives. Operating in the context of a sector grappling with how to manage rising demand, coupled with diminishing resources, frontline practitioners are being asked to make very important decisions quickly and with limited information. To do this, public sector organisations are turning to new technologies to support decision-making, in particular, predictive analytics tools, which use machine learning algorithms to discover patterns in data and make predictions.

While many guides exist around ethical AI design, there is little guidance on how to support a productive human-machine interaction in relation to AI. This report aims to fill this gap by focusing on the issue of human-machine interaction. How people are working with tools is significant because, simply put, for predictive analytics tools to be effective, frontline practitioners need to use them well. It encourages public sector organisations to think about how people feel about predictive analytics tools – what they’re fearful of, what they’re excited about, what they don’t understand.

Based on insights drawn from an extensive literature review, interviews with frontline practitioners, and discussions with experts across a range of fields, the guide also identifies three key principles that play a significant role in supporting a constructive human-machine relationship: context, understanding, and agency….(More)”.

The Ethical Algorithm: The Science of Socially Aware Algorithm Design


Book by Michael Kearns and Aaron Roth: “Over the course of a generation, algorithms have gone from mathematical abstractions to powerful mediators of daily life. Algorithms have made our lives more efficient, more entertaining, and, sometimes, better informed. At the same time, complex algorithms are increasingly violating the basic rights of individual citizens. Allegedly anonymized datasets routinely leak our most sensitive personal information; statistical models for everything from mortgages to college admissions reflect racial and gender bias. Meanwhile, users manipulate algorithms to “game” search engines, spam filters, online reviewing services, and navigation apps.

Understanding and improving the science behind the algorithms that run our lives is rapidly becoming one of the most pressing issues of this century. Traditional fixes, such as laws, regulations and watchdog groups, have proven woefully inadequate. Reporting from the cutting edge of scientific research, The Ethical Algorithm offers a new approach: a set of principled solutions based on the emerging and exciting science of socially aware algorithm design. Michael Kearns and Aaron Roth explain how we can better embed human principles into machine code – without halting the advance of data-driven scientific exploration. Weaving together innovative research with stories of citizens, scientists, and activists on the front lines, The Ethical Algorithm offers a compelling vision for a future, one in which we can better protect humans from the unintended impacts of algorithms while continuing to inspire wondrous advances in technology….(More)”.

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated a less than 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Memorial Museum and Dartmouth College, it claims 80% accuracy in its predictions.

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.
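The "bundling" the article credits with big improvements is, mechanically, ensembling: several imperfect models each produce a forecast, and their outputs are blended so that one model's blind spot is offset by the others. The sketch below illustrates the idea; the component models and weights are placeholders, not the Early Warning Project's or ViEWS's actual methodology.

```python
# Minimal sketch of an ensemble forecast: each component model produces a
# probability of conflict onset for a country-month, and the ensemble takes a
# weighted average. Model names and weights are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class Forecast:
    model: str
    probability: float   # predicted probability of conflict onset
    weight: float        # trust in this model, e.g. derived from past accuracy

def ensemble_probability(forecasts):
    """Weighted average of the component models' probabilities."""
    total_weight = sum(f.weight for f in forecasts)
    return sum(f.probability * f.weight for f in forecasts) / total_weight

# Hypothetical forecasts for one country-month from three component models.
forecasts = [
    Forecast("structural model (GDP, regime type, history)", probability=0.02, weight=1.0),
    Forecast("event-data model (protest and violence counts)", probability=0.15, weight=1.5),
    Forecast("expert opinion pool", probability=0.08, weight=0.8),
]

p = ensemble_probability(forecasts)
print(f"ensemble probability of onset: {p:.3f}")
# The structural model alone would have said 2%; bundling lets the event-driven
# signal pull the estimate up, which is how ensembles compensate for any single
# model's failure to capture how conflict behaves.
```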

Artificial intelligence: From expert-only to everywhere


Deloitte: “…AI consists of multiple technologies. At its foundation are machine learning and its more complex offspring, deep-learning neural networks. These technologies animate AI applications such as computer vision, natural language processing, and the ability to harness huge troves of data to make accurate predictions and to unearth hidden insights (see sidebar, “The parlance of AI technologies”). The recent excitement around AI stems from advances in machine learning and deep-learning neural networks—and the myriad ways these technologies can help companies improve their operations, develop new offerings, and provide better customer service at a lower cost.

The trouble with AI, however, is that to date, many companies have lacked the expertise and resources to take full advantage of it. Machine learning and deep learning typically require teams of AI experts, access to large data sets, and specialized infrastructure and processing power. Companies that can bring these assets to bear then need to find the right use cases for applying AI, create customized solutions, and scale them throughout the company. All of this requires a level of investment and sophistication that takes time to develop, and is out of reach for many….

These tech giants are using AI to create billion-dollar services and to transform their operations. To develop their AI services, they’re following a familiar playbook: (1) find a solution to an internal challenge or opportunity; (2) perfect the solution at scale within the company; and (3) launch a service that quickly attracts mass adoption. Hence, we see Amazon, Google, Microsoft, and China’s BATs launching AI development platforms and stand-alone applications to the wider market based on their own experience using them.

Joining them are big enterprise software companies that are integrating AI capabilities into cloud-based enterprise software and bringing them to the mass market. Salesforce, for instance, integrated its AI-enabled business intelligence tool, Einstein, into its CRM software in September 2016; the company claims to deliver 1 billion predictions per day to users. SAP integrated AI into its cloud-based ERP system, S/4HANA, to support specific business processes such as sales, finance, procurement, and the supply chain. S/4HANA has around 8,000 enterprise users, and SAP is driving its adoption by announcing that the company will not support legacy SAP ERP systems past 2025.

A host of startups is also sprinting into this market with cloud-based development tools and applications. These startups include at least six AI “unicorns,” two of which are based in China. Some of these companies target a specific industry or use case. For example, CrowdStrike, a US-based AI unicorn, focuses on cybersecurity, while Benevolent.ai uses AI to improve drug discovery.

The upshot is that these innovators are making it easier for more companies to benefit from AI technology even if they lack top technical talent, access to huge data sets, and their own massive computing power. Through the cloud, they can access services that address these shortfalls—without having to make big upfront investments. In short, the cloud is democratizing access to AI by giving companies the ability to use it now….(More)”.

Could AI Drive Transformative Social Progress? What Would This Require?


Paper by Edward (Ted) A. Parson et al: “In contrast to popular dystopian speculation about the societal impacts of widespread AI deployment, we consider AI’s potential to drive a social transformation toward greater human liberty, agency, and equality. The impact of AI, like all technology, will depend on both properties of the technology and the economic, social, and political conditions of its deployment and use. We identify conditions of each type – technical characteristics and socio-political context – likely to be conducive to such large-scale beneficial impacts.

Promising technical characteristics include decision-making structures that are tentative and pluralistic, rather than optimizing a single-valued objective function under a single characterization of world conditions; and configuring the decision-making of AI-enabled products and services exclusively to advance the interests of their users, subject to relevant social values, not those of their developers or vendors. We explore various strategies and business models for developing and deploying AI-enabled products that incorporate these characteristics, including philanthropic seed capital, crowd-sourcing, and open-source development, and sketch various possible ways to scale deployment thereafter….(More)”.
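Read in implementation terms, a "tentative and pluralistic" decision structure might score options under several value functions and several characterizations of the world, and act autonomously only when those perspectives broadly agree, deferring to the user otherwise. The sketch below is one possible reading of that idea added here for illustration, not the paper's own formulation, and all names and numbers in it are hypothetical.

```python
# Sketch (illustrative, not the paper's formulation) of a tentative, pluralistic
# decision rule: score each option under several value functions and several
# world models, and act autonomously only when they broadly agree; otherwise
# defer to the user whose interests the system is configured to serve.

def decide(options, value_functions, world_models, agreement_threshold=0.8):
    """Return (action, confident); confident is False when perspectives disagree."""
    votes = {opt: 0 for opt in options}
    for value in value_functions:
        for world in world_models:
            best = max(options, key=lambda opt: value(opt, world))
            votes[best] += 1

    total = len(value_functions) * len(world_models)
    winner = max(votes, key=votes.get)
    confident = votes[winner] / total >= agreement_threshold
    return winner, confident

# Hypothetical example: a route-planning assistant weighing two user values
# (time saved vs. safety) under two traffic scenarios.
options = ["highway", "side_streets"]
value_functions = [
    lambda opt, world: world["speed"][opt],            # user values time
    lambda opt, world: -world["accident_rate"][opt],   # user values safety
]
world_models = [
    {"speed": {"highway": 90, "side_streets": 40},
     "accident_rate": {"highway": 0.02, "side_streets": 0.01}},
    {"speed": {"highway": 30, "side_streets": 40},     # heavy-congestion scenario
     "accident_rate": {"highway": 0.01, "side_streets": 0.01}},
]

choice, confident = decide(options, value_functions, world_models)
print(choice, "| act autonomously" if confident else "| ask the user")
# With a split verdict (here 2 of 4 perspectives), the system defers to the user
# rather than optimizing a single objective under a single view of the world.
```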

Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling


Paper by Michele Samorani et al: “Machine learning is often employed in appointment scheduling to identify the patients with the greatest no-show risk, so as to schedule them into overbooked slots, and thereby maximize clinic performance, as measured by a weighted sum of all patients’ waiting time and the provider’s overtime and idle time. However, if the patients with the greatest no-show risk belong to the same demographic group, then that demographic group will be scheduled in overbooked slots disproportionately to the general population. This is problematic because patients scheduled in those slots tend to have a worse service experience than the other patients, as measured by the time they spend in the waiting room. Such negative experiences may decrease patient engagement and, in turn, further increase no-shows. Motivated by the real-world case of a large specialty clinic whose black patients have a higher no-show probability than non-black patients, we demonstrate that combining machine learning with scheduling optimization causes racial disparity in terms of patient waiting time. Our solution to eliminate this disparity while maintaining the benefits derived from machine learning consists of explicitly including the objective of minimizing racial disparity. We validate our solution method both on simulated data and real-world data, and find that racial disparity can be completely eliminated with no significant increase in scheduling cost when compared to the traditional predictive overbooking framework….(More)”.
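The remedy the abstract describes amounts to adding a disparity term to the scheduling objective alongside the usual weighted costs. The sketch below is a simplified illustration of that idea, not the authors' exact model; the weights and waiting times are hypothetical.

```python
# Simplified sketch (not the authors' exact formulation) of a fairness-aware
# scheduling objective: the usual weighted cost (patient waiting, provider
# overtime and idle time) plus a penalty on the gap in expected waiting time
# between demographic groups.
from statistics import mean

def schedule_cost(patients, w_wait=1.0, w_overtime=1.5, w_idle=0.5,
                  w_disparity=10.0, overtime_minutes=0, idle_minutes=0):
    """patients: list of dicts with 'group' and 'expected_wait' (minutes)."""
    base = (w_wait * sum(p["expected_wait"] for p in patients)
            + w_overtime * overtime_minutes
            + w_idle * idle_minutes)

    # expected waiting time by group; disparity is the largest pairwise gap
    groups = {p["group"] for p in patients}
    group_waits = {g: mean(p["expected_wait"] for p in patients if p["group"] == g)
                   for g in groups}
    disparity = max(group_waits.values()) - min(group_waits.values())

    return base + w_disparity * disparity

# Hypothetical schedule in which the overbooked (high no-show-risk) slots, and
# hence the longer waits, have all been assigned to one group.
biased = [{"group": "black", "expected_wait": 35},
          {"group": "black", "expected_wait": 30},
          {"group": "non-black", "expected_wait": 10},
          {"group": "non-black", "expected_wait": 12}]

balanced = [{"group": "black", "expected_wait": 22},
            {"group": "black", "expected_wait": 20},
            {"group": "non-black", "expected_wait": 23},
            {"group": "non-black", "expected_wait": 22}]

print("biased schedule cost:  ", schedule_cost(biased))
print("balanced schedule cost:", schedule_cost(balanced))
# Both schedules have the same total waiting time, but the disparity term makes
# the optimizer prefer the balanced one; setting w_disparity=0 recovers the
# traditional predictive-overbooking objective the paper compares against.
```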