The Ethical Algorithm: The Science of Socially Aware Algorithm Design


Book by Michael Kearns and Aaron Roth: “Over the course of a generation, algorithms have gone from mathematical abstractions to powerful mediators of daily life. Algorithms have made our lives more efficient, more entertaining, and, sometimes, better informed. At the same time, complex algorithms are increasingly violating the basic rights of individual citizens. Allegedly anonymized datasets routinely leak our most sensitive personal information; statistical models for everything from mortgages to college admissions reflect racial and gender bias. Meanwhile, users manipulate algorithms to “game” search engines, spam filters, online reviewing services, and navigation apps.

Understanding and improving the science behind the algorithms that run our lives is rapidly becoming one of the most pressing issues of this century. Traditional fixes, such as laws, regulations and watchdog groups, have proven woefully inadequate. Reporting from the cutting edge of scientific research, The Ethical Algorithm offers a new approach: a set of principled solutions based on the emerging and exciting science of socially aware algorithm design. Michael Kearns and Aaron Roth explain how we can better embed human principles into machine code – without halting the advance of data-driven scientific exploration. Weaving together innovative research with stories of citizens, scientists, and activists on the front lines, The Ethical Algorithm offers a compelling vision for a future, one in which we can better protect humans from the unintended impacts of algorithms while continuing to inspire wondrous advances in technology….(More)”.

We are finally getting better at predicting organized conflict


Tate Ryan-Mosley at MIT Technology Review: “People have been trying to predict conflict for hundreds, if not thousands, of years. But it’s hard, largely because scientists can’t agree on its nature or how it arises. The critical factor could be something as apparently innocuous as a booming population or a bad year for crops. Other times a spark ignites a powder keg, as with the assassination of Archduke Franz Ferdinand of Austria in the run-up to World War I.

Political scientists and mathematicians have come up with a slew of different methods for forecasting the next outbreak of violence—but no single model properly captures how conflict behaves. A study published in 2011 by the Peace Research Institute Oslo used a single model to run global conflict forecasts from 2010 to 2050. It estimated less than a 0.05% chance of violence in Syria. Humanitarian organizations, which could have been better prepared had the predictions been more accurate, were caught flat-footed by the outbreak of Syria’s civil war in March 2011. It has since displaced some 13 million people.

Bundling individual models to maximize their strengths and weed out weaknesses has resulted in big improvements. The first public ensemble model, the Early Warning Project, launched in 2013 to forecast new instances of mass killing. Run by researchers at the US Holocaust Memorial Museum and Dartmouth College, it claims 80% accuracy in its predictions.

Improvements in data gathering, translation, and machine learning have further advanced the field. A newer model called ViEWS, built by researchers at Uppsala University, provides a huge boost in granularity. Focusing on conflict in Africa, it offers monthly predictive readouts on multiple regions within a given state. Its threshold for violence is a single death.

Some researchers say there are private—and in some cases, classified—predictive models that are likely far better than anything public. Worries that making predictions public could undermine diplomacy or change the outcome of world events are not unfounded. But that is precisely the point. Public models are good enough to help direct aid to where it is needed and alert those most vulnerable to seek safety. Properly used, they could change things for the better, and save lives in the process….(More)”.
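The bundling idea described above, combining several imperfect models so their errors offset, can be sketched in a few lines. The component models, their outputs, and the weights below are invented for illustration and are not taken from the Early Warning Project or ViEWS:

```python
# Minimal sketch of an ensemble conflict forecast: combine the
# violence-risk probabilities of several hypothetical component models.
# All numbers here are invented for illustration.

def ensemble_risk(predictions, weights=None):
    """Weighted average of per-model risk probabilities (0..1)."""
    if weights is None:
        weights = [1.0] * len(predictions)
    total = sum(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / total

# Three hypothetical models disagree about one region's risk:
structural = 0.02   # slow-moving factors (demography, economy)
event_based = 0.30  # recent event counts (protests, skirmishes)
expert = 0.10       # elicited expert judgment

print(round(ensemble_risk([structural, event_based, expert]), 3))             # 0.14
print(round(ensemble_risk([structural, event_based, expert], [1, 2, 1]), 3))  # 0.18
```

An ensemble like this dampens the failure modes of any single component: a structural model alone would have scored the region near zero, much as the single-model 2011 forecast did for Syria.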

Artificial intelligence: From expert-only to everywhere


Deloitte: “…AI consists of multiple technologies. At its foundation are machine learning and its more complex offspring, deep-learning neural networks. These technologies animate AI applications such as computer vision, natural language processing, and the ability to harness huge troves of data to make accurate predictions and to unearth hidden insights (see sidebar, “The parlance of AI technologies”). The recent excitement around AI stems from advances in machine learning and deep-learning neural networks—and the myriad ways these technologies can help companies improve their operations, develop new offerings, and provide better customer service at a lower cost.

The trouble with AI, however, is that to date, many companies have lacked the expertise and resources to take full advantage of it. Machine learning and deep learning typically require teams of AI experts, access to large data sets, and specialized infrastructure and processing power. Companies that can bring these assets to bear then need to find the right use cases for applying AI, create customized solutions, and scale them throughout the company. All of this requires a level of investment and sophistication that takes time to develop, and is out of reach for many….

These tech giants are using AI to create billion-dollar services and to transform their operations. To develop their AI services, they’re following a familiar playbook: (1) find a solution to an internal challenge or opportunity; (2) perfect the solution at scale within the company; and (3) launch a service that quickly attracts mass adoption. Hence, we see Amazon, Google, Microsoft, and China’s BATs launching AI development platforms and stand-alone applications to the wider market based on their own experience using them.

Joining them are big enterprise software companies that are integrating AI capabilities into cloud-based enterprise software and bringing them to the mass market. Salesforce, for instance, integrated its AI-enabled business intelligence tool, Einstein, into its CRM software in September 2016; the company claims to deliver 1 billion predictions per day to users. SAP integrated AI into its cloud-based ERP system, S/4HANA, to support specific business processes such as sales, finance, procurement, and the supply chain. S/4HANA has around 8,000 enterprise users, and SAP is driving its adoption by announcing that the company will not support legacy SAP ERP systems past 2025.

A host of startups is also sprinting into this market with cloud-based development tools and applications. These startups include at least six AI “unicorns,” two of which are based in China. Some of these companies target a specific industry or use case. For example, CrowdStrike, a US-based AI unicorn, focuses on cybersecurity, while BenevolentAI uses AI to improve drug discovery.

The upshot is that these innovators are making it easier for more companies to benefit from AI technology even if they lack top technical talent, access to huge data sets, and their own massive computing power. Through the cloud, they can access services that address these shortfalls—without having to make big upfront investments. In short, the cloud is democratizing access to AI by giving companies the ability to use it now….(More)”.

Could AI Drive Transformative Social Progress? What Would This Require?


Paper by Edward (Ted) A. Parson et al: “In contrast to popular dystopian speculation about the societal impacts of widespread AI deployment, we consider AI’s potential to drive a social transformation toward greater human liberty, agency, and equality. The impact of AI, like all technology, will depend on both properties of the technology and the economic, social, and political conditions of its deployment and use. We identify conditions of each type – technical characteristics and socio-political context – likely to be conducive to such large-scale beneficial impacts.

Promising technical characteristics include decision-making structures that are tentative and pluralistic, rather than optimizing a single-valued objective function under a single characterization of world conditions; and configuring the decision-making of AI-enabled products and services exclusively to advance the interests of their users, not those of their developers or vendors, subject to relevant social values. We explore various strategies and business models for developing and deploying AI-enabled products that incorporate these characteristics, including philanthropic seed capital, crowd-sourcing, and open-source development, and sketch various possible ways to scale deployment thereafter….(More)”.

Overbooked and Overlooked: Machine Learning and Racial Bias in Medical Appointment Scheduling


Paper by Michele Samorani et al: “Machine learning is often employed in appointment scheduling to identify the patients with the greatest no-show risk, so as to schedule them into overbooked slots, and thereby maximize clinic performance, as measured by a weighted sum of all patients’ waiting time and the provider’s overtime and idle time. However, if the patients with the greatest no-show risk belong to the same demographic group, then that demographic group will be scheduled in overbooked slots disproportionately to the general population. This is problematic because patients scheduled in those slots tend to have a worse service experience than the other patients, as measured by the time they spend in the waiting room. Such negative experience may decrease patients’ engagement and, in turn, further increase no-shows. Motivated by the real-world case of a large specialty clinic whose black patients have a higher no-show probability than non-black patients, we demonstrate that combining machine learning with scheduling optimization causes racial disparity in terms of patient waiting time. Our solution to eliminate this disparity while maintaining the benefits derived from machine learning consists of explicitly including the objective of minimizing racial disparity. We validate our solution method both on simulated data and real-world data, and find that racial disparity can be completely eliminated with no significant increase in scheduling cost when compared to the traditional predictive overbooking framework….(More)”.
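The trade-off the paper studies can be reproduced in a toy model. The patients, risk scores, and penalty weight below are invented, and a brute-force search stands in for the authors' actual scheduling optimization: the traditional policy overbooks purely by predicted no-show risk, while the fairness-aware variant adds a disparity term to the objective.

```python
# Toy illustration (invented data, not the authors' model) of risk-only
# overbooking versus overbooking with an explicit disparity penalty.
from itertools import combinations

def overbook_traditional(patients, k):
    """patients: list of (name, group, no_show_risk). Pick top-k by risk."""
    ranked = sorted(patients, key=lambda p: p[2], reverse=True)
    return [p[0] for p in ranked[:k]]

def disparity(assigned, patients, group):
    """|group's share among overbooked - group's share overall|."""
    names = set(assigned)
    in_group = [p for p in patients if p[1] == group]
    assigned_in_group = [p for p in in_group if p[0] in names]
    return abs(len(assigned_in_group) / len(assigned)
               - len(in_group) / len(patients))

def overbook_fair(patients, k, group, lam=1.0):
    """Brute force: maximize captured risk minus lam * disparity."""
    best, best_score = None, float("inf")
    for combo in combinations(patients, k):
        names = [p[0] for p in combo]
        score = -sum(p[2] for p in combo) + lam * disparity(names, patients, group)
        if score < best_score:
            best, best_score = names, score
    return best

patients = [
    ("A", "black", 0.40), ("B", "black", 0.35),
    ("C", "non-black", 0.30), ("D", "non-black", 0.10),
]

print(overbook_traditional(patients, 2))         # ['A', 'B']
print(disparity(["A", "B"], patients, "black"))  # 0.5
print(overbook_fair(patients, 2, "black"))       # ['A', 'C']
```

With these numbers, the risk-only policy overbooks only black patients (disparity 0.5); adding the penalty swaps one slot to patient C, removing the disparity entirely while giving up only 0.05 of captured no-show risk, which mirrors the paper's finding that disparity can be eliminated at little scheduling cost.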

Artificial Discretion as a Tool of Governance: A Framework for Understanding the Impact of Artificial Intelligence on Public Administration


Paper by Matthew M Young, Justin B Bullock, and Jesse D Lecy in Perspectives on Public Management and Governance: “Public administration research has documented a shift in the locus of discretion away from street-level bureaucrats to “systems-level bureaucracies” as a result of new information communication technologies that automate bureaucratic processes, and thus shape access to resources and decisions around enforcement and punishment. Advances in artificial intelligence (AI) are accelerating these trends, potentially altering discretion in public management in exciting and challenging ways. We introduce the concept of “artificial discretion” as a theoretical framework to help public managers consider the impact of AI as they face decisions about whether and how to implement it. We operationalize discretion as the execution of tasks that require nontrivial decisions. Using Salamon’s tools of governance framework, we compare artificial discretion to human discretion as task specificity and environmental complexity vary. We evaluate artificial discretion with the criteria of effectiveness, efficiency, equity, manageability, and political feasibility. Our analysis suggests three principal ways that artificial discretion can improve administrative discretion at the task level: (1) increasing scalability, (2) decreasing cost, and (3) improving quality. At the same time, artificial discretion raises serious concerns with respect to equity, manageability, and political feasibility….(More)”.

Dissecting racial bias in an algorithm used to manage the health of populations


Paper by Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan in Science: “Health systems rely on commercial prediction algorithms to identify and help patients with complex health needs. We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias: At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses. Remedying this disparity would increase the percentage of Black patients receiving additional help from 17.7% to 46.5%. The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients. Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise. We suggest that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts….(More)”.
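The mechanism the authors identify, cost standing in as a proxy for illness under unequal access to care, is easy to demonstrate in a toy simulation. The access factor below (0.6 for group B) and the patient distributions are invented for illustration and are not from the paper:

```python
# Toy demonstration of proxy-label bias (invented numbers, not the
# paper's data): both groups have identical distributions of true
# illness, but group B receives less care, so its observed cost is
# systematically lower at every illness level.

def simulate(group, n=1000):
    access = 1.0 if group == "A" else 0.6  # assumed unequal access to care
    # (true illness, observed cost) pairs; cost is the training proxy
    return [(i / n, (i / n) * access) for i in range(n)]

# Even a *perfect* cost predictor inherits the bias: at the same
# predicted-cost (risk score) threshold, the flagged group B patients
# are considerably sicker than the flagged group A patients.
threshold = 0.5
avg_illness = {}
for group in ("A", "B"):
    flagged = [illness for illness, cost in simulate(group) if cost >= threshold]
    avg_illness[group] = sum(flagged) / len(flagged)
    print(group, round(avg_illness[group], 4))
```

No error in the predictor is needed: the disparity comes entirely from the choice of label, which is the paper's central point about proxies for ground truth.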

New Zealand launches draft algorithm charter for government agencies


Mia Hunt at Global Government Forum: “The New Zealand government has launched a draft ‘algorithm charter’ that sets out how agencies should analyse data in a way that is fair, ethical and transparent.

The charter, which is open for public consultation, sets out 10 points that agencies would have to adhere to. These include pledging to explain how significant decisions are informed by algorithms or, where this is not possible (for national security reasons, for example), to explain why; taking into account the perspectives of communities, such as LGBTQI+, Pacific islanders and people with disabilities; and identifying and consulting with groups or stakeholders with an interest in algorithm development.

Agencies would also have to publish information about how data is collected and stored; use tools and processes to ensure that privacy, ethics, and human rights considerations are integrated as part of algorithm development and procurement; and periodically assess decisions made by algorithms for unintended bias.

They would commit to implementing a “robust” peer-review process, and have to explain clearly who is responsible for automated decisions and what methods exist for challenge or appeal “via a human”….

The charter – which fits on a single page, and is designed to be simple and easily understood – explains that algorithms are a “fundamental element” of data analytics, which supports public services and delivers “new, innovative and well-targeted” policy aims.

The charter begins: “In a world where technology is moving rapidly, and artificial intelligence is on the rise, it’s essential that government has the right safeguards in place when it uses public data for decision-making. The government must ensure that data ethics are embedded in its work, and always keep in mind the people and communities being served by these tools.”

It says Stats NZ, the country’s official data agency, is “committed to transparent and accountable use of operational algorithms and other advanced data analytics techniques that inform decisions significantly impacting on individuals or groups”….(More)”.

The Economics of Artificial Intelligence


Book edited by Ajay Agrawal, Joshua Gans and Avi Goldfarb: “Advances in artificial intelligence (AI) highlight the potential of this technology to affect productivity, growth, inequality, market power, innovation, and employment. This volume seeks to set the agenda for economic research on the impact of AI.

It covers four broad themes: AI as a general purpose technology; the relationships between AI, growth, jobs, and inequality; regulatory responses to changes brought on by AI; and the effects of AI on the way economic research is conducted. It explores the economic influence of machine learning, the branch of computational statistics that has driven much of the recent excitement around AI, as well as the economic impact of robotics and automation and the potential economic consequences of a still-hypothetical artificial general intelligence. The volume provides frameworks for understanding the economic impact of AI and identifies a number of open research questions…. (More)”

Ethical guidelines issued by engineers’ organization fail to gain traction


Blogpost by Nicolas Kayser-Bril: “In early 2016, the Institute of Electrical and Electronics Engineers, a professional association known as IEEE, launched a “global initiative to advance ethics in technology.” After almost three years of work and multiple rounds of exchange with experts on the topic, last April it released the first edition of Ethically Aligned Design, a 300-page treatise on the ethics of automated systems.

The general principles issued in the report focus on transparency, human rights and accountability, among other topics. As such, they are not very different from the 83 other ethical guidelines that researchers from the Health Ethics and Policy Lab of the Swiss Federal Institute of Technology in Zurich reviewed in an article published in Nature Machine Intelligence in September. However, one key aspect makes IEEE different from other think-tanks. With over 420,000 members, it is the world’s largest engineers’ association with roots reaching deep into Silicon Valley. Vint Cerf, one of Google’s Vice Presidents, is an IEEE “life fellow.”

Because the purpose of the IEEE principles is to serve as a “key reference for the work of technologists”, and because many technologists contributed to their conception, we wanted to know how three technology companies, Facebook, Google and Twitter, were planning to implement them.

Transparency and accountability

Principle number 5, for instance, requires that the basis of a particular automated decision be “discoverable”. On Facebook and Instagram, the reasons why a particular item is shown on a user’s feed are all but discoverable. Facebook’s “Why You’re Seeing This Post” feature explains that “many factors” are involved in the decision to show a specific item. The help page designed to clarify the matter fails to do so: many sentences there use opaque wording (users are told that “some things influence ranking”, for instance) and the basis of the decisions governing their newsfeeds is impossible to find.

Principle number 6 states that any autonomous system shall “provide an unambiguous rationale for all decisions made.” Google’s advertising systems do not provide an unambiguous rationale when explaining why a particular advert was shown to a user. A click on “Why This Ad” states that an “ad may be based on general factors … [and] information collected by the publisher” (our emphasis). Such vagueness is antithetical to the requirement for explicitness.

AlgorithmWatch sent detailed letters (which you can read below this article) with these examples and more, asking Google, Facebook and Twitter how they planned to implement the IEEE guidelines. This was in June. After a great many emails, phone calls and personal meetings, only Twitter answered. Google gave a vague comment and Facebook promised an answer which never came…(More)”