Artificial Intelligence and Public Policy


Paper by Adam D. Thierer, Andrea Castillo and Raymond Russell: “There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Fearing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later…(More)”.

Patient Power: Crowdsourcing in Cancer


Bonnie J. Addario at the HuffPost: “…Understanding how to manage and manipulate vast sums of medical data to improve research and treatments has become a top priority in the cancer enterprise. Researchers at the University of North Carolina at Chapel Hill are using IBM’s Watson and its artificial intelligence computing power to great effect. Dr. Norman Sharpless told Charlie Rose from CBS’ 60 Minutes that Watson is reading tens of millions of medical papers weekly (8,000 new cancer research papers are published every day) and regularly scanning the web for new clinical trials that most people, including researchers, are unaware of. The task is “essentially undoable,” he said, for even the best, well-informed experts.

UNC’s effort is truly wonderful, albeit a macro approach, less tailored and accessible only to certain medical centers. My experience tells me what the real problem is: How does a patient newly diagnosed with lung cancer, fragile and scared, find the most relevant information without being overwhelmed and giving up? If the experts can’t easily find key data without Watson’s help, and Google’s first try turns up millions upon millions of semi-useful results, how do we build hope that there are good online answers for our patients?

We’ve thought about this a lot at the Addario Lung Cancer Foundation and figured out that the answer lies with the patients themselves. Why not crowdsource it with people who have lung cancer, their caregivers and family members?

So, we created the first-ever global Lung Cancer Patient Registry that simplifies the collection, management and distribution of critical health-related information – all in one place so that researchers and patients can easily access and find data specific to lung cancer patients.

This is a data-rich environment for those focusing solely on finding a cure for lung cancer. And it gives patients access to other patients to compare notes and generally feel safe sharing intimate details with their peers….(More)”

Automation Beyond the Physical: AI in the Public Sector


Ben Miller at Government Technology: “…The technology is, by nature, broadly applicable. If a thing involves data — “data” itself being a nebulous word — then it probably has room for AI. AI can help manage the data, analyze it and find patterns that humans might not have thought of. When it comes to big data, or data sets so big that they become difficult for humans to manually interact with, AI leverages the speedy nature of computing to find relationships that might otherwise be proverbial haystack needles.

One early area of government application is in customer service chatbots. As state and local governments started putting information on websites in the past couple of decades, they found that they could use those portals as a means of answering questions that constituents used to have to call an office to ask.

Ideally that resulted in a cyclical victory: Government offices didn’t have as many calls to answer, so they could devote more time and resources to other functions. And when somebody did call in, their call might be answered faster.

With chatbots, governments are betting they can answer even more of those questions. When he was the chief technology and innovation officer of North Carolina, Eric Ellis oversaw the setup of a system that did just that for IT help desk calls.

Turned out, more than 80 percent of the help desk’s calls were people who wanted to change their passwords. For something like that, where the process is largely the same each time, a bot can speed up the process with a little help from AI. Then, just like with the government Web portal, workers are freed up to respond to the more complicated calls faster….

Others are using AI to recognize and report objects in photographs and videos — guns, waterfowl, cracked concrete, pedestrians, semi-trucks, everything. Others are using AI to help translate between languages dynamically. Some want to use it to analyze the tone of emails. Some are using it to try to keep up with cybersecurity threats even as they morph and evolve. After all, if AI can learn to beat professional poker players, then why can’t it learn how digital black hats operate?

Castro sees another use for the technology, a more introspective one. The problem is this: The government workforce is a lot older than the private sector, and that can make it hard to create culture change. According to U.S. Census Bureau data, about 27 percent of public-sector workers are millennials, compared with 38 percent in the private sector.

“The traditional view [of government work] is you fill out a lot of forms, there are a lot of boring meetings. There’s a lot of bureaucracy in government,” Castro said. “AI has the opportunity to change a lot of that, things like filling out forms … going to routine meetings and stuff.”

As AI becomes more and more ubiquitous, people who work both inside and with government are coming up with an ever-expanding list of ways to use it. Here’s an inexhaustive list of specific use cases — some of which are already up and running and some of which are still just ideas….(More)”.
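The password-reset chatbot described in the excerpt above amounts to matching an incoming message against a small set of known intents and escalating to a human when nothing matches. A minimal sketch of that pattern follows; the intents, keywords, and responses here are hypothetical illustrations, not drawn from North Carolina’s actual system:

```python
# Minimal sketch of a keyword-based help-desk chatbot router.
# All intents, keywords, and canned responses are illustrative only.

INTENTS = {
    "password_reset": {
        "keywords": {"password", "reset", "locked", "login"},
        "response": "You can reset your password at the self-service portal.",
    },
    "vpn_help": {
        "keywords": {"vpn", "remote", "connect"},
        "response": "See the VPN setup guide, or reply 'agent' for a human.",
    },
}

FALLBACK = "Routing you to a human agent."


def route(message: str) -> str:
    """Return the canned response for the best-matching intent, else escalate."""
    words = set(message.lower().split())
    best, best_overlap = None, 0
    for intent in INTENTS.values():
        overlap = len(words & intent["keywords"])
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best["response"] if best else FALLBACK


print(route("I am locked out and need a password reset"))
print(route("my printer is on fire"))
```

Production chatbots replace the keyword overlap with trained intent classifiers, but the overall shape — match common requests, answer them instantly, hand everything else to a person — is the same one that let the help desk automate the 80 percent of calls that were password changes.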

How to Regulate Artificial Intelligence


Oren Etzioni in the New York Times: “…we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.

First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford….

My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information…(More)”

Can AI tools replace feds?


Derek B. Johnson at FCW: “The Heritage Foundation…is calling for increased reliance on automation and the potential creation of a “contractor cloud” offering streamlined access to private sector labor as part of its broader strategy for reorganizing the federal government.

Seeking to take advantage of a united Republican government and a president who has vowed to reform the civil service, the foundation drafted a pair of reports this year attempting to identify strategies for consolidating, merging or eliminating various federal agencies, programs and functions. Among those strategies is a proposal for the Office of Management and Budget to issue a report “examining existing government tasks performed by generously-paid government employees that could be automated.”

Citing research on the potential impacts of automation on the United Kingdom’s civil service, the foundation’s authors estimated that similar efforts across the U.S. government could yield $23.9 billion in reduced personnel costs and a reduction in the size of the federal workforce by 288,000….

The Heritage report also called on the federal government to consider a “contracting cloud.” The idea would essentially be for a government version of TaskRabbit, where agencies could select from a pool of pre-approved individual contractors from the private sector who could be brought in for specialized or seasonal work without going through established contracts. Greszler said the idea came from speaking with subcontractors who complained about having to kick over a certain percentage of their payments to prime contractors even as they did all the work.

Right now the foundation is only calling for the government to examine the potential of the issue and how it would interact with existing or similar vehicles for contracting services like the GSA schedule. Greszler emphasized that any pool of workers would need to be properly vetted to ensure they met federal standards and practices.

“There has to be guidelines or some type of checks, so you’re not having people come off the street and getting access to secure government data,” she said….(More)”

Artificial Intelligence for Citizen Services and Government


Paper by Hila Mehr: “From online services like Netflix and Facebook, to chatbots on our phones and in our homes like Siri and Alexa, we are beginning to interact with artificial intelligence (AI) on a near daily basis. AI is the programming or training of a computer to do tasks typically reserved for human intelligence, whether it is recommending which movie to watch next or answering technical questions. Soon, AI will permeate the ways we interact with our government, too. From small cities in the US to countries like Japan, government agencies are looking to AI to improve citizen services.

While the potential future use cases of AI in government remain bounded by government resources and the limits of both human creativity and trust in government, the most obvious and immediately beneficial opportunities are those where AI can reduce administrative burdens, help resolve resource allocation problems, and take on significantly complex tasks. Many AI case studies in citizen services today fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. These applications could make government work more efficient while freeing up time for employees to build better relationships with citizens. With citizen satisfaction with digital government offerings leaving much to be desired, AI may be one way to bridge the gap while improving citizen engagement and service delivery.

Despite the clear opportunities, AI will not solve systemic problems in government, and could potentially exacerbate issues around service delivery, privacy, and ethics if not implemented thoughtfully and strategically. Agencies interested in implementing AI can learn from previous government transformation efforts, as well as private-sector implementation of AI. Government offices should consider these six strategies for applying AI to their work: make AI a part of a goals-based, citizen-centric program; get citizen input; build upon existing resources; be data-prepared and tread carefully with privacy; mitigate ethical risks and avoid AI decision making; and, augment employees, do not replace them.

This paper explores the various types of AI applications, and current and future uses of AI in government delivery of citizen services, with a focus on citizen inquiries and information. It also offers strategies for governments as they consider implementing AI….(More)”

Digital Decisions Tool


Center for Democracy and Technology (CDT): “Two years ago, CDT embarked on a project to explore what we call “digital decisions” – the use of algorithms, machine learning, big data, and automation to make decisions that impact individuals and shape society. Industry and government are applying algorithms and automation to problems big and small, from reminding us to leave for the airport to determining eligibility for social services and even detecting deadly diseases. This new era of digital decision-making has created a new challenge: ensuring that decisions made by computers reflect values like equality, democracy, and justice. We want to ensure that big data and automation are used in ways that create better outcomes for everyone, and not in ways that disadvantage minority groups.

The engineers and product managers who design these systems are the first line of defense against unfair, discriminatory, and harmful outcomes. To help mitigate harm at the design level, we have launched the first public version of our digital decisions tool. We created the tool to help developers understand and mitigate unintended bias and ethical pitfalls as they design automated decision-making systems.

About the digital decisions tool

This interactive tool translates principles for fair and ethical automated decision-making into a series of questions that can be addressed during the process of designing and deploying an algorithm. The questions address developers’ choices, such as what data to use to train an algorithm, what factors or features in the data to consider, and how to test the algorithm. They also ask about the systems and checks in place to assess risk and ensure fairness. These questions should provoke thoughtful consideration of the subjective choices that go into building an automated decision-making system and how those choices could result in disparate outcomes and unintended harms.

The tool is informed by extensive research by CDT and others about how algorithms and machine learning work, how they’re used, the potential risks of using them to make important decisions, and the principles that civil society has developed to ensure that digital decisions are fair, ethical, and respect civil rights. Some of this research is summarized on CDT’s Digital Decisions webpage….(More)”.
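The structure the excerpt describes — principles translated into stage-by-stage questions a development team works through — can be sketched as a simple checklist with a completeness check. The stages and questions below are paraphrased examples in the spirit of CDT’s tool, not its actual content:

```python
# Illustrative design-review checklist for an automated decision-making
# system. Stages and questions are hypothetical paraphrases, not the
# actual contents of CDT's digital decisions tool.

CHECKLIST = {
    "data": [
        "What data will be used to train the algorithm?",
        "Does the training data under-represent any group?",
    ],
    "features": [
        "Which features does the model consider?",
        "Could any feature act as a proxy for a protected attribute?",
    ],
    "testing": [
        "How will the algorithm be tested before deployment?",
        "Are error rates compared across demographic groups?",
    ],
}


def unanswered(answers: dict) -> list:
    """Return every checklist question that has no recorded answer yet."""
    return [
        question
        for stage_questions in CHECKLIST.values()
        for question in stage_questions
        if not answers.get(question)
    ]


answers = {"What data will be used to train the algorithm?": "2015-2017 case records"}
print(len(unanswered(answers)))  # 5 questions still open
```

The value of this shape is that fairness review becomes an artifact of the design process itself: open questions are enumerable, assignable, and auditable, rather than left to individual engineers’ judgment.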

Algorithmic Transparency for the Smart City


Paper by Robert Brauneis and Ellen P. Goodman: “Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness.

To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain….(More)”.

Opportunities and risks in emerging technologies


White Paper Series at the WebFoundation: “To achieve our vision of digital equality, we need to understand how new technologies are shaping society; where they present opportunities to make people’s lives better, and indeed where they threaten to create harm. To this end, we have commissioned a series of white papers examining three key digital trends: artificial intelligence, algorithms and control of personal data. The papers focus on low and middle-income countries, which are all too often overlooked in debates around the impacts of emerging technologies.

The series addresses each of these three digital issues, looking at how they are impacting people’s lives and identifying steps that governments, companies and civil society organisations can take to limit the harms, and maximise benefits, for citizens.

We will use these white papers to refine our thinking and set our work agenda on digital equality in the years ahead. We are sharing them openly with the hope they benefit others working towards our goals and to amplify the limited research currently available on digital issues in low and middle-income countries. We intend the papers to foster discussion about the steps we can take together to ensure emerging digital technologies are used in ways that benefit people’s lives, whether they are in Los Angeles or Lagos….(More)”.

Rage against the machines: is AI-powered government worth it?


Maëlle Gavet at the WEF: “…the Australian government’s new “data-driven profiling” trial for drug testing welfare recipients, to US law enforcement’s use of facial recognition technology and the deployment of proprietary software in sentencing in many US courts … almost by stealth and with remarkably little outcry, technology is transforming the way we are policed, categorized as citizens and, perhaps one day soon, governed. We are only in the earliest stages of so-called algorithmic regulation — intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws — but it already has profound implications for the relationship between private citizens and the state….

Some may herald this as democracy rebooted. In my view it represents nothing less than a threat to democracy itself — and deep scepticism should prevail. There are five major problems with bringing algorithms into the policy arena:

  1. Self-reinforcing bias…
  2. Vulnerability to attack…
  3. Who’s calling the shots?…
  4. Are governments up to it?…
  5. Algorithms don’t do nuance….

All the problems notwithstanding, there’s little doubt that AI-powered government of some kind will happen. So, how can we avoid it becoming the stuff of bad science fiction? To begin with, we should leverage AI to explore positive alternatives instead of just applying it to support traditional solutions to society’s perceived problems. Rather than simply finding and sending criminals to jail faster in order to protect the public, how about using AI to figure out the effectiveness of other potential solutions? Offering young adult literacy, numeracy and other skills might well represent a far superior and more cost-effective solution to crime than more aggressive law enforcement. Moreover, AI should always be used at a population level, rather than at the individual level, in order to avoid stigmatizing people on the basis of their history, their genes and where they live. The same goes for the more subtle, yet even more pervasive data-driven targeting by prospective employers, health insurers, credit card companies and mortgage providers. While the commercial imperative for AI-powered categorization is clear, when it targets individuals it amounts to profiling with the inevitable consequence that entire sections of society are locked out of opportunity….(More)”.