Can AI tools replace feds?


Derek B. Johnson at FCW: “The Heritage Foundation…is calling for increased reliance on automation and the potential creation of a “contractor cloud” offering streamlined access to private sector labor as part of its broader strategy for reorganizing the federal government.

Seeking to take advantage of a united Republican government and a president who has vowed to reform the civil service, the foundation drafted a pair of reports this year attempting to identify strategies for consolidating, merging or eliminating various federal agencies, programs and functions. Among those strategies is a proposal for the Office of Management and Budget to issue a report “examining existing government tasks performed by generously-paid government employees that could be automated.”

Citing research on the potential impacts of automation on the United Kingdom’s civil service, the foundation’s authors estimated that similar efforts across the U.S. government could yield $23.9 billion in reduced personnel costs and a reduction in the size of the federal workforce by 288,000….

The Heritage report also called on the federal government to consider a “contracting cloud.” The idea would essentially be a government version of TaskRabbit, where agencies could select from a pool of pre-approved individual contractors from the private sector who could be brought in for specialized or seasonal work without going through established contracts. Greszler said the idea came from speaking with subcontractors who complained about having to kick over a certain percentage of their payments to prime contractors even as they did all the work.

Right now the foundation is only calling for the government to examine the potential of the issue and how it would interact with existing or similar vehicles for contracting services like the GSA schedule. Greszler emphasized that any pool of workers would need to be properly vetted to ensure they met federal standards and practices.

“There has to be guidelines or some type of checks, so you’re not having people come off the street and getting access to secure government data,” she said….(More)

Artificial Intelligence for Citizen Services and Government


Paper by Hila Mehr: “From online services like Netflix and Facebook, to chatbots on our phones and in our homes like Siri and Alexa, we are beginning to interact with artificial intelligence (AI) on a near daily basis. AI is the programming or training of a computer to do tasks typically reserved for human intelligence, whether it is recommending which movie to watch next or answering technical questions. Soon, AI will permeate the ways we interact with our government, too. From small cities in the US to countries like Japan, government agencies are looking to AI to improve citizen services.

While the potential future use cases of AI in government remain bounded by government resources and the limits of both human creativity and trust in government, the most obvious and immediately beneficial opportunities are those where AI can reduce administrative burdens, help resolve resource allocation problems, and take on significantly complex tasks. Many AI case studies in citizen services today fall into five categories: answering questions, filling out and searching documents, routing requests, translation, and drafting documents. These applications could make government work more efficient while freeing up time for employees to build better relationships with citizens. With citizen satisfaction with digital government offerings leaving much to be desired, AI may be one way to bridge the gap while improving citizen engagement and service delivery.
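
To make one of those categories concrete, here is a minimal, purely illustrative sketch of “routing requests” (the paper does not prescribe any implementation): a text classifier that sends citizen inquiries to the appropriate department. The departments and training phrases below are hypothetical.

```python
# Illustrative sketch only: routing citizen inquiries with a simple
# text classifier. Departments and training phrases are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples of past citizen inquiries.
inquiries = [
    "My trash was not collected this week",
    "How do I renew my driver's license?",
    "There is a pothole on Main Street",
    "I need a copy of my birth certificate",
]
departments = ["sanitation", "licensing", "public_works", "records"]

# TF-IDF features plus logistic regression: a common baseline for
# short-text routing tasks.
router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(inquiries, departments)

print(router.predict(["A pothole on Elm Street damaged my car"]))
```

In practice an agency would train on thousands of real inquiries and route low-confidence predictions to a human, keeping employees augmented rather than replaced, as the paper recommends.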

Despite the clear opportunities, AI will not solve systemic problems in government, and could potentially exacerbate issues around service delivery, privacy, and ethics if not implemented thoughtfully and strategically. Agencies interested in implementing AI can learn from previous government transformation efforts, as well as private-sector implementation of AI. Government offices should consider these six strategies for applying AI to their work: make AI a part of a goals-based, citizen-centric program; get citizen input; build upon existing resources; be data-prepared and tread carefully with privacy; mitigate ethical risks and avoid AI decision making; and augment employees, do not replace them.

This paper explores the various types of AI applications, and current and future uses of AI in government delivery of citizen services, with a focus on citizen inquiries and information. It also offers strategies for governments as they consider implementing AI….(More)”

Digital Decisions Tool


Center for Democracy and Technology (CDT): “Two years ago, CDT embarked on a project to explore what we call “digital decisions” – the use of algorithms, machine learning, big data, and automation to make decisions that impact individuals and shape society. Industry and government are applying algorithms and automation to problems big and small, from reminding us to leave for the airport to determining eligibility for social services and even detecting deadly diseases. This new era of digital decision-making has created a new challenge: ensuring that decisions made by computers reflect values like equality, democracy, and justice. We want to ensure that big data and automation are used in ways that create better outcomes for everyone, and not in ways that disadvantage minority groups.

The engineers and product managers who design these systems are the first line of defense against unfair, discriminatory, and harmful outcomes. To help mitigate harm at the design level, we have launched the first public version of our digital decisions tool. We created the tool to help developers understand and mitigate unintended bias and ethical pitfalls as they design automated decision-making systems.

About the digital decisions tool

This interactive tool translates principles for fair and ethical automated decision-making into a series of questions that can be addressed during the process of designing and deploying an algorithm. The questions address developers’ choices, such as what data to use to train an algorithm, what factors or features in the data to consider, and how to test the algorithm. They also ask about the systems and checks in place to assess risk and ensure fairness. These questions should provoke thoughtful consideration of the subjective choices that go into building an automated decision-making system and how those choices could result in disparate outcomes and unintended harms.
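
As one concrete illustration of the kind of check those questions point toward (this is not part of CDT's tool), the sketch below compares an automated system's approval rates across two hypothetical demographic groups, a rough version of the “four-fifths” disparate-impact screen sometimes used in practice.

```python
# Illustrative sketch only, not CDT's tool: comparing favorable-outcome
# rates across groups as a basic disparate-impact check.
import pandas as pd

# Hypothetical predictions from an automated eligibility system.
results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = results.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates)

# A common (and contested) rule of thumb flags ratios below 0.8.
if ratio < 0.8:
    print(f"Warning: approval-rate ratio {ratio:.2f} suggests disparate impact")
```

A single ratio is no substitute for the tool's fuller questions about data provenance, feature choices, and testing, but it shows how a fairness principle can become a concrete check inside a development pipeline.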

The tool is informed by extensive research by CDT and others about how algorithms and machine learning work, how they’re used, the potential risks of using them to make important decisions, and the principles that civil society has developed to ensure that digital decisions are fair, ethical, and respect civil rights. Some of this research is summarized on CDT’s Digital Decisions webpage….(More)”.

Algorithmic Transparency for the Smart City


Paper by Robert Brauneis and Ellen P. Goodman: “Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired, for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic, both because governmental decisions may be especially weighty and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness.

To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain….(More)”.
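
The paper argues for record generation rather than full code disclosure; as a purely hypothetical sketch of what such a record might look like in machine-readable form (field names are illustrative, not the authors' eight information types), an agency might maintain something like the following.

```python
# Hypothetical sketch: a structured transparency record a public agency
# might keep for each deployed predictive system. Field names are
# illustrative, not the paper's eight information types.
from dataclasses import dataclass, field

@dataclass
class AlgorithmRecord:
    system_name: str                 # the deployed predictive program
    vendor: str                      # who developed it
    policy_objective: str            # what the agency wants it to achieve
    input_factors: list              # features the model considers
    validation_summary: str          # how and on what data it was validated
    trade_secret_claims: list = field(default_factory=list)  # narrow exceptions

record = AlgorithmRecord(
    system_name="Hypothetical Pretrial Risk Tool",
    vendor="Example Analytics, Inc. (fictional)",
    policy_objective="Reduce failure-to-appear rates without increasing detention",
    input_factors=["age", "prior failures to appear", "pending charges"],
    validation_summary="Hypothetical: validated on two years of county data",
)
print(record)
```

Records along these lines would give the public the objectives, implementation, and validation information the authors identify, without requiring release of source code.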

Opportunities and risks in emerging technologies


White Paper Series at the WebFoundation: “To achieve our vision of digital equality, we need to understand how new technologies are shaping society: where they present opportunities to make people’s lives better, and indeed where they threaten to create harm. To this end, we have commissioned a series of white papers examining three key digital trends: artificial intelligence, algorithms and control of personal data. The papers focus on low and middle-income countries, which are all too often overlooked in debates around the impacts of emerging technologies.

The series addresses each of these three digital issues, looking at how they are impacting people’s lives and identifying steps that governments, companies and civil society organisations can take to limit the harms, and maximise benefits, for citizens.

We will use these white papers to refine our thinking and set our work agenda on digital equality in the years ahead. We are sharing them openly with the hope they benefit others working towards our goals and to amplify the limited research currently available on digital issues in low and middle-income countries. We intend the papers to foster discussion about the steps we can take together to ensure emerging digital technologies are used in ways that benefit people’s lives, whether they are in Los Angeles or Lagos….(More)”.

Rage against the machines: is AI-powered government worth it?


Maëlle Gavet at the WEF: “…the Australian government’s new “data-driven profiling” trial for drug testing welfare recipients, to US law enforcement’s use of facial recognition technology and the deployment of proprietary software in sentencing in many US courts … almost by stealth and with remarkably little outcry, technology is transforming the way we are policed, categorized as citizens and, perhaps one day soon, governed. We are only in the earliest stages of so-called algorithmic regulation — intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws — but it already has profound implications for the relationship between private citizens and the state….

Some may herald this as democracy rebooted. In my view it represents nothing less than a threat to democracy itself — and deep scepticism should prevail. There are five major problems with bringing algorithms into the policy arena:

  1. Self-reinforcing bias…
  2. Vulnerability to attack…
  3. Who’s calling the shots?…
  4. Are governments up to it?…
  5. Algorithms don’t do nuance….

All the problems notwithstanding, there’s little doubt that AI-powered government of some kind will happen. So, how can we avoid it becoming the stuff of bad science fiction? To begin with, we should leverage AI to explore positive alternatives instead of just applying it to support traditional solutions to society’s perceived problems. Rather than simply finding and sending criminals to jail faster in order to protect the public, how about using AI to figure out the effectiveness of other potential solutions? Offering young adult literacy, numeracy and other skills might well represent a far superior and more cost-effective solution to crime than more aggressive law enforcement. Moreover, AI should always be used at a population level, rather than at the individual level, in order to avoid stigmatizing people on the basis of their history, their genes and where they live. The same goes for the more subtle, yet even more pervasive data-driven targeting by prospective employers, health insurers, credit card companies and mortgage providers. While the commercial imperative for AI-powered categorization is clear, when it targets individuals it amounts to profiling with the inevitable consequence that entire sections of society are locked out of opportunity….(More)”.
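
To make the population-versus-individual distinction concrete, a small hypothetical sketch: the same model outputs can either flag individuals (the practice the author warns against) or be aggregated by district to decide where programs such as adult literacy are funded.

```python
# Hypothetical sketch: using model outputs at the population level
# rather than to profile individuals. All names and scores are invented.
import pandas as pd

predictions = pd.DataFrame({
    "district":   ["North", "North", "South", "South", "South"],
    "risk_score": [0.82, 0.35, 0.71, 0.64, 0.58],
})

# Individual-level use (what the article argues against) would flag
# specific people, e.g. predictions[predictions["risk_score"] > 0.7].

# Population-level use: rank districts by average risk to decide where
# to fund literacy, numeracy, and skills programs.
by_district = (predictions.groupby("district")["risk_score"]
               .mean()
               .sort_values(ascending=False))
print(by_district)
```

Neither use makes the underlying model fair, but the aggregate view avoids tying consequences to an individual's history, genes, or address.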

Government innovations and the hype cycle


Danny Buerkli at the Centre for Public Impact: “The Gartner hype cycle tracks how technologies develop from initial conception to productive use. There is much excitement around different methodologies and technologies in the “government innovation” space, but which of these are hyped and which are truly productive?

Last year we made some educated guesses and placed ten government innovations along the hype cycle. This year, however, we went for something bigger and better. We created an entirely non-scientific poll and asked respondents to tell us where they thought these same ten government innovations sat on the hype cycle.

The innovations we included were artificial intelligence, blockchain, design thinking, policy labs, behavioural insights, open data, e-government, agile, lean and New Public Management.

Here is what we learned.

  1. For the most part, we’re still in the early days

On average, our respondents don’t think that any of the methods have made it into truly productive use. In fact, for seven out of the ten innovations, the majority of respondents believed that these were indeed still in the “technology trigger” phase.

Assuming that these innovations will steadily make their way along the hype cycle, we should expect a lot more hype (as they enter the “peak of inflated expectations”) and a lot more disappointment (as they descend into the “trough of disillusionment”) going forward. Government innovation advocates should take heed.

  2. Policy Labs are believed to be in the “peak of inflated expectations”

This innovation attracted the highest level of disagreement from respondents. While almost two out of five people believe that policy labs are in the “technology trigger” phase, one out of five see them as having already reached the “slope of enlightenment”. On average, however, respondents believe policy labs to be in the “peak of inflated expectations”….

  3. Blockchain is seen as the most nascent government innovation

Our survey respondents rather unanimously believe that blockchain is at the very early stage of the “technology trigger” phase. Given that blockchain is often characterized as a solution in search of a problem, this view may not be surprising. The survey results also indicate that blockchain has a long way to go before it will be used productively in government, but there are several ways this can be done.

  4. Artificial intelligence inspires a lot of confidence (in some)
  5. New Public Management is – still – overhyped?… (More).

Digital transformation’s people problem


Jen Kelchner at Opensource.com: “…Arguably, the greatest chasm we see in our organizational work today is the actual transformation before, during, or after the implementation of a digital technology—because technology invariably crosses through and impacts people, processes, and culture. What are we transforming from? What are we transforming into? These are “people issues” as much as they are “technology issues,” but we too rarely acknowledge this.

Operating our organizations on open principles promises to spark new ways of thinking that can help us address this gap. Over the course of this three-part series, we’ll take a look at how the foundational principles of open play a major role in addressing the “people part” of digital transformation—and closing that gap before and during implementations.

The impact of digital transformation

The meaning of the term “digital transformation” has changed considerably in the last decade. For example, if you look at where organizations were in 2007, you’d watch them grapple with the first iPhone. The focus then was more on search engines, data mining, and methods of virtual collaboration.

A decade later in 2017, however, we’re investing in artificial intelligence, machine learning, and the Internet of Things. Our technologies have matured—but our organizational and cultural structures have not kept pace with them.

Value Co-creation In The Organizations of the Future, a recent research report from Aalto University, states that digital transformation has created opportunities to revolutionize and change existing business models, socioeconomic structures, legal and policy measures, organizational patterns, and cultural barriers. But we can only realize this potential if we address both the technological and the organizational aspects of digital transformation.

Four critical areas of digital transformation

Let’s examine four crucial elements involved in any digital transformation effort:

  • change management
  • the needs of the ecosystem
  • processes
  • silos

Any organization must address these four elements, ideally in advance of (or at least in conjunction with) the implementation of a new technology, if it is going to realize success and sustainability….(More)”.

We have unrealistic expectations of a tech-driven future utopia


Bob O’Donnell in RECODE: “No one likes to think about limits, especially in the tech industry, where the idea of putting constraints on almost anything is perceived as anathema.

In fact, the entire tech industry is arguably built on the concept of bursting through limitations and enabling things that weren’t possible before. New technology developments have clearly created incredible new capabilities and opportunities, and have generally helped improve the world around us.

But there does come a point — and I think we’ve arrived there — where it’s worth stepping back to both think about and talk about the potential value of, yes, technology limits … on several different levels.

On a technical level, we’ve reached a point where advances in computing applications like AI, or medical applications like gene splicing, are raising even more ethical questions than practical ones on issues such as how they work and for what applications they might be used. Not surprisingly, there aren’t any clear or easy answers to these questions, and it’s going to take a lot more time and thought to create frameworks or guidelines for both the appropriate and inappropriate uses of these potentially life-changing technologies.

Does this mean these kinds of technological advances should be stopped? Of course not. But having more discourse on the types of technologies that get created and released certainly needs to happen.

Even on a practical level, the need for limiting people’s expectations about what a technology can or cannot do is becoming increasingly important. With science-fiction-like advances becoming daily occurrences, it’s easy to fall into the trap of believing that there are no limits to what a given technology can do. As a result, people are increasingly willing to believe and accept almost any statement or prediction about the future of many increasingly well-known technologies, from autonomous driving to VR to AI and machine learning. I hate to say it, but it’s the fake news of tech.

Just as we’ve seen the fallout from fake news on all sides of the political perspective, so, too, are we starting to see that unbridled and unlimited expectations for certain new technologies are starting to have negative implications of their own. Essentially, we’re starting to build unrealistic expectations for a tech-driven nirvana that doesn’t clearly jibe with the realities of the modern world, particularly in the time frames that are often discussed….(More)”.

How AI Is Crunching Big Data To Improve Healthcare Outcomes


PSFK: “The state of your health shouldn’t be a mystery, nor should patients or doctors have to wait long to find answers to pressing medical concerns. In PSFK’s Future of Health Report, we dig deep into the latest in AI, big data algorithms and IoT tools that are enabling a new, more comprehensive overview of patient data collection and analysis. Machine support, patient information from medical records and conversations with doctors are combined with the latest medical literature to help form a diagnosis without detracting from doctor-patient relations.

Improved AI helps patients form a baseline for well-being and is driving change across the healthcare industry. AI not only streamlines intake processes and reduces processing volume at clinics; it also reduces input and diagnostic errors within patient records, allowing doctors to focus on patient care and communication rather than data entry. AI also improves pattern recognition and early diagnosis by learning from multiple patient data sets.

By utilizing deep learning algorithms and software, healthcare providers can connect various libraries of medical information and scan databases of medical records, spotting patterns that lead to more accurate detection and greater efficiency in medical diagnosis and research. IBM Watson, which has previously been used to help identify genetic markers and develop drugs, is applying its neural learning networks to help doctors correctly diagnose heart abnormalities from medical imaging tests. By scanning thousands of images and learning from correct diagnoses, Watson is able to increase diagnostic accuracy, supporting doctors’ cardiac assessments.
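
PSFK does not describe Watson's architecture; as a generic, hypothetical illustration of the kind of supervised image classifier such systems rely on, a toy model might look like this (shapes, data, and labels are stand-ins):

```python
# Toy illustration only, not IBM Watson: a small convolutional network
# of the general kind used to classify medical images as normal or
# abnormal. All shapes and data below are stand-ins.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 1)),       # a grayscale scan
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),   # estimated P(abnormal)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stand-in data; real systems learn from thousands of labeled studies.
images = np.random.rand(8, 128, 128, 1).astype("float32")
labels = np.random.randint(0, 2, size=(8, 1))
model.fit(images, labels, epochs=1, verbose=0)
print(model.predict(images[:1]))
```

A production system differs in nearly every detail, but the structure, many labeled images in and a probability out, is the core of the pattern-recognition claim.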

Outside of the doctor’s office, AI is also being used to monitor patient vitals to help create a baseline for well-being. By monitoring health on a day-to-day basis, AI systems can alert patients and medical teams to abnormalities or changes from the baseline in real time, increasing positive outcomes. Take xbird, a mobile platform that uses artificial intelligence to help diabetics understand when hypoglycemic attacks will occur. The AI combines personal and environmental data points from over 20 sensors within mobile and wearable devices to create an automated personal diary and cross-references it against blood sugar levels. Patients then share this data with their doctors in order to uncover their unique hypoglycemic triggers and better manage their condition.
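
xbird's actual models are proprietary; a minimal sketch of the general idea, a personal baseline plus real-time deviation alerts, could look like this (window size, threshold, and readings are hypothetical):

```python
# Hypothetical sketch, not xbird's algorithm: maintain a rolling
# personal baseline for a vital sign and alert when a new reading
# deviates sharply from it.
from collections import deque
from statistics import mean, stdev

class BaselineMonitor:
    def __init__(self, window=20, threshold=3.0):
        self.readings = deque(maxlen=window)  # recent readings only
        self.threshold = threshold            # alert at N standard deviations

    def add(self, value):
        if len(self.readings) >= 5:           # wait for a minimal baseline
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                print(f"Alert: reading {value} deviates from baseline {mu:.1f}")
        self.readings.append(value)

monitor = BaselineMonitor()
for glucose in [95, 102, 98, 100, 97, 99, 101, 62]:  # mg/dL, hypothetical
    monitor.add(glucose)
```

Real products substitute learned, per-patient models for the fixed threshold used here, but the baseline-and-alert loop is the part the excerpt credits with increasing positive outcomes.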

In China, meanwhile, web provider Baidu has debuted Melody, a chat-based medical assistant that helps individuals communicate their symptoms, learn of possible diagnoses and connect to medical experts….(More)”.