Digital Decisions Tool


Center for Democracy and Technology (CDT): “Two years ago, CDT embarked on a project to explore what we call ‘digital decisions’ – the use of algorithms, machine learning, big data, and automation to make decisions that impact individuals and shape society. Industry and government are applying algorithms and automation to problems big and small, from reminding us to leave for the airport to determining eligibility for social services and even detecting deadly diseases. This new era of digital decision-making has created a new challenge: ensuring that decisions made by computers reflect values like equality, democracy, and justice. We want to ensure that big data and automation are used in ways that create better outcomes for everyone, and not in ways that disadvantage minority groups.

The engineers and product managers who design these systems are the first line of defense against unfair, discriminatory, and harmful outcomes. To help mitigate harm at the design level, we have launched the first public version of our digital decisions tool. We created the tool to help developers understand and mitigate unintended bias and ethical pitfalls as they design automated decision-making systems.

About the digital decisions tool

This interactive tool translates principles for fair and ethical automated decision-making into a series of questions that can be addressed during the process of designing and deploying an algorithm. The questions address developers’ choices, such as what data to use to train an algorithm, what factors or features in the data to consider, and how to test the algorithm. They also ask about the systems and checks in place to assess risk and ensure fairness. These questions should provoke thoughtful consideration of the subjective choices that go into building an automated decision-making system and how those choices could result in disparate outcomes and unintended harms.

The tool is informed by extensive research by CDT and others about how algorithms and machine learning work, how they’re used, the potential risks of using them to make important decisions, and the principles that civil society has developed to ensure that digital decisions are fair, ethical, and respectful of civil rights. Some of this research is summarized on CDT’s Digital Decisions webpage….(More)”.
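
As an aside for developers reading along: one of the tool’s questions, how to test an algorithm for disparate outcomes, can be made concrete in a few lines of code. The sketch below is hypothetical (it is not part of CDT’s tool; the function names and toy data are invented for illustration) and applies the common four-fifths rule of thumb to per-group selection rates:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-outcome rate per demographic group.

    predictions: iterable of 0/1 model decisions
    groups: iterable of group labels, aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths_rule(predictions, groups):
    """Heuristic disparate-impact check: every group's selection rate
    should be at least 80% of the highest group's rate."""
    rates = selection_rates(predictions, groups)
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Toy data: group "b" receives the positive outcome far less often.
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b", "a", "b"]
print(selection_rates(preds, groups))          # {'a': 0.8, 'b': 0.2}
print(passes_four_fifths_rule(preds, groups))  # False -> investigate
```

A failed check is a prompt for exactly the kind of thoughtful consideration the tool asks for, not proof of discrimination; the threshold is a screening heuristic, not a legal test.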

Rage against the machines: is AI-powered government worth it?


Maëlle Gavet at the WEF: “…the Australian government’s new ‘data-driven profiling’ trial for drug testing welfare recipients, to US law enforcement’s use of facial recognition technology and the deployment of proprietary software in sentencing in many US courts … almost by stealth and with remarkably little outcry, technology is transforming the way we are policed, categorized as citizens and, perhaps one day soon, governed. We are only in the earliest stages of so-called algorithmic regulation — intelligent machines deploying big data, machine learning and artificial intelligence (AI) to regulate human behaviour and enforce laws — but it already has profound implications for the relationship between private citizens and the state….

Some may herald this as democracy rebooted. In my view it represents nothing less than a threat to democracy itself — and deep scepticism should prevail. There are five major problems with bringing algorithms into the policy arena:

  1. Self-reinforcing bias…
  2. Vulnerability to attack…
  3. Who’s calling the shots?…
  4. Are governments up to it?…
  5. Algorithms don’t do nuance….

All the problems notwithstanding, there’s little doubt that AI-powered government of some kind will happen. So, how can we avoid it becoming the stuff of bad science fiction? To begin with, we should leverage AI to explore positive alternatives instead of just applying it to support traditional solutions to society’s perceived problems. Rather than simply finding and sending criminals to jail faster in order to protect the public, how about using AI to figure out the effectiveness of other potential solutions? Offering young adult literacy, numeracy and other skills might well represent a far superior and more cost-effective solution to crime than more aggressive law enforcement. Moreover, AI should always be used at a population level, rather than at the individual level, in order to avoid stigmatizing people on the basis of their history, their genes and where they live. The same goes for the more subtle, yet even more pervasive data-driven targeting by prospective employers, health insurers, credit card companies and mortgage providers. While the commercial imperative for AI-powered categorization is clear, when it targets individuals it amounts to profiling with the inevitable consequence that entire sections of society are locked out of opportunity….(More)”.

Digital transformation’s people problem


Jen Kelchner at Opensource.com: “…Arguably, the greatest chasm we see in our organizational work today is the actual transformation before, during, or after the implementation of a digital technology—because technology invariably crosses through and impacts people, processes, and culture. What are we transforming from? What are we transforming into? These are ‘people issues’ as much as they are ‘technology issues,’ but we too rarely acknowledge this.

Operating our organizations on open principles promises to spark new ways of thinking that can help us address this gap. Over the course of this three-part series, we’ll take a look at how the foundational principles of open play a major role in addressing the ‘people part’ of digital transformation—and closing that gap before and during implementations.

The impact of digital transformation

The meaning of the term ‘digital transformation’ has changed considerably in the last decade. For example, if you look at where organizations were in 2007, you’d see them grappling with the first iPhone. The focus then was more on search engines, data mining, and methods of virtual collaboration.

A decade later in 2017, however, we’re investing in artificial intelligence, machine learning, and the Internet of Things. Our technologies have matured—but our organizational and cultural structures have not kept pace with them.

Value Co-creation in the Organizations of the Future, a recent research report from Aalto University, states that digital transformation has created opportunities to revolutionize and change existing business models, socioeconomic structures, legal and policy measures, organizational patterns, and cultural barriers. But we can only realize this potential if we address both the technological and the organizational aspects of digital transformation.

Four critical areas of digital transformation

Let’s examine four crucial elements involved in any digital transformation effort:

  • change management
  • the needs of the ecosystem
  • processes
  • silos

Any organization must address these four elements, ideally in advance of (or at least in conjunction with) the implementation of a new technology, if it is going to realize success and sustainability….(More)”.

We have unrealistic expectations of a tech-driven future utopia


Bob O’Donnell in RECODE: “No one likes to think about limits, especially in the tech industry, where the idea of putting constraints on almost anything is perceived as anathema.

In fact, the entire tech industry is arguably built on the concept of bursting through limitations and enabling things that weren’t possible before. New technology developments have clearly created incredible new capabilities and opportunities, and have generally helped improve the world around us.

But there does come a point — and I think we’ve arrived there — where it’s worth stepping back to both think about and talk about the potential value of, yes, technology limits … on several different levels.

On a technical level, we’ve reached a point where advances in computing applications like AI, or medical applications like gene splicing, are raising even more ethical questions than practical ones on issues such as how they work and for what applications they might be used. Not surprisingly, there aren’t any clear or easy answers to these questions, and it’s going to take a lot more time and thought to create frameworks or guidelines for both the appropriate and inappropriate uses of these potentially life-changing technologies.

Does this mean these kinds of technological advances should be stopped? Of course not. But having more discourse on the types of technologies that get created and released certainly needs to happen.

Even on a practical level, the need for limiting people’s expectations about what a technology can or cannot do is becoming increasingly important. With science-fiction-like advances becoming daily occurrences, it’s easy to fall into the trap of believing that there are no limits to what a given technology can do. As a result, people are increasingly willing to believe and accept almost any kind of statement or prediction about the future of many increasingly well-known technologies, from autonomous driving to VR to AI and machine learning. I hate to say it, but it’s the fake news of tech.

Just as we’ve seen the fallout from fake news on all sides of the political perspective, so, too, are we starting to see that unbridled and unlimited expectations for certain new technologies are starting to have negative implications of their own. Essentially, we’re starting to build unrealistic expectations for a tech-driven nirvana that doesn’t clearly jibe with the realities of the modern world, particularly in the time frames that are often discussed….(More)”.

The DeepMind debacle demands dialogue on data


Hetan Shah in Nature: “Without public approval, advances in how we use data will stall. That is why a regulator’s ruling against the operator of three London hospitals is about more than mishandling records from 1.6 million patients. It is a missed opportunity to have a conversation with the public about appropriate uses for their data….

What can be done to address this deficit? Beyond meeting legal standards, all relevant institutions must take care to show themselves trustworthy in the eyes of the public. The lapses of the Royal Free hospitals and DeepMind provide, by omission, valuable lessons.

The first is to be open about what data are transferred. The extent of data transfer between the Royal Free and DeepMind came to light through investigative journalism. In my opinion, had the project proceeded under open contracting, it would have been subject to public scrutiny, and to questions about whether a company owned by Google — often accused of data monopoly — was best suited to create a relatively simple app.

The second lesson is that data transfer should be proportionate to the task. Information-sharing agreements should specify clear limits. It is unclear why an app for kidney injury requires the identifiable records of every patient seen by three hospitals over a five-year period.

Finally, governance mechanisms must be strengthened. It is shocking to me that the Royal Free did not assess the privacy impact of its actions before handing over access to records. DeepMind does deserve credit for (belatedly) setting up an independent review panel for health-care projects, especially because the panel has a designated budget and has not required members to sign non-disclosure agreements. (The two groups also agreed a new contract late last year, after criticism.)

More is needed. The Information Commissioner asked the Royal Free to improve its processes but did not fine it or require it to rescind data. This rap on the knuckles is unlikely to deter future, potentially worse, misuses of data. People are aware of the potential for over-reach, from the US government’s demands for state voter records to the Chinese government’s alleged plans to create a ‘social credit’ system that would monitor private behaviour.

Innovations such as artificial intelligence, machine learning and the Internet of Things offer great opportunities, but will falter without a public consensus around the role of data. To develop this, all data collectors and crunchers must be open and transparent. Consider how public confidence in genetic modification was lost in Europe, and how that has set back progress.

Public dialogue can build trust through collaborative efforts. A 14-member Citizens’ Reference Panel on health technologies was convened in Ontario, Canada, in 2009. The Engage2020 programme incorporates societal input in the Horizon2020 stream of European Union science funding….(More)”

Lessons from Airbnb and Uber to Open Government as a Platform


Interview by Marquis Cabrera with Sangeet Paul Choudary: “…Platform companies have a very strong core built around data, machine learning, and a central infrastructure. But they rapidly innovate around it to try and test new things in the market, and that helps them open themselves up to further innovation in the ecosystem. Governments can learn to become more modular and more agile, the way platform companies are. Modularity in architecture is a very fundamental part of being a platform company, both in terms of your organizational architecture and your business model architecture.

The second thing that governments can learn from a platform company is that successful platform companies are created with intent. They are not created by just opening out what you have available. If you look at the current approach of applying platform thinking in government, it is often just to take data and open it out to the world. However, successful platform companies first create a shaping strategy to craft a direction and vision for the ecosystem in terms of what participants can achieve by being on the platform. They then provision the right tools and services that serve that vision and enable success for the ecosystem. And only then do they open up their infrastructure. It’s really important that you craft the right shaping strategy and use it to define the right tools and services before you start pursuing a platform implementation.

In my work with governments, I regularly find myself stressing the importance of thinking as a market maker rather than as a service provider. Governments have always been market makers but when it comes to technology, they often take the service provider approach.

In your book, you used San Francisco City Government and Data.gov as examples of infusing platform thinking in government. But what are some examples of governments and countries around the world infusing platform thinking?

One of the best examples is from my home country, Singapore, which has been at the forefront of converting the nation into a platform. It has been pursuing platform strategy both overall, by building a smart nation platform, and within verticals. If you look particularly at mobility and transportation, it has worked to create a central core platform and then build greater autonomy around how mobility and transportation work in the country. Other good examples are Dubai, South Korea, and Barcelona, countries and cities that have applied the concept of platforms very well to build smart nation and smart city platforms. India is another example, applying platform thinking through the creation of the India Stack, though the implementation could benefit from better platform governance structures and more open regulation around participation….(More)”.

Volunteers teach AI to spot slavery sites from satellite images


This data will then be used to train machine learning algorithms to automatically recognise brick kilns in satellite imagery. If computers can pinpoint the location of such possible slavery sites, then the coordinates could be passed to local charities to investigate, says Kevin Bales, the project leader at the University of Nottingham, UK.

South Asian brick kilns are notorious as modern-day slavery sites. There are an estimated 5 million people working in brick kilns in South Asia, and of those nearly 70 per cent are thought to be working there under duress – often to pay off financial debts.

However, no one is quite sure how many of these kilns there are in the so-called “Brick Belt”, a region that stretches across parts of Pakistan, India and Nepal. Some estimates put the figure at 20,000, but it may be as high as 50,000.

Bales is hoping that his machine learning approach will produce a more accurate figure and help organisations on the ground know where to direct their anti-slavery efforts.

“It’s great to have a tool for identifying possible forced labour sites,” says Sasha Jesperson at St Mary’s University in London. But it is just a start – to really find out how many people are being enslaved in the brick kiln industry, investigators still need to visit every site and work out exactly what’s going on there, she says….

So far, volunteers have identified over 4000 potential slavery sites across 400 satellite images taken via Google Earth. Once these have been checked several times by volunteers, Bales plans to use these images to teach the machine learning algorithm what kilns look like, so that it can learn to recognise them in images automatically….(More)”.
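
The article does not describe the model itself, but the general approach, training a small convolutional classifier on labelled satellite tiles, can be sketched roughly as follows. Everything here is an assumption made for illustration (tile size, architecture, hyperparameters), and random tensors stand in for the volunteer-checked Google Earth crops:

```python
import torch
import torch.nn as nn

class KilnClassifier(nn.Module):
    """Tiny CNN that labels a satellite tile as kiln / not-kiln."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # logits: [not-kiln, kiln]

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))

# Stand-ins for volunteer-labelled 64x64 RGB tiles and their 0/1 labels.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 2, (32,))

model = KilnClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # a real run would iterate over a DataLoader
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The multiple volunteer checks mentioned above matter here: label noise in the training tiles would translate directly into kilns missed, or into false alarms sent to charities on the ground.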

A.I. experiments (with Google)


About: “With all the exciting A.I. stuff happening, there are lots of people eager to start tinkering with machine learning technology. A.I. Experiments is a showcase for simple experiments that let anyone play with this technology in hands-on ways, through pictures, drawings, language, music, and more.

Submit your own

We want to make it easier for any coder – whether you have a machine learning background or not – to create your own experiments. This site includes open-source code and resources to help you get started. If you make something you’d like to share, we’d love to see it and possibly add it to the showcase….(More)”

Big Data, Data Science, and Civil Rights


Paper by Solon Barocas, Elizabeth Bradley, Vasant Honavar, and Foster Provost: “Advances in data analytics bring with them civil rights implications. Data-driven and algorithmic decision making increasingly determine how businesses target advertisements to consumers, how police departments monitor individuals or groups, how banks decide who gets a loan and who does not, how employers hire, how colleges and universities make admissions and financial aid decisions, and much more. As data-driven decisions increasingly affect every corner of our lives, there is an urgent need to ensure they do not become instruments of discrimination, barriers to equality, threats to social justice, and sources of unfairness. In this paper, we argue for a concrete research agenda aimed at addressing these concerns, comprising five areas of emphasis: (i) Determining if models and modeling procedures exhibit objectionable bias; (ii) Building awareness of fairness into machine learning methods; (iii) Improving the transparency and control of data- and model-driven decision making; (iv) Looking beyond the algorithm(s) for sources of bias and unfairness—in the myriad human decisions made during the problem formulation and modeling process; and (v) Supporting the cross-disciplinary scholarship necessary to do all of that well…(More)”.
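
As one hedged illustration of area (i), an audit can compare error rates rather than overall accuracy, since a model’s aggregate accuracy can hide the fact that nearly all of its errors fall on one group. The helper below is invented for this example and is not from the paper:

```python
def error_rates_by_group(y_true, y_pred, groups):
    """False-positive and false-negative rates per group
    (an equalized-odds-style audit; illustrative only)."""
    stats = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        negatives = sum(1 for i in idx if y_true[i] == 0)
        positives = sum(1 for i in idx if y_true[i] == 1)
        stats[g] = {
            "fpr": fp / negatives if negatives else float("nan"),
            "fnr": fn / positives if positives else float("nan"),
        }
    return stats

# Toy audit: the model is perfect for group "a" and always wrong for "b",
# yet its overall accuracy is a respectable-looking 50%.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(error_rates_by_group(y_true, y_pred, groups))
# {'a': {'fpr': 0.0, 'fnr': 0.0}, 'b': {'fpr': 1.0, 'fnr': 1.0}}
```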

Slave to the Algorithm? Why a ‘Right to Explanation’ is Probably Not the Remedy You are Looking for


Paper by Lilian Edwards and Michael Veale: “Algorithms, particularly of the machine learning (ML) variety, are increasingly consequential to individuals’ lives but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a ‘right to an explanation’ has emerged as a compellingly attractive remedy since it intuitively presents as a means to ‘open the black box’, hence allowing individual challenge and redress, as well as possibilities to foster accountability of ML systems. In the general furore over algorithmic bias and other issues laid out in section 2, any remedy in a storm has looked attractive.

However, we argue that a right to an explanation in the GDPR is unlikely to be a complete remedy to algorithmic harms, particularly in some of the core ‘algorithmic war stories’ that have shaped recent attitudes in this domain. We present several reasons for this conclusion. First (section 3), the law is restrictive on when any explanation-related right can be triggered, and in many places is unclear, or even seems paradoxical. Second (section 4), even were some of these restrictions to be navigated, the way that explanations are conceived of legally — as ‘meaningful information about the logic of processing’ — is unlikely to be provided by the kind of ML ‘explanations’ computer scientists have been developing. ML explanations are restricted both by the type of explanation sought, the multi-dimensionality of the domain and the type of user seeking an explanation. However (section 5), ‘subject-centric’ explanations (SCEs), which restrict explanations to particular regions of a model around a query, show promise for interactive exploration, as do pedagogical rather than decompositional explanations in dodging developers’ worries of IP or trade secrets disclosure.

As an interim conclusion then, while convinced that recent research in ML explanations shows promise, we fear that the search for a ‘right to an explanation’ in the GDPR may be at best distracting, and at worst nurture a new kind of ‘transparency fallacy’. However, in our final section, we argue that other parts of the GDPR related (i) to other individual rights including the right to erasure (‘right to be forgotten’) and the right to data portability and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds of building a better, more respectful and more user-friendly algorithmic society….(More)”
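
To give a flavour of what a subject-centric explanation can look like in code: the minimal sketch below is in the spirit of local surrogate methods such as LIME, not the authors’ own proposal, and the black-box model is an invented stand-in. It fits a weighted linear model only in a small region around one query point and reports those local weights as the explanation:

```python
import numpy as np

def black_box(X):
    """Stand-in for an opaque ML model: returns a score per row."""
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - X[:, 1] ** 3)))

def subject_centric_explanation(model, x_query, n_samples=500, radius=0.3):
    """Fit a weighted linear surrogate to the model around one query point.
    The coefficients explain the model's behaviour locally, not globally."""
    rng = np.random.default_rng(0)
    # Perturb the query point within a small neighbourhood.
    X = x_query + rng.normal(scale=radius, size=(n_samples, x_query.size))
    y = model(X)
    # Weight samples by proximity to the query (closer = more influence).
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / radius ** 2)
    # Weighted least squares with an intercept column.
    sw = np.sqrt(w)
    A = np.hstack([X, np.ones((n_samples, 1))]) * sw[:, None]
    coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
    return coef[:-1]  # per-feature local weights (intercept dropped)

x = np.array([0.5, -1.0])
print(subject_centric_explanation(black_box, x))
# Roughly: feature 0 pushes the score up, while feature 1's weight reflects
# what the cubic term looks like near this particular query point.
```

The same model yields different weights for different query points, which is exactly what makes the explanation subject-centric rather than a disclosure of the whole model.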