The law is adapting to a software-driven world


 in the Financial Times: “When the investor Marc Andreessen wrote in 2011 that ‘software is eating the world,’ his point was a contentious one. He argued that the boundary between technology companies and the rest of industry was becoming blurred, and that the ‘information economy’ would supplant the physical economy in ways that were not entirely obvious. Six years later, software’s dominance is a fact of life. What it has yet to eat, however, is the law. If almost every sector of society has been exposed to the headwinds of the digital revolution, governments and the legal profession have not. But that is about to change. The rise of complex software systems has led to new legal challenges. Take, for example, the artificial intelligence systems used in self-driving cars. Last year, the US Department of Transportation wrote to Google stating that the government would ‘interpret “driver” in the context of Google’s described motor-vehicle design’ as referring to the car’s artificial intelligence. So what does this mean for the future of law?

It means that regulations traditionally meant to govern the way that humans interact are adapting to a world that has been eaten by software, as Mr Andreessen predicted. And this is about much more than self-driving cars. Complex algorithms are used in mortgage and credit decisions, in the criminal justice and immigration systems and in the realm of national security, to name just a few areas. The outcome of this shift is unlikely to be more lawyers writing more memos. Rather, new laws will start to become more like software — embedded within applications as computer code. As technology evolves, interpreting the law itself will become more like programming software.

But there is more to this shift than technology alone. The fact is that law is both deeply opaque and unevenly accessible. The legal advice required to understand both what our governments are doing, and what our rights are, is only accessible to a select few. Studies suggest, for example, that an estimated 80 per cent of the legal needs of the poor in the US go unmet. To the average citizen, the inner workings of government have become more impenetrable over time. Granted, laws have been murky to average citizens for as long as governments have been around. But the level of disenchantment with institutions and the experts who run them is placing new pressures on governments to change their ways. The relationship between citizens and professionals — from lawyers to bureaucrats to climatologists — has become tinged with scepticism and suspicion. This mistrust is driven by the sense that society is stacked against those at the bottom — that knowledge is power, but that power costs money only a few can afford….(More)”.

Google DeepMind and healthcare in an age of algorithms


Julia Powles and Hal Hodson in Health and Technology: “Data-driven tools and techniques, particularly machine learning methods that underpin artificial intelligence, offer promise in improving healthcare systems and services. One of the companies aspiring to pioneer these advances is DeepMind Technologies Limited, a wholly-owned subsidiary of the Google conglomerate, Alphabet Inc. In 2016, DeepMind announced its first major health project: a collaboration with the Royal Free London NHS Foundation Trust, to assist in the management of acute kidney injury. Initially received with great enthusiasm, the collaboration has suffered from a lack of clarity and openness, with issues of privacy and power emerging as potent challenges as the project has unfolded. Taking the DeepMind-Royal Free case study as its pivot, this article draws a number of lessons on the transfer of population-derived datasets to large private prospectors, identifying critical questions for policy-makers, industry and individuals as healthcare moves into an algorithmic age….(More)”

Regulating by Robot: Administrative Decision Making in the Machine-Learning Era


Paper by Cary Coglianese and David Lehr: “Machine-learning algorithms are transforming large segments of the economy, underlying everything from product marketing by online retailers to personalized search engines, and from advanced medical imaging to the software in self-driving cars. As machine learning’s use has expanded across all facets of society, anxiety has emerged about the intrusion of algorithmic machines into facets of life previously dependent on human judgment. Alarm bells sounding over the diffusion of artificial intelligence throughout the private sector only portend greater anxiety about digital robots replacing humans in the governmental sphere.

A few administrative agencies have already begun to adopt this technology, while others have the clear potential in the near-term to use algorithms to shape official decisions over both rulemaking and adjudication. It is no longer fanciful to envision a future in which government agencies could effectively make law by robot, a prospect that understandably conjures up dystopian images of individuals surrendering their liberty to the control of computerized overlords. Should society be alarmed by governmental use of machine learning applications?

We examine this question by considering whether the use of robotic decision tools by government agencies can pass muster under core, time-honored doctrines of administrative and constitutional law. At first glance, the idea of algorithmic regulation might appear to offend one or more traditional doctrines, such as the nondelegation doctrine, procedural due process, equal protection, or principles of reason-giving and transparency.

We conclude, however, that when machine-learning technology is properly understood, its use by government agencies can comfortably fit within these conventional legal parameters. We recognize, of course, that the legality of regulation by robot is only one criterion by which its use should be assessed. Obviously, agencies should not apply algorithms cavalierly, even if doing so might not run afoul of the law, and in some cases, safeguards may be needed for machine learning to satisfy broader, good-governance aspirations. Yet in contrast with the emerging alarmism, we resist any categorical dismissal of a future administrative state in which key decisions are guided by, and even at times made by, algorithmic automation. Instead, we urge that governmental reliance on machine learning should be approached with measured optimism over the potential benefits such technology can offer society by making government smarter and its decisions more efficient and just….(More)”

Did artificial intelligence deny you credit?


 in The Conversation: “People who apply for a loan from a bank or credit card company, and are turned down, are owed an explanation of why that happened. It’s a good idea – because it can help teach people how to repair their damaged credit – and it’s a federal law, the Equal Credit Opportunity Act. Getting an answer wasn’t much of a problem in years past, when humans made those decisions. But today, as artificial intelligence systems increasingly assist or replace people making credit decisions, getting those explanations has become much more difficult.

Traditionally, a loan officer who rejected an application could tell a would-be borrower there was a problem with their income level, or employment history, or whatever the issue was. But computerized systems that use complex machine learning models are difficult to explain, even for experts.

Consumer credit decisions are just one way this problem arises. Similar concerns exist in health care, online marketing and even criminal justice. My own interest in this area began when a research group I was part of discovered gender bias in how online ads were targeted, but could not explain why it happened.

All those industries, and many others, who use machine learning to analyze processes and make decisions have a little over a year to get a lot better at explaining how their systems work. In May 2018, the new European Union General Data Protection Regulation takes effect, including a section giving people a right to get an explanation for automated decisions that affect their lives. What shape should these explanations take, and can we actually provide them?

Identifying key reasons

One way to describe why an automated decision came out the way it did is to identify the factors that were most influential in the decision. How much of a credit denial decision was because the applicant didn’t make enough money, or because he had failed to repay loans in the past?

My research group at Carnegie Mellon University, including PhD student Shayak Sen and then-postdoc Yair Zick, created a way to measure the relative influence of each factor. We call it Quantitative Input Influence (QII).

In addition to giving better understanding of an individual decision, the measurement can also shed light on a group of decisions: Did an algorithm deny credit primarily because of financial concerns, such as how much an applicant already owes on other debts? Or was the applicant’s ZIP code more important – suggesting more basic demographics such as race might have come into play?…(More)”
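The intuition behind such influence measures can be sketched in a few lines. The toy model and the resampling-based estimator below are illustrative assumptions, not the authors' actual QII implementation: a feature is influential to the extent that randomly re-sampling its value (while holding the others fixed) flips the model's decision.

```python
import random

# Toy credit model (an assumption for illustration): deny if income is
# too low or there are too many past defaults.
def model(applicant):
    return "deny" if applicant["income"] < 30 or applicant["defaults"] > 1 else "approve"

def input_influence(model, population, feature):
    """Estimate a feature's influence as the rate at which re-sampling
    that feature's value from the population flips the decision."""
    flips = 0
    for person in population:
        original = model(person)
        donor = random.choice(population)  # intervene on one feature only
        intervened = dict(person, **{feature: donor[feature]})
        if model(intervened) != original:
            flips += 1
    return flips / len(population)
```

Comparing `input_influence(model, population, "income")` against the same measure for `"defaults"` gives a rough answer to the question in the excerpt: which factor drove the denials across a group of decisions.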

Analytics Tools Could Be the Key to Effective Message-Driven Nudging


 in Government Technology: “Appealing to the nuances of the human mind has been a feature of effective governance for as long as governance has existed, appearing prominently in the prescriptions of every great political theorist from Plato to Machiavelli. The most recent and informed iteration of this practice is nudging: leveraging insights about how humans think from behavioral science to create initiatives that encourage desirable behaviors.

Public officials nudge in many ways. Some seek to modify people’s behavior by changing the environments in which they make decisions, for instance moving vegetables to the front of a grocery store to promote healthy eating. Others try to make desirable behaviors easier, like streamlining a city website to make it simpler to sign up for a service. Still others use prompts like email reminders of a deadline to receive a free checkup to nudge people to act wisely by providing useful information.

Thus far, examples of the third type of nudging — direct messaging that prompts behavior — have been decidedly low tech. Typical initiatives have included sending behaviorally informed letters to residents who have not complied with a city code or mailing out postcard reminders to renew license plates. Governments have been attracted to these initiatives for their low cost and proven effectiveness.

While these low-tech nudges should certainly continue, cities’ recent adoption of tools that can mine and analyze data instantaneously has the potential to greatly increase the scope and effectiveness of message-driven nudging.

For one, using Internet of Things (IoT) ecosystems, cities can provide residents with real-time information so that they may make better-informed decisions. For example, cities could connect traffic sensors to messaging systems and send subscribers text messages at times of high congestion, encouraging them to take public transportation. This real-time information, paired with other nudges, could increase transit use, easing traffic and bettering the environment…
Instantaneous data-mining tools may also prove useful for nudging citizens in real time, at the moments they are most likely to partake in detrimental behavior. Tools like machine learning can analyze users’ behavior and determine if they are likely to make a suboptimal choice, like leaving the website for a city service without enrolling. Using clickstream data, the site could determine if a user is likely to leave and deliver a nudge, for example sending a message explaining that most residents enroll in the service. This strategy provides another layer of nudging, catching residents who may have been influenced by an initial nudge — like a reminder to sign up for a service or streamlined website — but may need an extra prod to follow through….(More)”
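A minimal sketch of this kind of real-time, clickstream-triggered nudge might look like the following. The features, weights, threshold, and message are all illustrative assumptions, standing in for whatever a trained abandonment model would actually use:

```python
# Hypothetical sketch: deliver a social-proof nudge when a simple score
# suggests a visitor is about to leave an enrollment page.
def abandonment_score(session):
    """Crude stand-in for a trained model: the score rises with idle time
    and back-navigations, and falls once the form has been started."""
    score = 0.0
    score += min(session["idle_seconds"] / 60.0, 1.0) * 0.5
    score += min(session["back_clicks"] / 3.0, 1.0) * 0.3
    score -= 0.4 if session["form_started"] else 0.0
    return max(score, 0.0)

def maybe_nudge(session, threshold=0.5):
    """Return a nudge message for at-risk sessions, else None."""
    if abandonment_score(session) >= threshold:
        return "Most residents in your neighborhood have already enrolled."
    return None
```

In practice the hand-tuned score would be replaced by a model trained on historical clickstream data, but the trigger-and-message structure stays the same.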

Big data helps Belfort, France, allocate buses on routes according to demand


 in Digital Trends: “As modern cities smarten up, the priority for many will be transportation. Belfort, a mid-sized French industrial city of 50,000, serves as proof of concept for improved urban transportation that does not require the time and expense of covering the city with sensors and cameras.

Working with Tata Consultancy Services (TCS) and GFI Informatique, the Board of Public Transportation of Belfort overhauled bus service management of the city’s 100-plus buses. The project entailed a combination of ID cards, GPS-equipped card readers on buses, and big data analysis. The collected data was used to measure bus speed from stop to stop, passenger flow to observe when and where people got on and off, and bus route density. From start to finish, the proof of concept project took four weeks.

Using the TCS Intelligent Urban Exchange system, operations managers were able to detect when and where about 20 percent of all bus passengers boarded and got off on each city bus route. Utilizing big data and artificial intelligence, the city’s urban planners used that analysis to make cost-effective adjustments, including allocating additional buses to routes and times of greater passenger demand. They were also able to cut back on buses for minimally used routes and stops. In addition, the system provided feedback on the effect of city construction projects on bus service….
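The core of this kind of boarding analysis can be sketched simply. The field names and threshold below are assumptions for illustration (this is not the TCS Intelligent Urban Exchange API): card-reader tap events are aggregated into boardings per route, stop, and hour, and the busiest cells indicate where extra buses are needed.

```python
from collections import Counter

def boarding_density(events):
    """Count boardings per (route, stop, hour) from card-reader tap events."""
    counts = Counter()
    for e in events:
        counts[(e["route"], e["stop"], e["hour"])] += 1
    return counts

def overloaded(events, threshold):
    """Return the (route, stop, hour) cells whose boardings exceed the threshold."""
    return [key for key, n in boarding_density(events).items() if n > threshold]
```

The same aggregation, run on alighting events and joined with GPS timestamps, yields the stop-to-stop speed and passenger-flow measures the excerpt describes.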

Going forward, continued data analysis will help the city budget wisely for infrastructure changes and new equipment purchases. The goal is to put the money where the needs are greatest rather than just spending and then waiting to see if usage justified the expense. The push for smarter cities has to be not just about improved services, but also smart resource allocation — in the Belfort project, the use of big data showed how to do both….(More)”

Bit By Bit: Social Research in the Digital Age


Open Review of Book by Matthew J. Salganik: “In the summer of 2009, mobile phones were ringing all across Rwanda. In addition to the millions of calls between family, friends, and business associates, about 1,000 Rwandans received a call from Joshua Blumenstock and his colleagues. The researchers were studying wealth and poverty by conducting a survey of people who had been randomly sampled from a database of 1.5 million customers from Rwanda’s largest mobile phone provider. Blumenstock and colleagues asked the participants if they wanted to participate in a survey, explained the nature of the research to them, and then asked a series of questions about their demographic, social, and economic characteristics.

Everything I have said up until now makes this sound like a traditional social science survey. But, what comes next is not traditional, at least not yet. They used the survey data to train a machine learning model to predict someone’s wealth from their call data, and then they used this model to estimate the wealth of all 1.5 million customers. Next, they estimated the place of residence of all 1.5 million customers by using the geographic information embedded in the call logs. Putting these two estimates together—the estimated wealth and the estimated place of residence—Blumenstock and colleagues were able to produce high-resolution estimates of the geographic distribution of wealth across Rwanda. In particular, they could produce an estimated wealth for each of Rwanda’s 2,148 cells, the smallest administrative unit in the country.
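The two-step design — fit a model on the small surveyed sample, then apply it to the full customer database — can be sketched as follows. The single feature, the toy numbers, and the one-variable least-squares fit are illustrative assumptions standing in for Blumenstock and colleagues' actual features and machine-learning model:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Step 1: fit on the surveyed customers (toy data: call volume vs. a wealth index).
surveyed_calls = [10, 20, 30, 40]
surveyed_wealth = [1.0, 2.0, 3.0, 4.0]
a, b = fit_linear(surveyed_calls, surveyed_wealth)

# Step 2: apply the fitted model to every customer in the call-log database.
def predict_wealth(call_volume):
    return a + b * call_volume
```

Combining these predictions with each customer's estimated place of residence is what yields the high-resolution wealth map described below.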

It was impossible to validate these estimates because no one had ever produced estimates for such small geographic areas in Rwanda. But, when Blumenstock and colleagues aggregated their estimates to Rwanda’s 30 districts, they found that their estimates were similar to estimates from the Demographic and Health Survey, the gold standard of surveys in developing countries. Although these two approaches produced similar estimates in this case, the approach of Blumenstock and colleagues was about 10 times faster and 50 times cheaper than the traditional Demographic and Health Surveys. These dramatically faster and lower cost estimates create new possibilities for researchers, governments, and companies (Blumenstock, Cadamuro, and On 2015).

In addition to developing a new methodology, this study is kind of like a Rorschach inkblot test; what people see depends on their background. Many social scientists see a new measurement tool that can be used to test theories about economic development. Many data scientists see a cool new machine learning problem. Many business people see a powerful approach for unlocking value in the digital trace data that they have already collected. Many privacy advocates see a scary reminder that we live in a time of mass surveillance. Many policy makers see a way that new technology can help create a better world. In fact, this study is all of those things, and that is why it is a window into the future of social research….(More)”

UK’s Digital Strategy


Executive Summary: “This government’s Plan for Britain is a plan to build a stronger, fairer country that works for everyone, not just the privileged few. …Our digital strategy now develops this further, applying the principles outlined in the Industrial Strategy green paper to the digital economy. The UK has a proud history of digital innovation: from the earliest days of computing to the development of the World Wide Web, the UK has been a cradle for inventions which have changed the world. And from Ada Lovelace – widely recognised as the first computer programmer – to the pioneers of today’s revolution in artificial intelligence, the UK has always been at the forefront of invention. …

Maintaining the UK government as a world leader in serving its citizens online

From personalised services in health, to safer care for the elderly at home, to tailored learning in education and access to culture – digital tools, techniques and technologies give us more opportunities than ever before to improve the vital public services on which we all rely.

The UK is already a world leader in digital government, but we want to go further and faster. The new Government Transformation Strategy published on 9 February 2017 sets out our intention to serve the citizens and businesses of the UK with a better, more coherent experience when using government services online – one that meets the raised expectations set by the many other digital services and tools they use every day. So, we will continue to develop single cross-government platform services, including by working towards 25 million GOV.UK Verify users by 2020 and adopting new services onto the government’s GOV.UK Pay and GOV.UK Notify platforms.

We will build on the ‘Government as a Platform’ concept, ensuring we make greater reuse of platforms and components across government. We will also continue to move towards common technology, ensuring that where it is right we are consuming commodity hardware or cloud-based software instead of building something that is needlessly government specific.

We will also continue to work, across government and the public sector, to harness the potential of digital to radically improve the efficiency of our public services – enabling us to provide a better service to citizens and service users at a lower cost. In education, for example, we will address the barriers faced by schools in regions not connected to appropriate digital infrastructure and we will invest in the Network of Teaching Excellence in Computer Science to help teachers and school leaders build their knowledge and understanding of technology. In transport, we will make our infrastructure smarter, more accessible and more convenient for passengers. At Autumn Statement 2016 we announced that the National Productivity Investment Fund would allocate £450 million from 2018-19 to 2020-21 to trial digital signalling technology on the rail network. And in policing, we will enable police officers to use biometric applications to match fingerprint and DNA from scenes of crime and return results including records and alerts to officers over mobile devices at the crime scene.

Read more about digital government.

Unlocking the power of data in the UK economy and improving public confidence in its use

As part of creating the conditions for sustainable growth, we will take the actions needed to make the UK a world-leading data-driven economy, where data fuels economic and social opportunities for everyone, and where people can trust that their data is being used appropriately.

Data is a global commodity and we need to ensure that our businesses can continue to compete and communicate effectively around the world. To maintain our position at the forefront of the data revolution, we will implement the General Data Protection Regulation by May 2018. This will ensure a shared and higher standard of protection for consumers and their data.

Read more about data….(More)”

AI, machine learning and personal data


Jo Pedder at the Information Commissioner’s Office Blog: “Today sees the publication of the ICO’s updated paper on big data and data protection.

But why now? What’s changed in the two and a half years since we first visited this topic? Well, quite a lot actually:

  • big data is becoming the norm for many organisations, which use it to profile people and inform their decision-making processes, whether that’s to determine your car insurance premium or to accept/reject your job application;
  • artificial intelligence (AI) is stepping out of the world of science-fiction and into real life, providing the ‘thinking’ power behind virtual personal assistants and smart cars; and
  • machine learning algorithms are discovering patterns in data that traditional data analysis couldn’t hope to find, helping to detect fraud and diagnose diseases.

The complexity and opacity of these types of processing operations mean that it’s often hard to know what’s going on behind the scenes. This can be problematic when personal data is involved, especially when decisions are made that have significant effects on people’s lives. The combination of these factors has led some to call for new regulation of big data, AI and machine learning, to increase transparency and ensure accountability.

In our view though, whilst the means by which the processing of personal data are changing, the underlying issues remain the same. Are people being treated fairly? Are decisions accurate and free from bias? Is there a legal basis for the processing? These are issues that the ICO has been addressing for many years, through oversight of existing European data protection legislation….(More)”