White House Challenges Artificial Intelligence Experts to Reduce Incarceration Rates


Jason Shueh at GovTech: “The U.S. spends $270 billion on incarceration each year, has a prison population of about 2.2 million and an incarceration rate that’s spiked 220 percent since the 1980s. But with the advent of data science, White House officials are asking experts for help.

On Tuesday, June 7, the White House Office of Science and Technology Policy’s Lynn Overmann, who also leads the White House Police Data Initiative, stressed the severity of the nation’s incarceration crisis while asking a crowd of data scientists and artificial intelligence specialists for aid.

“We have built a system that is too large, and too unfair and too costly — in every sense of the word — and we need to start to change it,” Overmann said, speaking at a Computing Community Consortium public workshop.

She argued that the U.S., the country with the highest number of incarcerated citizens in the world, needs systemic reforms: data tools to process alleged offenders, and policy-level changes to ensure fair and measured sentences. As a longtime counselor, advisor and analyst for the Justice Department and at the city and state levels, Overmann said she has studied and witnessed an alarming number of issues involving bias and unwarranted punishment.

For instance, she said that statistically, while drug use is about equal between African-Americans and Caucasians, African-Americans are more likely to be arrested and convicted. They also receive longer prison sentences compared to Caucasian inmates convicted of the same crimes….

Data and digital tools can help curb such pitfalls by increasing efficiency, transparency and accountability, she said.

“We think these types of data exchanges [between officials and technologists] can actually be hugely impactful if we can figure out how to take this information and operationalize it for the folks who run these systems,” Overmann noted.

The opportunities to apply artificial intelligence and data analytics, she said, might include improving questions on parole screenings, analyzing police body camera footage, and applying analytics to criminal justice data for legislators and policy workers….

If the private sector is any indication, artificial intelligence and machine learning techniques could be used to interpret this new and vast supply of law enforcement data. In an earlier presentation, Eric Horvitz, the managing director at Microsoft Research, showcased how the company has applied artificial intelligence to vision and language to interpret live video content for the blind. The app, titled Seeing AI, can translate live video footage, captured from an iPhone or a pair of smart glasses, into instant audio messages for the visually impaired. Twitter’s live-streaming app Periscope has employed similar technology to guide users to the right content….(More)”
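
Microsoft has not published Seeing AI’s internals, but the basic shape of such a vision-to-audio pipeline is easy to sketch. In the Python sketch below, the captioning and speech helpers are hypothetical stand-ins for real models, not the product’s actual code:

```python
# Minimal sketch of a vision-to-audio assistive pipeline like the one
# described above. The captioning and speech helpers are hypothetical
# placeholders, not Microsoft's Seeing AI internals.
import cv2  # OpenCV, for camera capture


def describe_frame(frame) -> str:
    # Placeholder: a real app would run an image-captioning model here.
    return "a person crossing the street"  # stubbed caption


def speak(text: str) -> None:
    # Placeholder: a real app would hand the caption to a speech engine.
    print(f"[audio] {text}")


def run_assistant(camera_index: int = 0, every_nth: int = 30) -> None:
    capture = cv2.VideoCapture(camera_index)
    frames_seen = 0
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        frames_seen += 1
        if frames_seen % every_nth == 0:  # throttle captioning to ~1 fps
            speak(describe_frame(frame))
    capture.release()
```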

Fan Favorites


Erin Reilly at Strategy + Business: “…In theory, new technological advances such as big data and machine learning, combined with more direct access to audience sentiment, behaviors, and preferences via social media and over-the-top delivery channels, give the entertainment and media industry unprecedented insight into what the audience actually wants. But as a professional in the television industry put it, “We’re drowning in data and starving for insights.” Just as my data trail didn’t trace an accurate picture of my true interest in soccer, no data set can quantify all that consumers are as humans. At USC’s Annenberg Innovation Lab, our research has led us to an approach that blends data collection with a deep understanding of the social and cultural context in which the data is created. This can be a powerful practice for helping researchers understand the behavior of fans — fans of sports, brands, celebrities, and shows.

A Model for Understanding Fans

Marketers and creatives often see audiences and customers as passive assemblies of listeners or spectators. But we believe it’s more useful to view them as active participants. The best analogy may be fans. Broadly characterized, fans have a continued connection with the property they are passionate about. Some are willing to declare their affinity through engagement, some have an eagerness to learn more about their passion, and some want to connect with others who share their interests. Fans are emotionally linked to the object of their passion, and experience their passion through their own subjective lenses. We all start out as audience members. But sometimes, when the combination of factors aligns in just the right way, we become engaged as fans.

For businesses, the key to building this engagement and solidifying the relationship is understanding the different types of fan motivations in different contexts, and learning how to turn the data gathered about them into actionable insights. Even if Jane Smith and her best friend are fans of the same show, the same team, or the same brand, they’re likely passionate for different reasons. For example, some viewers may watch the ABC melodrama Scandal because they’re fashionistas and can’t wait to see the newest wardrobe of star Kerry Washington; others may do so because they’re obsessed with politics and want to see how the newly introduced Donald Trump–like character will behave. And those differences mean fans will respond in varied ways to different situations and content.

Though traditional demographics may give us basic information about who fans are and where they’re located, current methods of understanding and measuring engagement are missing the answers to two essential questions: (1) Why is a fan motivated? and (2) What triggers the fan’s behavior? Our Innovation Lab research group is developing a new model called Leveraging Engagement, which can be used as a framework when designing media strategy….(More)”

Big Crisis Data: Social Media in Disasters and Time-Critical Situations


Book by Carlos Castillo: “Social media is an invaluable source of time-critical information during a crisis. However, emergency response and humanitarian relief organizations that would like to use this information struggle with an avalanche of social media messages that exceeds human capacity to process. Emergency managers, decision makers, and affected communities can make sense of social media through a combination of machine computation and human compassion – expressed by thousands of digital volunteers who publish, process, and summarize potentially life-saving information. This book brings together computational methods from many disciplines: natural language processing, semantic technologies, data mining, machine learning, network analysis, human-computer interaction, and information visualization, focusing on methods that are commonly used for processing social media messages under time-critical constraints, and offering more than 500 references to in-depth information…(More)”
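
To make the book’s subject concrete, here is a minimal sketch of crisis-message triage with a standard text classifier. The labels and example messages are invented for illustration; real systems train on large labeled crisis corpora:

```python
# Minimal sketch of crisis-message triage with a linear text classifier,
# in the spirit of the methods the book surveys. Labels and examples are
# invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_messages = [
    "bridge collapsed on highway 9, people trapped",
    "we need drinking water and blankets at the shelter",
    "thoughts and prayers for everyone affected",
    "road to the hospital is blocked by debris",
]
train_labels = ["infrastructure", "request", "sympathy", "infrastructure"]

triage = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
triage.fit(train_messages, train_labels)

print(triage.predict(["urgent: need insulin at central school shelter"]))
# A linear model keeps per-message inference fast, which matters when
# messages arrive far faster than humans can read them.
```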

AI lawyer speeds up legal research


Springwise: “Lawyers have to maintain and recall vast amounts of information in the form of legislation, case law and secondary cases, and they spend up to a fifth of their time on legal research. But an AI app called Ross Intelligence could soon help with that. The program, which is built on IBM’s supercomputer Watson, uses natural language processing to answer legal questions in a fraction of the time that it would take a legal assistant.

To begin, legal professionals can ask Ross a question as they would ask a colleague. Then the program reads through the entire body of law and returns a cited answer as well as topical readings. Ross also monitors the law constantly to keep the user updated about changes that might affect their case, so they don’t need to sift through the mass of legal news….(More)”
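
Ross’s pipeline is proprietary, but retrieval-based question answering of this general kind can be sketched in a few lines. In the sketch below the toy corpus and case names are invented:

```python
# Minimal sketch of retrieval-style legal question answering: find the
# passages most similar to the question and return them with citations.
# This illustrates the general technique only; Ross's actual Watson-based
# pipeline is not public. The case names are fictional.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = {
    "Smith v. Jones (1999)": "A contract requires offer, acceptance, and consideration.",
    "Doe v. Roe (2004)": "Consideration may be nominal but must be bargained for.",
    "In re Acme (2011)": "Electronic signatures satisfy the writing requirement.",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(passages.values())

def answer(question: str, top_k: int = 2):
    scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
    ranked = sorted(zip(passages, scores), key=lambda p: p[1], reverse=True)
    return ranked[:top_k]  # (citation, relevance) pairs

print(answer("Is nominal consideration enough to form a contract?"))
```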

Transparency reports make AI decision-making accountable


Phys.org: “Machine-learning algorithms increasingly make decisions about credit, medical diagnoses, personalized recommendations, advertising and job opportunities, among other things, but exactly how they do so usually remains a mystery. Now, new measurement methods developed by Carnegie Mellon University researchers could provide important insights into this process.

Was it a person’s age, gender or education level that had the most influence on a decision? Was it a particular combination of factors? CMU’s Quantitative Input Influence (QII) measures can provide the relative weight of each factor in the final decision, said Anupam Datta, associate professor of computer science and electrical and computer engineering.
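
The published QII work defines influence via randomized interventions on inputs. A simplified version of that idea, not CMU’s actual implementation, can be sketched as follows: randomize one feature at a time and measure how often the model’s decision changes.

```python
# Simplified sketch of the intervention idea behind input-influence
# measures: shuffle one feature at a time (breaking its tie to the other
# features) and record how often the model's decisions change. A loose
# take on unary QII, not CMU's actual implementation.
import numpy as np

def unary_influence(predict, X: np.ndarray, feature: int,
                    n_rounds: int = 100, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    baseline = predict(X)
    changed = 0.0
    for _ in range(n_rounds):
        X_int = X.copy()
        # Replace the column with a draw from its marginal distribution.
        X_int[:, feature] = rng.permutation(X[:, feature])
        changed += np.mean(predict(X_int) != baseline)
    return changed / n_rounds  # average fraction of decisions that flip

# Usage with any fitted classifier `clf` and feature matrix X:
# influence = {f: unary_influence(clf.predict, X, f) for f in range(X.shape[1])}
```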

“Demands for algorithmic transparency are increasing as the use of algorithmic decision-making systems grows and as people realize the potential of these systems to introduce or perpetuate racial or sex discrimination or other social harms,” Datta said.

“Some companies are already beginning to provide transparency reports, but work on the computational foundations for these reports has been limited,” he continued. “Our goal was to develop measures of the degree of influence of each factor considered by a system, which could be used to generate transparency reports.”

These reports might be generated in response to a particular incident—why an individual’s loan application was rejected, why police targeted an individual for scrutiny, or what prompted a particular medical diagnosis or treatment. Or they might be used proactively by an organization to see if an artificial intelligence system is working as desired, or by a regulatory agency to see whether a decision-making system inappropriately discriminated between groups of people….(More)”

Robot Regulators Could Eliminate Human Error


From the San Francisco Chronicle and RegBlog: “Long a fixture of science fiction, artificial intelligence is now part of our daily lives, even if we do not realize it. Through the use of sophisticated machine learning algorithms, for example, computers now work to filter out spam messages automatically from our email. Algorithms also identify us by our photos on Facebook, match us with new friends on online dating sites, and suggest movies to watch on Netflix.

These uses of artificial intelligence hardly seem very troublesome. But should we worry if government agencies start to use machine learning?

Complaints abound even today about the uncaring “bureaucratic machinery” of government. Yet seeing how machine learning is starting to replace jobs in the private sector, we can easily fathom a literal machinery of government in which decisions once made by human public servants are increasingly made by machines.

Technologists warn of an impending “singularity,” when artificial intelligence surpasses human intelligence. Entrepreneur Elon Musk cautions that artificial intelligence poses one of our “biggest existential threats.” Renowned physicist Stephen Hawking eerily forecasts that artificial intelligence might even “spell the end of the human race.”

Are we ready for a world of regulation by robot? Such a world is closer than we think—and it could actually be worth welcoming.

Already government agencies rely on machine learning for a variety of routine functions. The Postal Service uses learning algorithms to sort mail, and cities such as Los Angeles use them to time their traffic lights. But while uses like these seem relatively benign, consider that machine learning could also be used to make more consequential decisions. Disability claims might one day be processed automatically with the aid of artificial intelligence. Licenses could be awarded to airplane pilots based on what kinds of safety risks complex algorithms predict each applicant poses.

Learning algorithms are already being explored by the Environmental Protection Agency to help make regulatory decisions about what toxic chemicals to control. Faced with tens of thousands of new chemicals that could potentially be harmful to human health, federal regulators have supported the development of a program to prioritize which of the many chemicals in production should undergo the more in-depth testing. By some estimates, machine learning could save the EPA up to $980,000 per toxic chemical positively identified.
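
A hedged sketch of what such prioritization might look like, with invented features and data standing in for real chemical descriptors and lab results:

```python
# Minimal sketch of ML-assisted prioritization like the EPA effort
# described above: score untested chemicals by predicted toxicity and
# test the riskiest first. Features, data, and model choice are all
# invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Hypothetical descriptors (e.g. molecular weight, solubility, reactivity)
X_tested = rng.random((200, 3))
y_toxic = (X_tested[:, 2] > 0.7).astype(int)  # stand-in lab results

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_tested, y_toxic)

X_untested = rng.random((1000, 3))
risk = model.predict_proba(X_untested)[:, 1]  # P(toxic) per chemical
priority_order = np.argsort(risk)[::-1]       # most suspect first
print("Top 10 candidates for in-depth testing:", priority_order[:10])
```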

It’s not hard then to imagine a day in which even more regulatory decisions are automated. Researchers have shown that machine learning can lead to better outcomes when determining whether parolees ought to be released or domestic violence orders should be imposed. Could the imposition of regulatory fines one day be determined by a computer instead of a human inspector or judge? Quite possibly so, and this would be a good thing if machine learning could improve accuracy, eliminate bias and prejudice, and reduce human error, all while saving money.

But can we trust a government that bungled the initial rollout of Healthcare.gov to deploy artificial intelligence responsibly? In some circumstances we should….(More)”

The Biggest Hope for Ending Corruption Is Open Public Contracting


Gavin Hayman at the Huffington Post: “This week the British Prime Minister David Cameron is hosting an international anti-corruption summit. The scourge of anonymous shell companies and hidden identities rightly seizes the public’s imagination. We can all picture the suitcases of cash and tropical islands involved. As well as acting on offshore and onshore money laundering havens, world leaders at the summit should also be asking themselves where all this money is being stolen from in the first place.

The answer is mostly from public contracting: government spending through private companies to deliver works, goods and services to citizens. It is technical, dull and universally obscure. But it is the single biggest item of spending by government – amounting to a staggering $9,500,000,000,000 each year. This concentration of money, government discretion, and secrecy makes public contracting so vulnerable to corruption. Data on prosecutions tracked by the OECD Anti-Bribery Convention shows that roughly 60% of bribes were paid to win public contracts.

Corruption in contracting deprives ordinary people of vital goods and services, and sometimes even kills: I was one of many Londoners moved by Ai Weiwei’s installation that memorialised the names of thousands of children killed in China’s Sichuan earthquake in 2008. Their supposedly earthquake-proof schools collapsed on them like tofu.

Beyond corruption, inefficiency and mismanagement of public contracts cost countries billions. Governments just don’t seem to know what they are buying, when, from whom, and whether they got a good price.

This problem can be fixed. But it will require a set of innovations best described as open contracting: using accessible open data and better engagement so that citizens, government and business can follow the money in government contracts from planning to tendering to performance and closure. The coordination required can be hard work but it is achievable: any country can make substantial progress on open contracting with some political leadership. My organisation supports an open data standard and a free global helpdesk to assist governments, civil society, and business in this transition….(More)”
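
The open data standard referred to is the Open Contracting Data Standard (OCDS), which publishes each contracting process as a series of structured releases. The record below is a heavily abridged, illustrative sketch, not a complete OCDS release:

```python
# A heavily simplified release in the spirit of the Open Contracting Data
# Standard (OCDS). Field names follow the standard's core vocabulary but
# are abridged and illustrative; consult the published schema for detail.
import json

release = {
    "ocid": "ocds-example-000-0001",  # unique id for the contracting process
    "date": "2016-05-10T09:30:00Z",
    "tag": ["tender"],                # which stage this release documents
    "buyer": {"name": "Ministry of Public Works"},
    "tender": {
        "title": "Rural road rehabilitation",
        "status": "active",
        "value": {"amount": 1200000, "currency": "USD"},
        "procurementMethod": "open",
    },
}

# Publishing every stage (planning, tender, award, contract, implementation)
# as linked releases is what lets citizens follow the money end to end.
print(json.dumps(release, indent=2))
```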

Accountable Algorithms


Paper by Joshua A. Kroll et al: “Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for an IRS audit, and grant or deny immigration visas.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems — with their potentially incorrect, unjustified or unfair results — accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the complexity of code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it permits tax cheats or terrorists to game the systems determining audits or security screening.

The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities — more subtle and flexible than total transparency — to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of algorithms, but also — in certain cases — the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterwards.

The technological tools introduced in this Article apply widely. They can be used in designing decision-making processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly in Part IV, we propose an agenda to further synergistic collaboration between computer science, law and policy to advance the design of automated decision processes for accountability….(More)”
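
One ingredient of that toolkit, cryptographic commitments, is simple enough to sketch. In the illustration below (a simplification, not the Article’s full machinery), an agency commits to its decision rule and random seed before running a lottery, then reveals both afterwards so anyone can verify that the announced procedure was the one actually applied:

```python
# Minimal sketch of a cryptographic commitment for procedural regularity.
# The agency publishes a hash of its decision rule and random seed before
# the draw, then reveals both afterwards for public verification. This is
# an illustration of one technique, not the Article's full toolkit.
import hashlib
import secrets

def commit(rule_source: str, seed: bytes) -> str:
    return hashlib.sha256(rule_source.encode() + seed).hexdigest()

# Before the draw: publish only the commitment.
rule = "winners = sorted(applicants, key=lambda a: hash((seed, a)))[:100]"
seed = secrets.token_bytes(32)
published_commitment = commit(rule, seed)

# After the draw: reveal rule and seed; anyone can recompute and compare.
assert commit(rule, seed) == published_commitment  # verification step
```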

Fairness in Machine Learning


Presentation by Delip Rao: “…The models you create have the power to get people arrested or vindicated, get loans approved or rejected, determine what interest rate should be charged for such loans, decide who is shown to you in your long list of pursuits on Tinder, what news you read, who gets called for a job phone screen, or even who gets admitted to college… the list goes on.

So what can you do about it?…
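
One concrete answer is to audit model outcomes across groups. The sketch below computes the disparate-impact ratio, a heuristic drawn from the US “four-fifths rule” in employment law; the data is invented for illustration:

```python
# Audit a model's outcomes across groups by computing the
# disparate-impact ratio. Data here is invented for illustration.
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates between groups (min/max)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = loan approved
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # protected attribute
ratio = disparate_impact(y_pred, group)
print(f"disparate impact ratio: {ratio:.2f}")  # below ~0.8 warrants scrutiny
```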

I have detailed notes for some of these slides. If you would like to follow those, try going directly to Google Slides.

Innovation and Its Enemies: Why People Resist New Technologies


Book by Calestous Juma: “The rise of artificial intelligence has rekindled a long-standing debate regarding the impact of technology on employment. This is just one of many areas where exponential advances in technology signal both hope and fear, leading to public controversy. This book shows that many debates over new technologies are framed in the context of risks to moral values, human health, and environmental safety. But it argues that behind these legitimate concerns often lie deeper, but unacknowledged, socioeconomic considerations. Technological tensions are often heightened by perceptions that the benefits of new technologies will accrue only to small sections of society while the risks will be more widely distributed. Similarly, innovations that threaten to alter cultural identities tend to generate intense social concern. As such, societies that exhibit great economic and political inequities are likely to experience heightened technological controversies.

Drawing from nearly 600 years of technology history, Innovation and Its Enemies identifies the tension between the need for innovation and the pressure to maintain continuity, social order, and stability as one of today’s biggest policy challenges. It reveals the extent to which modern technological controversies grow out of distrust in public and private institutions. Using detailed case studies of coffee, the printing press, margarine, farm mechanization, electricity, mechanical refrigeration, recorded music, transgenic crops, and transgenic animals, it shows how new technologies emerge, take root, and create new institutional ecologies that favor their establishment in the marketplace. The book uses these lessons from history to contextualize contemporary debates surrounding technologies such as artificial intelligence, online learning, 3D printing, gene editing, robotics, drones, and renewable energy. It ultimately makes the case for shifting greater responsibility to public leaders to work with scientists, engineers, and entrepreneurs to manage technological change, make associated institutional adjustments, and expand public engagement on scientific and technological matters….(More)”