Universities must prepare for a technology-enabled future


In The Conversation: “Automation and artificial intelligence technologies are transforming manufacturing, corporate work, and the retail business, providing new opportunities for companies to explore and posing major threats to those that don’t adapt to the times. Equally daunting challenges confront colleges and universities, but they’ve been slower to acknowledge them.

At present, colleges and universities are most worried about competition from schools or training systems using online learning technology. But that is just one aspect of the technological changes already under way. For example, some companies are moving toward requiring that workers have specific skills training and certifications – as opposed to college degrees.

As a professor who researches artificial intelligence and offers distance learning courses, I can say that online education is a disruptive challenge for which colleges are ill-prepared. Lack of student demand is already closing 800 out of roughly 10,000 engineering colleges in India. And online learning has put as many as half the colleges and universities in the U.S. at risk of shutting down in the next couple of decades, as remote students get comparable educations over the internet – without living on campus or taking classes in person. Unless universities move quickly to transform themselves into educational institutions for a technology-assisted future, they risk becoming obsolete….(More)”

A.I. and Big Data Could Power a New War on Poverty


Elisabeth A. Mason in The New York Times: “When it comes to artificial intelligence and jobs, the prognostications are grim. The conventional wisdom is that A.I. might soon put millions of people out of work — that it stands poised to do to clerical and white-collar workers over the next two decades what mechanization did to factory workers over the past two. And that is to say nothing of the truckers and taxi drivers who will find themselves unemployed or underemployed as self-driving cars take over our roads.

But it’s time we start thinking about A.I.’s potential benefits for society as well as its drawbacks. The big-data and A.I. revolutions could also help fight poverty and promote economic stability.

Poverty, of course, is a multifaceted phenomenon. But the condition of poverty often entails one or more of these realities: a lack of income (joblessness); a lack of preparedness (education); and a dependency on government services (welfare). A.I. can address all three.

First, even as A.I. threatens to put people out of work, it can simultaneously be used to match them to good middle-class jobs that are going unfilled. Today there are millions of such jobs in the United States. This is precisely the kind of matching problem at which A.I. excels. Likewise, A.I. can predict where the job openings of tomorrow will lie, and which skills and training will be needed for them….
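The matching the article describes can be sketched in miniature. The snippet below scores candidates against openings by skill overlap (Jaccard similarity); the job titles, skill sets, and scoring choice are illustrative assumptions, not a description of any deployed system.

```python
# Hypothetical sketch of A.I.-style job matching: rank openings by how
# well a candidate's skills overlap with each job's requirements.
# All data and the Jaccard-similarity choice are invented for illustration.

def jaccard(a, b):
    """Similarity between two skill sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_matches(candidate_skills, openings):
    """Rank job openings (best first) by fit to a candidate's skills."""
    scored = [(jaccard(candidate_skills, req), title)
              for title, req in openings.items()]
    return sorted(scored, reverse=True)

openings = {
    "wind turbine technician": {"mechanics", "electronics", "safety"},
    "medical coder": {"anatomy", "coding systems", "attention to detail"},
    "CNC machinist": {"mechanics", "blueprints", "metrology"},
}

candidate = {"mechanics", "electronics", "blueprints"}
for score, title in best_matches(candidate, openings):
    print(f"{score:.2f}  {title}")
```

Production systems would of course use richer representations (embeddings of résumés and job ads) and far larger data, but the underlying task — scoring and ranking pairings — is the same.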

Second, we can bring what is known as differentiated education — based on the idea that students master skills in different ways and at different speeds — to every student in the country. A 2013 study by the National Institutes of Health found that nearly 40 percent of medical students held a strong preference for one mode of learning: Some were listeners; others were visual learners; still others learned best by doing….

Third, a concerted effort to drag education and job training and matching into the 21st century ought to remove the reliance of a substantial portion of the population on government programs designed to assist struggling Americans. With 21st-century technology, we could plausibly reduce the use of government assistance services to levels where they serve the function for which they were originally intended…(More)”.

Democratising the future: How do we build inclusive visions of the future?


Chun-Yin San at Nesta: “In 2011, Lord Martin Rees, the British Astronomer Royal, launched a scathing critique of the UK Government’s long-term thinking capabilities. “It is depressing,” he argued, “that long-term global issues of energy, food, health and climate get trumped on the political agenda by the short term”. We are facing more and more complex, intergenerational issues like climate change, or the impact of AI, which require long-term, joined-up thinking to solve.

But even when governments do invest in foresight and strategic planning, there is a bigger question around whose vision of the future it is. These strategic plans tend to be written in opaque and complex ways by ‘experts’, with little room for scrutiny, let alone input, by members of the public….

There have been some great examples of more democratic futures exercises in the past. Key amongst them was the Hawai’i 2000 project in the 1970s, which brought together Hawaiians from different walks of life to debate the sort of place that Hawai’i should become over the next 30 years. It generated some incredibly inspiring and creative collective visions of the future of the tropical American state, and also helped embed long-term strategic thinking into policy-making instruments – at least for a time.

A more recent example took place over 2008 in the Dutch Caribbean nation of Aruba, which engaged some 50,000 people from all parts of Aruban society. The Nos Aruba 2025 project allowed the island nation to develop a more sustainable national strategic plan than ever before – one based on what Aruba and its people had to offer, responding to the potential and needs of a diverse community. Like Hawai’i 2000, what followed Nos Aruba 2025 was a fundamental change in the nature of participation in the country’s governance, with community engagement becoming a regular feature in the Aruban government’s work….

These examples demonstrate how futures work is at its best when it is participatory. …However, aside from some of the projects above, examples of genuine engagement in futures remain few and far between. Even when activities examining a community’s future take place in the public domain – such as the Museum of London’s ongoing City Now City Future series – the conversation can often seem one-sided. Expert-generated futures are presented to people with little room for them to challenge these ideas or contribute their own visions in a meaningful way. This has led some, like academics Denis Loveridge and Ozcan Saritas, to remark that futures and foresight can suffer from a serious case of ‘democratic deficit’.

There are three main reasons for this:

  1. Meaningful participation can be difficult to do, as it is expensive and time-consuming, especially when it comes to large-scale exercises meant to facilitate deep and meaningful dialogue about a community’s future.

  2. Participation is not always valued in the way it should be, and can be met with false sincerity from government sponsors. This is despite the wide-reaching social and economic benefits to building collective future visions, which we are currently exploring further in our work.

  3. Practitioners may not necessarily have the know-how or tools to do citizen engagement effectively. While there are plenty of guides to public engagement and a number of different futures toolkits, there are few openly available resources for participatory futures activities….(More)”

Big Data Challenge for Social Sciences: From Society and Opinion to Replications


Symposium Paper by Dominique Boullier: “When in 2007 Savage and Burrows pointed out ‘the coming crisis of empirical methods’, they were not expecting to be so right. Their paper however became a landmark, signifying the social sciences’ reaction to the tremendous shock triggered by digital methods. As they frankly acknowledge in a more recent paper, they did not even imagine the extent to which their prediction might become true, in an age of Big Data, where sources and models have to be revised in the light of extended computing power and radically innovative mathematical approaches. They signalled not just a debate about academic methods but also a momentum for ‘commercial sociology’ in which platforms acquire the capacity to add ‘another major nail in the coffin of academic sociology claims to jurisdiction over knowledge of the social’, because ‘research methods (are) an intrinsic feature of contemporary capitalist organisations’ (Burrows and Savage, 2014, p. 2). This need for a serious account of research methods accords with the claim of Social Studies of Science that such scrutiny should be applied to the social sciences as well.

I would like to build on these insights and principles of Burrows and Savage to propose an historical and systematic account of quantification during the last century, following in the footsteps of Alain Desrosières, and in which we see Big Data and Machine Learning as a major shift in the way social science can be performed. And since, according to Burrows and Savage (2014, p. 5), ‘the use of new data sources involves a contestation over the social itself’, I will take the risk here of identifying and defining the entities that are supposed to encapsulate the social for each kind of method: beyond the reign of ‘society’ and ‘opinion’, I will point at the emergence of the ‘replications’ that are fabricated by digital platforms but are radically different from previous entities. This is a challenge to invent not only new methods but also a new process of reflexivity for societies, made available by new stakeholders (namely, the digital platforms) which transform reflexivity into reactivity (as operational quantifiers always tend to)….(More)”.

Even Imperfect Algorithms Can Improve the Criminal Justice System


Sam Corbett-Davies, Sharad Goel and Sandra González-Bailón in The New York Times: “In courtrooms across the country, judges turn to computer algorithms when deciding whether defendants awaiting trial must pay bail or can be released without payment. The increasing use of such algorithms has prompted warnings about the dangers of artificial intelligence. But research shows that algorithms are powerful tools for combating the capricious and biased nature of human decisions.

Bail decisions have traditionally been made by judges relying on intuition and personal preference, in a hasty process that often lasts just a few minutes. In New York City, the strictest judges are more than twice as likely to demand bail as the most lenient ones.

To combat such arbitrariness, judges in some cities now receive algorithmically generated scores that rate a defendant’s risk of skipping trial or committing a violent crime if released. Judges are free to exercise discretion, but algorithms bring a measure of consistency and evenhandedness to the process.
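The scores described above are often simple, checklist-style instruments. The sketch below is a hypothetical illustration of that pattern: a handful of weighted factors summed into a score, mapped to a consistent recommendation. Real tools (such as the Public Safety Assessment) use factors and weights validated on historical data; everything here is invented for illustration.

```python
# A minimal, hypothetical checklist-style pretrial risk score.
# Factors, weights, and the threshold are invented; real instruments
# derive these from validation studies on historical outcomes.

def risk_score(defendant):
    """Sum points for risk factors; higher = higher assessed risk."""
    score = 0
    score += 2 if defendant["prior_failures_to_appear"] > 0 else 0
    score += 1 if defendant["pending_charge"] else 0
    score += 2 if defendant["prior_violent_conviction"] else 0
    return score

def recommendation(score, release_threshold=2):
    """Map a score to a consistent recommendation a judge can review."""
    return "release" if score <= release_threshold else "detain/supervise"

d = {"prior_failures_to_appear": 0, "pending_charge": True,
     "prior_violent_conviction": False}
s = risk_score(d)
print(s, recommendation(s))  # → 1 release
```

The point of such a design is the article's: two defendants with identical factors get identical scores, which is exactly the consistency that ad hoc judicial intuition lacks.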

The use of these algorithms often yields immediate and tangible benefits: Jail populations, for example, can decline without adversely affecting public safety.

In one recent experiment, agencies in Virginia were randomly selected to use an algorithm that rated both defendants’ likelihood of skipping trial and their likelihood of being arrested if released. Nearly twice as many defendants were released, and there was no increase in pretrial crime….(More)”.

From Territorial to Functional Sovereignty: The Case of Amazon


Essay by Frank Pasquale: “…Who needs city housing regulators when AirBnB can use data-driven methods to effectively regulate room-letting, then house-letting, and eventually urban planning generally? Why not let Amazon have its own jurisdiction or charter city, or establish special judicial procedures for Foxconn? Some vanguardists of functional sovereignty believe online rating systems could replace state occupational licensure—so rather than having government boards credential workers, a platform like LinkedIn could collect star ratings on them.

In this and later posts, I want to explain how this shift from territorial to functional sovereignty is creating a new digital political economy. Amazon’s rise is instructive. As Lina Khan explains, “the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it.” The “everything store” may seem like just another service in the economy—a virtual mall. But when a firm combines tens of millions of customers with a “marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house…a hardware manufacturer, and a leading host of cloud server space,” as Khan observes, it’s not just another shopping option.

Digital political economy helps us understand how platforms accumulate power. With online platforms, it’s not a simple narrative of “best service wins.” Network effects have been on the cyberlaw (and digital economics) agenda for over twenty years. Amazon’s dominance has exhibited how network effects can be self-reinforcing. The more merchants there are selling on (or to) Amazon, the better shoppers can be assured that they are searching all possible vendors. The more shoppers there are, the more vendors consider Amazon a “must-have” venue. As crowds build on either side of the platform, the middleman becomes ever more indispensable. Oh, sure, a new platform can enter the market—but until it gets access to the 480 million items Amazon sells (often at deep discounts), why should the median consumer defect to it? If I want garbage bags, do I really want to go over to Target.com to re-enter all my credit card details, create a new log-in, read the small print about shipping, and hope that this retailer can negotiate a better deal with Glad? Or do I, à la Sunstein, want a predictive shopping purveyor that intimately knows my past purchase habits, with satisfaction just a click away?
As artificial intelligence improves, the tracking of shopping into the Amazon groove will tend to become ever more rational for both buyers and sellers. Like a path through a forest trod ever clearer of debris, it becomes the natural default. To examine just one of many centripetal forces sucking money, data, and commerce into online behemoths, play out game theoretically how the possibility of online conflict redounds in Amazon’s favor. If you have a problem with a merchant online, do you want to pursue it as a one-off buyer? Or as someone whose reputation has been established over dozens or hundreds of transactions—and someone who can credibly threaten to deny Amazon hundreds or thousands of dollars of revenue each year? The same goes for merchants: The more tribute they can pay to Amazon, the more likely they are to achieve visibility in search results and attention (and perhaps even favor) when disputes come up. What Bruce Schneier said about security is increasingly true of commerce online: You want to be in the good graces of one of the neo-feudal giants who bring order to a lawless realm. Yet few hesitate to think about exactly how the digital lords might use their data advantages against those they ostensibly protect.
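The self-reinforcing dynamic sketched above can be made concrete with a toy simulation: each arriving shopper picks the platform whose utility is highest, where utility grows with the number of existing users. All names, parameters, and the linear utility function are illustrative assumptions, not a model of real markets.

```python
# Toy simulation of a two-sided network effect: even an entrant with a
# better base service can fail to attract anyone once the incumbent has
# a modest installed base. Parameters are invented for illustration.

def simulate(n_shoppers, base_utility, network_weight=0.01, head_start=100):
    """Return final user counts when utility = base + weight * users.

    Assumes base_utility contains an "Amazon" key for the incumbent.
    """
    users = {name: 0 for name in base_utility}
    users["Amazon"] = head_start  # incumbent's installed base
    for _ in range(n_shoppers):
        utility = {name: base_utility[name] + network_weight * users[name]
                   for name in users}
        users[max(utility, key=utility.get)] += 1  # shopper picks the best
    return users

# Entrant offers a better base service (1.5 vs 1.0), yet the incumbent's
# head start means every shopper still chooses the incumbent.
final = simulate(10_000, {"Amazon": 1.0, "Entrant": 1.5})
print(final)  # → {'Amazon': 10100, 'Entrant': 0}
```

Remove the head start and the better service wins every shopper instead, which is the intuition behind "best service wins" — and why that intuition fails once crowds have gathered on both sides of an incumbent platform.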

Forward-thinking legal scholars are helping us grasp these dynamics. For example, Rory van Loo has described the status of the “corporation as courthouse”—that is, when platforms like Amazon run dispute resolution schemes to settle conflicts between buyers and sellers. Van Loo describes both the efficiency gains that an Amazon settlement process might have over small claims court, and the potential pitfalls for consumers (such as opaque standards for deciding cases). I believe that, on top of such economic considerations, we may want to consider the political economic origins of e-commerce feudalism. For example, as consumer rights shrivel, it’s rational for buyers to turn to Amazon (rather than overwhelmed small claims courts) to press their case. The evisceration of class actions, the rise of arbitration, boilerplate contracts—all these make the judicial system an increasingly vestigial organ in consumer disputes. Individuals rationally turn to online giants for powers to impose order that libertarian legal doctrine stripped from the state. And in so doing, they reinforce the very dynamics that led to the state’s etiolation in the first place….(More)”.

Accountability of AI Under the Law: The Role of Explanation


Paper by Finale Doshi-Velez and Mason Kortz: “The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before—applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. How can we take advantage of what AI systems have to offer, while also holding them accountable?

In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems was recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be addressed if we desire AI systems that can provide the kinds of explanations that are currently required of humans under the law. Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard….(More)”
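The kind of explanation the paper identifies — whether and how each input factor affected the outcome — can be probed even for a black-box model, for instance by flipping one input at a time and observing whether the decision changes. The model, factor names, and weights below are invented for illustration; this is one simple perturbation technique, not the paper's proposal.

```python
# Hypothetical sketch of factor-level explanation for a black-box
# decision: flip each binary input and report whether the outcome
# changes. The stand-in model and its weights are invented.

def decision(inputs):
    """A stand-in black-box model (e.g., a toy loan decision)."""
    score = (2 * inputs["income_ok"]
             + 1 * inputs["long_credit_history"]
             - 3 * inputs["recent_default"])
    return score >= 2

def explain(model, inputs):
    """For each binary factor, did flipping it change the outcome?"""
    baseline = model(inputs)
    influences = {}
    for name in inputs:
        flipped = dict(inputs, **{name: 1 - inputs[name]})
        influences[name] = model(flipped) != baseline
    return baseline, influences

applicant = {"income_ok": 1, "long_credit_history": 0, "recent_default": 0}
outcome, factors = explain(decision, applicant)
print(outcome, factors)
```

Note the asymmetry the paper highlights: this kind of systematic probing is cheap for software but would be an odd demand to make of a human decision-maker, while a narrative justification is easy for a human and hard for a model.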

Victims of Sexual Harassment Have a New Resource: AI


MIT Technology Review (The Download): “If you have ever dealt with sexual harassment in the workplace, there is now a private online place for you to go for help. Botler AI, a startup based in Montreal, on Wednesday launched a system that provides free information and guidance to those who have been sexually harassed and are unsure of their legal rights.

Using deep learning, the AI system was trained on more than 300,000 U.S. and Canadian criminal court documents, including over 57,000 documents and complaints related to sexual harassment. Using this information, the software predicts whether the situation explained by the user qualifies as sexual harassment, and notes which laws may have been violated under the criminal code. It then generates an incident report that the user can hand over to relevant authorities….

The tool starts by asking simple questions that can guide the software, like what state you live in and when the incident occurred. Then, you explain your situation in plain language. The software then creates a report based on that account and what it has learned from the court cases on which it was trained.
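The intake flow described above — structured questions first, then a free-text account, assembled into a report — can be sketched in outline. Botler AI's actual system applies deep learning over court documents; the question list and report format below are assumptions made purely for illustration.

```python
# A minimal, hypothetical sketch of the intake-to-report flow:
# structured answers plus a plain-language account become a
# plain-text incident report. The fields are invented.

from datetime import date

QUESTIONS = ["state", "incident_date", "relationship_to_other_party"]

def build_report(answers, account):
    """Assemble a plain-text incident report from intake answers."""
    lines = [f"Incident report generated {date.today().isoformat()}"]
    lines += [f"{q.replace('_', ' ').title()}: {answers[q]}"
              for q in QUESTIONS]
    lines += ["", "Account in the user's own words:", account]
    return "\n".join(lines)

report = build_report(
    {"state": "NY", "incident_date": "2017-11-02",
     "relationship_to_other_party": "coworker"},
    "Plain-language description of the situation goes here.")
print(report)
```

The real system's distinctive step — predicting whether the account qualifies as harassment and which provisions may apply — would sit between intake and report generation, using the trained model.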

The company’s ultimate goal is to provide free legal tools to help with a multitude of issues, not just sexual harassment. In this Botler isn’t alone—a similar company called DoNotPay started as an automated way to fight parking tickets but has since expanded massively (see “This Chatbot Will Help You Sue Anyone”)….(More)”.

Blockchain: Unpacking the disruptive potential of blockchain technology for human development


IDRC white paper: “In the scramble to harness new technologies to propel innovation around the world, artificial intelligence, robotics, machine learning, and blockchain technologies are being explored and deployed in a wide variety of contexts globally.

Although blockchain is one of the most hyped of these new technologies, it is also perhaps the least understood. Blockchain is the distributed ledger — a database that is shared across multiple sites or institutions to furnish a secure and transparent record of events occurring during the provision of a service or contract — that supports cryptocurrencies (digital assets designed to work as mediums of exchange).
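The "secure and transparent record" property comes from hash-linking: each entry carries a cryptographic hash of the one before it, so altering any past entry invalidates every later link. The sketch below shows only that core idea, assuming invented records; real blockchains add consensus protocols, digital signatures, and replication across many nodes.

```python
# Minimal hash-linked ledger: tampering with any past record breaks
# verification of the chain. Records are invented for illustration;
# real systems add consensus, signatures, and distribution.

import hashlib
import json

def make_block(record, prev_hash):
    """Create a block whose hash covers its record and its predecessor."""
    block = {"record": record, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Recompute every link; return True only if no block was altered."""
    prev = "0" * 64  # genesis predecessor
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        payload = json.dumps({"record": block["record"],
                              "prev_hash": block["prev_hash"]},
                             sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for record in ["parcel 17 -> Alice", "parcel 17 -> Bob"]:
    block = make_block(record, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))                          # → True
chain[0]["record"] = "parcel 17 -> Mallory"   # tamper with history
print(verify(chain))                          # → False
```

This tamper-evidence is what makes the land-registry and identity applications mentioned below attractive: no single party can quietly rewrite the record.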

Blockchain is now underpinning applications such as land registries and identity services, but as its popularity grows, its relevance in addressing socio-economic gaps and supporting development targets like the globally recognized UN Sustainable Development Goals is critical to unpack. Moreover, for countries in the global South that want to be more than just end users or consumers, the complex infrastructure requirements and operating costs of blockchain could prove challenging. For the purposes of real development, we need to understand not only how blockchain is workable, but also who is able to harness it to foster social inclusion and promote democratic governance.

This white paper explores the potential of blockchain technology to support human development. It provides a non-technical overview, illustrates a range of applications, and offers a series of conclusions and recommendations for additional research and potential development programming….(More)”.

Stewardship in the “Age of Algorithms”


Clifford Lynch at First Monday: “This paper explores pragmatic approaches that might be employed to document the behavior of large, complex socio-technical systems (often today shorthanded as “algorithms”) that centrally involve some mixture of personalization, opaque rules, and machine learning components. Thinking rooted in traditional archival methodology — focusing on the preservation of physical and digital objects, and perhaps the accompanying preservation of their environments to permit subsequent interpretation or performance of the objects — has been a total failure for many reasons, and we must address this problem.

The approaches presented here are clearly imperfect, unproven, labor-intensive, and sensitive to the often hidden factors that the target systems use for decision-making (including personalization of results, where relevant); but they are a place to begin, and their limitations are at least outlined.

Numerous research questions must be explored before we can fully understand the strengths and limitations of what is proposed here. But it represents a way forward. This is essentially the first paper I am aware of which tries to effectively make progress on the stewardship challenges facing our society in the so-called “Age of Algorithms;” the paper concludes with some discussion of the failure to address these challenges to date, and the implications for the roles of archivists as opposed to other players in the broader enterprise of stewardship — that is, the capture of a record of the present and the transmission of this record, and the records bequeathed by the past, into the future. It may well be that we see the emergence of a new group of creators of documentation, perhaps predominantly social scientists and humanists, taking the front lines in dealing with the “Age of Algorithms,” with their materials then destined for our memory organizations to be cared for into the future…(More)”.