Reboot for the AI revolution


Yuval Noah Harari in Nature: “The ongoing artificial-intelligence revolution will change almost every line of work, creating enormous social and economic opportunities — and challenges. Some believe that intelligent computers will push humans out of the job market and create a new ‘useless class’; others maintain that automation will generate a wide range of new human jobs and greater prosperity for all. Almost everybody agrees that we should take action to prevent the worst-case scenarios….

Governments might decide to deliberately slow down the pace of automation, to lessen the resulting shocks and allow time for readjustments. But it will probably be both impossible and undesirable to prevent automation and job loss completely. That would mean giving up the immense positive potential of AI and robotics. If self-driving vehicles drive more safely and cheaply than humans, it would be counterproductive to ban them just to protect the jobs of taxi and lorry drivers.

A more sensible strategy is to create new jobs. In particular, as routine jobs are automated, opportunities for new non-routine jobs will mushroom. For example, general physicians who focus on diagnosing known diseases and administering familiar treatments will probably be replaced by AI doctors. Precisely because of that, there will be more money to pay human experts to do groundbreaking medical research, develop new medications and pioneer innovative surgical techniques.

This calls for economic entrepreneurship and legal dexterity. Above all, it necessitates a revolution in education…Creating new jobs might prove easier than retraining people to fill them. A huge useless class might appear, owing to both an absolute lack of jobs and a lack of relevant education and mental flexibility….

With insights gleaned from early warning signs and test cases, scholars should strive to develop new socio-economic models. The old ones no longer hold. For example, twentieth-century socialism assumed that the working class was crucial to the economy, and socialist thinkers tried to teach the proletariat how to translate its immense economic power into political clout. In the twenty-first century, if the masses lose their economic value they might have to struggle against irrelevance rather than exploitation….The challenges posed in the twenty-first century by the merger of infotech and biotech are arguably bigger than those thrown up by steam engines, railways, electricity and fossil fuels. Given the immense destructive power of our modern civilization, we cannot afford more failed models, world wars and bloody revolutions. We have to do better this time….(More)”

The Supreme Court Is Allergic To Math


At FiveThirtyEight: “The Supreme Court does not compute. Or at least some of its members would rather not. The justices, the most powerful jurists in the land, seem to have a reluctance — even an allergy — to taking math and statistics seriously.

For decades, the court has struggled with quantitative evidence of all kinds in a wide variety of cases. Sometimes justices ignore this evidence. Sometimes they misinterpret it. And sometimes they cast it aside in order to hold on to more traditional legal arguments. (And, yes, sometimes they also listen to the numbers.) Yet the world itself is becoming more computationally driven, and some of those computations will need to be adjudicated before long. Some major artificial intelligence case will likely come across the court’s desk in the next decade, for example. By voicing an unwillingness to engage with data-driven empiricism, justices — and thus the court — are at risk of making decisions without fully grappling with the evidence.

This problem was on full display earlier this month, when the Supreme Court heard arguments in Gill v. Whitford, a case that will determine the future of partisan gerrymandering — and the contours of American democracy along with it. As my colleague Galen Druke has reported, the case hinges on math: Is there a way to measure a map’s partisan bias and to create a standard for when a gerrymandered map infringes on voters’ rights?…(More)”.
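The quantitative standard at the heart of Gill v. Whitford is the “efficiency gap”, which compares the two parties’ wasted votes: every vote cast for a losing candidate, plus every winning-side vote beyond the majority threshold. As a rough sketch of the arithmetic (simplified two-party version, not the litigants’ exact methodology):

```python
def wasted_votes(votes_a, votes_b):
    """Wasted votes in one district: all losing-side votes, plus
    winning-side votes beyond the 50% needed to carry the district."""
    threshold = (votes_a + votes_b) / 2
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    return votes_a, votes_b - threshold

def efficiency_gap(districts):
    """Net wasted votes (party A minus party B) as a share of all
    votes cast. Positive values mean A wastes more, i.e. the map
    disadvantages party A."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        wa, wb = wasted_votes(a, b)
        wasted_a += wa
        wasted_b += wb
        total += a + b
    return (wasted_a - wasted_b) / total
```

On this convention, a map that packs and cracks one party shows up as a large gap; commentary around the case discussed flagging maps whose gap exceeds a threshold on the order of several percentage points, though where to draw that line is exactly what the court must decide.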

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence


Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car was being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….
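The study’s actual method aggregates individual preference models into a collective choice; as a much-simplified illustration of the general idea — predicting the majority human choice for an unseen dilemma from labelled ones — here is a toy nearest-neighbour sketch. The feature encoding, scenarios and labels below are invented for illustration; nothing here comes from the Moral Machine dataset itself.

```python
from collections import Counter

# Each dilemma is encoded as a feature vector, e.g.
# (adults_at_risk, children_at_risk, passengers_at_risk);
# the label is the choice most respondents made.
# All values here are made up for illustration.
TRAINING = [
    ((3, 0, 1), "stay"),
    ((1, 1, 1), "swerve"),
    ((2, 2, 0), "swerve"),
    ((4, 0, 2), "stay"),
    ((1, 2, 1), "swerve"),
]

def predict(scenario, k=3):
    """Predict the majority human choice for an unseen scenario
    via k-nearest neighbours over the encoded training dilemmas."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(TRAINING, key=lambda item: dist(item[0], scenario))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

The point of the toy is only that a model trained on crowdsourced judgments can extrapolate to scenarios nobody was asked about — which is precisely what makes the approach both powerful and ethically contested.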

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the double-effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google parent Alphabet, now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.

How Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem


Paper by Amanda Levendowski: “As the use of artificial intelligence (AI) continues to spread, we have seen an increase in examples of AI systems reflecting or exacerbating societal bias, from racist facial recognition to sexist natural language processing. These biases threaten to overshadow AI’s technological gains and potential benefits. While legal and computer science scholars have analyzed many sources of bias, including the unexamined assumptions of its often-homogenous creators, flawed algorithms, and incomplete datasets, the role of the law itself has been largely ignored. Yet just as code and culture play significant roles in how AI agents learn about and act in the world, so too do the laws that govern them. This Article is the first to examine perhaps the most powerful law impacting AI bias: copyright.

Artificial intelligence often learns to “think” by reading, viewing, and listening to copies of human works. This Article first explores the problem of bias through the lens of copyright doctrine, looking at how the law’s exclusion of access to certain copyrighted source materials may create or promote biased AI systems. Copyright law limits bias mitigation techniques, such as testing AI through reverse engineering, algorithmic accountability processes, and competing to convert customers. The rules of copyright law also privilege access to certain works over others, encouraging AI creators to use easily available, legally low-risk sources of data for teaching AI, even when those data are demonstrably biased. Second, it examines how a different part of copyright law — the fair use doctrine — has traditionally been used to address similar concerns in other technological fields, and asks whether it is equally capable of addressing them in the field of AI bias. The Article ultimately concludes that it is, in large part because the normative values embedded within traditional fair use ultimately align with the goals of mitigating AI bias and, quite literally, creating fairer AI systems….(More)”.

How We Can Stop Earthquakes From Killing People Before They Even Hit


Justin Worland in Time Magazine: “…Out of that realization came a plan to reshape disaster management using big data. Just a few months later, Wani worked with two fellow Stanford students to create a platform to predict the toll of natural disasters. The concept is simple but also revolutionary. The One Concern software pulls geological and structural data from a variety of public and private sources and uses machine learning to predict the impact of an earthquake down to individual city blocks and buildings. Real-time information input during an earthquake improves how the system responds. And earthquakes represent just the start for the company, which plans to launch a similar program for floods and eventually other natural disasters….

Previous software might identify a general area where responders could expect damage, but it would appear as a “big red blob” that wasn’t helpful when deciding exactly where to send resources, Dayton says. The technology also integrates information from many sources and makes it easy to parse in an emergency situation when every moment matters. The instant damage evaluations mean fast and actionable information, so first responders can prioritize search and rescue in areas most likely to be worst-hit, rather than responding to 911 calls in the order they are received.

One Concern is not the only company that sees an opportunity to use data to rethink disaster response. The mapping company Esri has built rapid-response software that shows expected damage from disasters like earthquakes, wildfires and hurricanes. And the U.S. government has invested in programs to use data to shape disaster response at agencies like the National Oceanic and Atmospheric Administration (NOAA)….(More)”.

Let’s create a nation of social scientists


Geoff Mulgan in Times Higher Education: “How might social science become more influential, more relevant and more useful in the years to come?

Recent debates about impact have largely assumed a model of social science in which a cadre of specialists, based in universities, analyse and interpret the world and then feed conclusions into an essentially passive society. But a very different view sees specialists in the academy working much more in partnership with a society that is itself skilled in social science, able to generate hypotheses, gather data, experiment and draw conclusions that might help to answer the big questions of our time, from the sources of inequality to social trust, identity to violence.

There are some powerful trends to suggest that this second view is gaining traction. The first of these is the extraordinary explosion of new ways to observe social phenomena. Every day each of us leaves behind a data trail of who we talk to, what we eat and where we go. It’s easier than ever to survey people, to spot patterns, to scrape the web or to pick up data from sensors. It’s easier than ever to gather perceptions and emotions as well as material facts and easier than ever for organisations to practise social science – whether investment organisations analysing market patterns, human resources departments using behavioural science, or local authorities using ethnography.

That deluge of data is a big enough shift on its own. However, it is also now being used to feed interpretive and predictive tools that use artificial intelligence to predict who is most likely to go to hospital or end up in prison, and which relationships are most likely to end in divorce.

Governments are developing their own predictive tools, and have also become much more interested in systematic experimentation, with Finland and Canada in the lead, moving us closer to Karl Popper’s vision of “methods of trial and error, of inventing hypotheses which can be practically tested…”…

The second revolution is less visible but could be no less profound. This is the hunger of many people to be creators of knowledge, not just users; to be part of a truly collective intelligence. At the moment this shift towards mass engagement in knowledge is most visible in neighbouring fields. Digital humanities mobilise many volunteers to input data and interpret texts – for example making ancient Arabic texts machine-readable. Even more striking is the growth of citizen science – eBird had 1.5 million reports last January; some 1.5 million people in the US monitor river streams and lakes, and SETI@home has 5 million volunteers. Thousands of patients also take part in funding and shaping research on their own conditions….

We’re all familiar with the old idea that it’s better to teach a man to fish than just to give him fish. In essence these trends ask us a simple question: why not apply the same logic to social science, and why not reorient social sciences to enhance the capacity of society itself to observe, analyse and interpret?…(More)”.

UN Opens New Office to Monitor AI Development and Predict Possible Threats


Interesting Engineering: “The United Nations has created a new office in the Netherlands dedicated to the monitoring and research of Artificial Intelligence (AI) technologies. The new office will collect information about the way in which AI is impacting the world. Researchers will have a particular focus on the way AI relates to global security but will also monitor the effects of job loss from AI and automation.

Irakli Beridze, a UN senior strategic adviser, will head the office. He described it, saying, “A number of UN organisations operate projects involving robots and AI, such as the group of experts studying the role of autonomous military robots in the realm of conventional weapons. These are temporary measures. Ours is the first permanent UN office on this subject. We are looking at the risks as well as the advantages.”… He suggests that the speed of AI technology development is of primary concern. He explains, “This can make for instability if society does not adapt quickly enough. One of our most important tasks is to set up a network of experts from business, knowledge institutes, civil society organisations and governments. We certainly do not want to plead for a ban or a brake on technologies. We will also explore how new technology can contribute to the sustainable development goals of the UN. For this, we want to start concrete projects. We will not be a talking club.”…(More)”.

Are robots taking our jobs?


Hasan Bakhshi et al at Nesta: “In recent years, there has been an explosion of research into the impacts of automation on work. This makes sense: artificial intelligence and robotics are encroaching on areas of human activity that were simply unimaginable a few years ago.

We ourselves have made contributions to this debate (here, here and here). In The Future of Skills, however, we argue that public dialogues that consider automation alone are dangerous and misleading.

They are dangerous, because popular narratives matter for economic outcomes, and a narrative of relentless technological displacement of labour markets risks chilling innovation and growth, at a time when productivity growth is flagging in developed countries.

They are misleading because there are opportunities for boosting growth – if our education and training systems are agile enough to respond appropriately. However, while there is a burgeoning field of research on the automatability of occupations, there is far less that focuses on skills, and even less that generates actionable insights for stakeholders in areas like job redesign and learning priorities.

There is also a need to recognise that parallel to automation is a set of broader technological, demographic, economic and environmental trends which will have profound implications for employment. In some cases, the trends will reinforce one another; in others, they will produce second-order effects which may be missed when viewed in isolation…..

Skills investment must be at the centre of any long-term strategy for adjusting to structural change. A precondition is access to good quality, transparent analysis of future skills needs, as without it, labour market participants and policymakers risk flying blind. The approach we’ve developed is a step towards improving our understanding of this vital agenda and one that invites a more pro-active reaction than the defensive one that has characterised public discussions on automation in recent years. We’d love to hear your comments….(More).”

Using big data to predict suicide risk among Canadian youth


SAS Insights: “Suicide is the second leading cause of death among youth in Canada, according to Statistics Canada, accounting for one-fifth of deaths of people under the age of 25 in 2011. The Canadian Mental Health Association states that among 15- to 24-year-olds the figure is even more frightening: 24 percent, the third highest in the industrialized world. Yet despite these disturbing statistics, the signals that an individual plans on self-injury or suicide are hard to isolate….

Team members …collected 2.3 million tweets and used text mining software to identify 1.1 million of them as likely to have been authored by 13- to 17-year-olds in Canada by building a machine learning model to predict age, based on the open source PAN author profiling dataset. Their analysis made use of natural language processing, predictive modelling, text mining, and data visualization….

However, there were challenges. Ages are not revealed on Twitter, so the team had to figure out how to tease out the data for 13- to 17-year-olds in Canada. “We had a text data set, and we created a model to identify if people were in that age group based on how they talked in their tweets,” Soehl said. “From there, we picked some specific buzzwords and created topics around them, and our software mined those tweets to collect the people.”
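Soehl’s description — inferring an author’s age band from how they write — is a standard author-profiling text classifier. As a toy sketch of the idea (the tweets, tokens and labels below are invented; the real model was trained on the PAN author profiling corpus with far richer features):

```python
import math
from collections import Counter

# Invented training tweets labelled by age group, for illustration only.
TRAIN = [
    ("omg math homework is so boring lol", "13-17"),
    ("cant wait for prom this weekend", "13-17"),
    ("my mortgage rate went up again", "adult"),
    ("long day at the office commute was awful", "adult"),
]

def train_naive_bayes(examples):
    """Collect per-class unigram counts for a naive Bayes classifier."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in examples:
        priors[label] += 1
        for word in text.split():
            counts.setdefault(label, Counter())[word] += 1
            totals[label] += 1
    return counts, totals, priors

def classify(text, model):
    """Pick the class with the highest log prior plus
    add-one-smoothed unigram log likelihoods."""
    counts, totals, priors = model
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label in priors:
        score = math.log(priors[label] / sum(priors.values()))
        for word in text.split():
            score += math.log((counts[label][word] + 1) /
                              (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best
```

A real system would also need language identification, geolocation filtering and careful validation, but the core step — scoring a tweet against word distributions learned from age-labelled text — looks much like this.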

Another issue was the restrictions Twitter places on pulling data, though Soehl believes that once this analysis becomes an established solution, Twitter may work with researchers to expedite the process. “Now that we’ve shown it’s possible, there are a lot of places we can go with it,” said Soehl. “Once you know your path and figure out what’s going to be valuable, things come together quickly.”

The team looked at the percentage of people in the group who were talking about depression or suicide, and what they were talking about. Horne said that when SAS’ work went in front of a Canadian audience working in health care, they said that it definitely filled a gap in their data — and that was the validation he’d been looking for. The team also won $10,000 for creating the best answer to this question (the team donated the award money to two mental health charities: Mind Your Mind and Rise Asset Development).

What’s next?

That doesn’t mean the work is done, said Jos Polfliet. “We’re just scraping the surface of what can be done with the information.” Another way to use the results is to look at patterns and trends….(More)”

Advancing Urban Health and Wellbeing Through Collective and Artificial Intelligence: A Systems Approach 3.0


Policy brief by Franz Gatzweiler: “Many problems of urban health and wellbeing, such as pollution, obesity, ageing, mental health, cardiovascular diseases, infectious diseases, inequality and poverty (WHO 2016), are highly complex and beyond the reach of individual problem solving capabilities. Biodiversity loss, climate change, and urban health problems emerge at aggregate scales and are unpredictable. They are the consequence of complex interactions between many individual agents and their environments across urban sectors and scales. Another challenge of complex urban health problems is the knowledge approach we apply to understand and solve them. We are challenged to create a new, innovative knowledge approach to understand and solve the problems of urban health. The positivist approach of separating cause from effect, or observer from observed, is insufficient when human agents are both part of the problem and the solution.

Problems emerging from complexity can only be solved collectively by applying rules which govern complexity. For example, the law of requisite variety (Ashby 1960) tells us that we need as much variety in our problem-solving toolbox as there are different types of problems to be solved, and we need to address these problems at the respective scale. No individual has the intelligence to solve emergent problems of urban health alone….

  • Complex problems of urban health and wellbeing cause millions of premature deaths annually and are beyond the reach of individual problem-solving capabilities.
  • Collective and artificial intelligence (CI+AI) working together can address the complex challenges of urban health.
  • The systems approach (SA) is an adaptive, intelligent and intelligence-creating, “data-metabolic” mechanism for solving such complex challenges.
  • Design principles have been identified to successfully create CI and AI. Data metabolic costs are the limiting factor.
  • A call for collaborative action to build an “urban brain” by means of next generation systems approaches is required to save lives in the face of failure to tackle complex urban health challenges….(More)”.