Bot.Me: A revolutionary partnership


PwC Consumer Intelligence Series: “The modern world has been shaped by the technological revolutions of the past, like the Industrial Revolution and the Information Revolution. The former redefined the way the world values both human and material resources; the latter redefined value in terms of resources while democratizing information. Today, as technology progresses even further, value is certain to shift again, with a focus on sentiments more intrinsic to the human experience: thinking, creativity, and problem-solving. AI, shorthand for artificial intelligence, defines technologies emerging today that can understand, learn, and then act based on that information. Forms of AI in use today include digital assistants, chatbots, and machine learning.

Today, AI works in three ways:

  • Assisted intelligence, widely available today, improves what people and organizations are already doing. A simple example, prevalent in cars today, is the GPS navigation program that offers directions to drivers and adjusts to road conditions.
  • Augmented intelligence, emerging today, enables people and organizations to do things they couldn’t otherwise do. For example, the combination of programs that organize cars in ride-sharing services enables businesses that could not otherwise exist.
  • Autonomous intelligence, being developed for the future, establishes machines that act on their own. An example of this will be self-driving vehicles, when they come into widespread use.

With a market projected to reach $70 billion by 2020, AI is poised to have a transformative effect on consumer, enterprise, and government markets around the world. While there are certainly obstacles to overcome, consumers believe that AI has the potential to assist in medical breakthroughs, democratize costly services, elevate poor customer service, and even free up an overburdened workforce. Some tech optimists believe AI could create a world where human abilities are amplified as machines help mankind process, analyze, and evaluate the abundance of data that creates today’s world, allowing humans to spend more time engaged in high-level thinking, creativity, and decision-making. Technological revolutions, like the Industrial Revolution and the Information Revolution, didn’t happen overnight. In fact, people in the midst of those revolutions often didn’t even realize they were happening, until history was recorded later.

That is where we find ourselves today, in the very beginning of what some are calling the “augmented age.” Just like humans in the past, it is up to mankind to find the best ways to leverage these machine revolutions to help the world evolve. As Isaac Asimov, the prolific science fiction writer with many works on AI, mused, “No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.” As a future with AI approaches, it’s important to understand how people think of it today, how it will amplify the world tomorrow, and what guiding principles will be required to navigate this monumental change….(More)”.

Augmented CI and Human-Driven AI: How the Intersection of Artificial Intelligence and Collective Intelligence Could Enhance Their Impact on Society


Blog by Stefaan Verhulst: “As the technology, research and policy communities continue to seek new ways to improve governance and solve public problems, two new types of assets are occupying increasing importance: data and people. Leveraging data and people’s expertise in new ways offers a path forward for smarter decisions, more innovative policymaking, and more accountability in governance. Yet, unlocking the value of these two assets not only requires increased availability and accessibility (through, for instance, open data or open innovation), it also requires innovation in methodology and technology.

The first of these innovations involves Artificial Intelligence (AI). AI offers unprecedented abilities to quickly process vast quantities of data that can provide data-driven insights to address public needs. This is the role it has played, for example, in New York City, where FireCast leverages data from across the city government to help the Fire Department identify the buildings at highest risk of fire. AI is also being applied to improve education, urban transportation, and humanitarian aid, and to combat corruption, among other sectors and challenges.

The second area is Collective Intelligence (CI). Although it receives less attention than AI, CI offers similar potential breakthroughs in changing how we govern, primarily by creating a means for tapping into the “wisdom of the crowd” and allowing groups to create better solutions than even the smartest experts working in isolation could ever hope to achieve. For example, in several countries patients’ groups are coming together to create new knowledge and health treatments based on their experiences and accumulated expertise. Similarly, scientists are engaging citizens in new ways to tap into their expertise or skills, generating citizen science – ranging from mapping our solar system to manipulating enzyme models in a game-like fashion.

Neither AI nor CI offers a panacea for all our ills; each poses certain challenges, and even risks. The effectiveness and accuracy of AI relies substantially on the quality of the underlying data as well as the human-designed algorithms used to analyse that data. Among other challenges, it is becoming increasingly clear how biases against minorities and other vulnerable populations can be built into these algorithms. For instance, some AI-driven platforms for predicting criminal recidivism significantly over-estimate the likelihood that black defendants will commit additional crimes in comparison to white counterparts. (For more examples, see our reading list on algorithmic scrutiny.)

In theory, CI avoids some of the risks of bias and exclusion because it is specifically designed to bring more voices into a conversation. But ensuring that this multiplicity of voices adds value, not just noise, can be an operational and ethical challenge. As it stands, identifying the signal in the noise in CI initiatives can be time-consuming and resource-intensive, especially for smaller organizations or groups lacking resources or technical skills.

Despite these challenges, however, there exists a significant degree of optimism surrounding both of these new approaches to problem solving. Some of this is hype, but some of it is merited—CI and AI do offer very real potential, and the task facing policymakers, practitioners, and researchers alike is to find ways of harnessing that potential in a way that maximizes benefits while limiting possible harms.

In what follows, I argue that the solution to the challenge described above may involve a greater interaction between AI and CI. These two areas of innovation have largely evolved and been researched separately until now. However, I believe that there is substantial scope for integration, and mutual reinforcement. It is when harnessed together, as complementary methods and approaches, that AI and CI can bring the full weight of technological progress and modern data analytics to bear on our most complex, pressing problems.

To deconstruct that statement, I propose three premises (and a subsequent set of research questions) toward establishing a necessary research agenda on the intersection of AI and CI that can build more inclusive and effective approaches to governance innovation.

Premise I: Toward Augmented Collective Intelligence: AI will enable CI to scale

Premise II: Toward Human-Driven Artificial Intelligence: CI will humanize AI

Premise III: Open Governance will drive a blurring between AI and CI

…(More)”.

Democracy Needs a Reboot for the Age of Artificial Intelligence


Katharine Dempsey at The Nation: “…A healthy modern democracy requires ordinary citizens to participate in public discussions about rapidly advancing technologies. We desperately need new policies, regulations, and safety nets for those displaced by machines. With computing power accelerating exponentially, the scale of AI’s significance is still not being fully internalized. The 2017 McKinsey Global Institute report “A Future that Works” predicts that AI and advanced robotics could automate roughly half of all work globally by 2055, but, McKinsey notes, “this could happen up to 20 years earlier or later depending on the various factors, in addition to other wider economic conditions.”

Granted, the media are producing more articles focused on artificial intelligence, but too often these pieces veer into hysterics. Wired magazine labeled this year’s coverage “The Great Tech Panic of 2017.” We need less fear-mongering and more rational conversation. Dystopian narratives, while entertaining, can also be disorienting. Skynet from the Terminator movies is not imminent. But that doesn’t mean there aren’t hazards ahead….

Increasingly, to thoughtfully discuss ethics, politics, or business, the general population needs to pay attention to AI. In 1989, Ursula Franklin, the distinguished German-Canadian experimental physicist, delivered a series of lectures titled “The Real World of Technology.” Franklin opened her lectures with an important observation: “The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcement of limits to power.”

For Franklin, technology is not a neutral set of tools; it can’t be divorced from society or values. Franklin further warned that “prescriptive technologies”—ones that isolate tasks, such as factory-style work—find their way into our social infrastructures and create modes of compliance and orthodoxy. These technologies facilitate top-down control….(More)”.

The Human Strategy


A Video Conversation with Sandy Pentland at Edge: “People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren’t?

That begins to sound like a society or a company. We all live in a human social network. We’re reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what’s the right way to do that? Is it a safe idea? Is it completely crazy?…(More)”.

Growing the artificial intelligence industry in the UK


Summary from an independent review, carried out by Professor Dame Wendy Hall and Jérôme Pesenti: “Increased use of Artificial Intelligence (AI) can bring major social and economic benefits to the UK. With AI, computers can analyse and learn from information at higher accuracy and speed than humans can. AI offers massive gains in efficiency and performance to most or all industry sectors, from drug discovery to logistics. AI is software that can be integrated into existing processes, improving them, scaling them, and reducing their costs, by making or suggesting more accurate decisions through better use of information.

It has been estimated that AI could add an additional US$814 billion (£630 billion) to the UK economy by 2035, increasing the annual growth rate of GVA from 2.5% to 3.9%.

Our vision is for the UK to become the best place in the world for businesses developing and deploying AI to start, grow and thrive, to realise all the benefits the technology offers….

Key factors have combined to increase the capability of AI in recent years, in particular:

  • New and larger volumes of data
  • Supply of experts with specific high-level skills
  • Availability of increasingly powerful computing capacity. The barriers to achieving performance have fallen significantly, and continue to fall.

To continue developing and applying AI, the UK will need to increase ease of access to data in a wider range of sectors. This Review recommends:

  • Development of data trusts, to improve trust and ease around sharing data
  • Making more research data machine readable
  • Supporting text and data mining as a standard and essential tool for research.

Skilled experts are needed to develop AI, and they are in short supply. To develop more AI, the UK will need a larger workforce with deep AI expertise, and more development of lower-level skills to work with AI. …

Increasing uptake of AI means increasing demand as well as supply through a better understanding of what AI can do and where it could be applied. This review recommends:

  • An AI Council to promote growth and coordination in the sector
  • Guidance on how to explain decisions and processes enabled by AI
  • Support for export and inward investment
  • Guidance on successfully applying AI to drive improvements in industry
  • A programme to support public sector use of AI
  • Funded challenges around data held by public organisations.

Our work has indicated that action in these areas could deliver a step-change improvement in growth of UK AI. This report makes the 18 recommendations listed in full below, which describe how Government, industry and academia should work together to keep the UK among the world leaders in AI…(More)”

Linux Foundation Debuts Community Data License Agreement


Press Release: “The Linux Foundation, the nonprofit advancing professional open source management for mass collaboration, today announced the Community Data License Agreement (CDLA) family of open data agreements. In an era of expansive and often underused data, the CDLA licenses are an effort to define a licensing framework to support collaborative communities built around curating and sharing “open” data.

Inspired by the collaborative software development models of open source software, the CDLA licenses are designed to enable individuals and organizations of all types to share data as easily as they currently share open source software code. Soundly drafted licensing models can help people form communities to assemble, curate and maintain vast amounts of data, measured in petabytes and exabytes, to bring new value to communities of all types, to build new business opportunities and to power new applications that promise to enhance safety and services.

The growth of big data analytics, machine learning and artificial intelligence (AI) technologies has allowed people to extract unprecedented levels of insight from data. Now the challenge is to assemble the critical mass of data for those tools to analyze. The CDLA licenses are designed to help governments, academic institutions, businesses and other organizations open up and share data, with the goal of creating communities that curate and share data openly.

For instance, if automakers, suppliers and civil infrastructure services can share data, they may be able to improve safety, decrease energy consumption and improve predictive maintenance. Self-driving cars are heavily dependent on AI systems for navigation, and need massive volumes of data to function properly. Once on the road, they can generate nearly a gigabyte of data every second. For the average car, that means two petabytes of sensor, audio, video and other data each year.

Similarly, climate modeling can integrate measurements captured by government agencies with simulation data from other organizations and then use machine learning systems to look for patterns in the information. It’s estimated that a single model can yield a petabyte of data, a volume that challenges standard computer algorithms, but is useful for machine learning systems. This knowledge may help improve agriculture or aid in studying extreme weather patterns.

And if government agencies share aggregated data on building permits, school enrollment figures, sewer and water usage, their citizens benefit from the ability of commercial entities to anticipate their future needs and respond with infrastructure and facilities that arrive in anticipation of citizens’ demands.

“An open data license is essential for the frictionless sharing of the data that powers both critical technologies and societal benefits,” said Jim Zemlin, Executive Director of The Linux Foundation. “The success of open source software provides a powerful example of what can be accomplished when people come together around a resource and advance it for the common good. The CDLA licenses are a key step in that direction and will encourage the continued growth of applications and infrastructure.”…(More)”.

Reboot for the AI revolution


Yuval Noah Harari in Nature: “The ongoing artificial-intelligence revolution will change almost every line of work, creating enormous social and economic opportunities — and challenges. Some believe that intelligent computers will push humans out of the job market and create a new ‘useless class’; others maintain that automation will generate a wide range of new human jobs and greater prosperity for all. Almost everybody agrees that we should take action to prevent the worst-case scenarios….

Governments might decide to deliberately slow down the pace of automation, to lessen the resulting shocks and allow time for readjustments. But it will probably be both impossible and undesirable to prevent automation and job loss completely. That would mean giving up the immense positive potential of AI and robotics. If self-driving vehicles drive more safely and cheaply than humans, it would be counterproductive to ban them just to protect the jobs of taxi and lorry drivers.

A more sensible strategy is to create new jobs. In particular, as routine jobs are automated, opportunities for new non-routine jobs will mushroom. For example, general physicians who focus on diagnosing known diseases and administering familiar treatments will probably be replaced by AI doctors. Precisely because of that, there will be more money to pay human experts to do groundbreaking medical research, develop new medications and pioneer innovative surgical techniques.

This calls for economic entrepreneurship and legal dexterity. Above all, it necessitates a revolution in education…Creating new jobs might prove easier than retraining people to fill them. A huge useless class might appear, owing to both an absolute lack of jobs and a lack of relevant education and mental flexibility….

With insights gleaned from early warning signs and test cases, scholars should strive to develop new socio-economic models. The old ones no longer hold. For example, twentieth-century socialism assumed that the working class was crucial to the economy, and socialist thinkers tried to teach the proletariat how to translate its immense economic power into political clout. In the twenty-first century, if the masses lose their economic value they might have to struggle against irrelevance rather than exploitation….The challenges posed in the twenty-first century by the merger of infotech and biotech are arguably bigger than those thrown up by steam engines, railways, electricity and fossil fuels. Given the immense destructive power of our modern civilization, we cannot afford more failed models, world wars and bloody revolutions. We have to do better this time….(More)”

The Supreme Court Is Allergic To Math


From FiveThirtyEight: “The Supreme Court does not compute. Or at least some of its members would rather not. The justices, the most powerful jurists in the land, seem to have a reluctance — even an allergy — to taking math and statistics seriously.

For decades, the court has struggled with quantitative evidence of all kinds in a wide variety of cases. Sometimes justices ignore this evidence. Sometimes they misinterpret it. And sometimes they cast it aside in order to hold on to more traditional legal arguments. (And, yes, sometimes they also listen to the numbers.) Yet the world itself is becoming more computationally driven, and some of those computations will need to be adjudicated before long. Some major artificial intelligence case will likely come across the court’s desk in the next decade, for example. By voicing an unwillingness to engage with data-driven empiricism, justices — and thus the court — are at risk of making decisions without fully grappling with the evidence.

This problem was on full display earlier this month, when the Supreme Court heard arguments in Gill v. Whitford, a case that will determine the future of partisan gerrymandering — and the contours of American democracy along with it. As my colleague Galen Druke has reported, the case hinges on math: Is there a way to measure a map’s partisan bias and to create a standard for when a gerrymandered map infringes on voters’ rights?…(More)”.
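The measure at the heart of Gill v. Whitford is the “efficiency gap,” which compares the two parties’ “wasted” votes: all votes cast for a district’s loser, plus the winner’s votes beyond the bare majority needed to win. A minimal sketch of that computation follows; the function name and the four-district vote counts are invented for illustration, not taken from the case record.

```python
def efficiency_gap(districts):
    """Efficiency gap for a two-party election.

    `districts` is a list of (party_a_votes, party_b_votes) tuples.
    A vote is "wasted" if cast for the district's loser, or cast for
    the winner beyond the bare majority needed to win.
    """
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        district_total = a + b
        total += district_total
        majority = district_total / 2  # bare majority needed to win
        if a > b:
            wasted_a += a - majority   # party A's surplus winning votes
            wasted_b += b              # all of party B's losing votes
        else:
            wasted_b += b - majority
            wasted_a += a
    # Sign is a convention: positive means party A wastes more votes,
    # i.e. the map disadvantages party A.
    return (wasted_a - wasted_b) / total

# Hypothetical four-district map: party A wins one lopsided district
# while party B narrowly wins the other three.
gap = efficiency_gap([(90, 10), (45, 55), (45, 55), (45, 55)])
print(gap)  # → 0.375
```

In this toy map the parties split the statewide vote 225 to 175, yet party A wastes far more votes and wins only one of four seats, which is exactly the asymmetry the metric is meant to surface.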

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence


Dom Galeon in Futurism: “As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car were being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios….

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the double-effect. However, having to apply the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, Google parent company Alphabet’s AI DeepMind now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions….(More)”.
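The aggregation step the excerpt describes can be illustrated with a deliberately simple majority-vote sketch. The dilemma names and responses below are invented, and the published system goes further than this, fitting a preference model that generalizes to unseen scenarios; this sketch only tallies dilemmas the crowd has already answered.

```python
from collections import Counter

def majority_preference(responses):
    """Aggregate crowd responses into a predicted choice per dilemma.

    `responses` maps a dilemma ID to a list of choices ("A" or "B"),
    one per respondent. Returns the majority choice for each dilemma.
    """
    return {
        dilemma: Counter(choices).most_common(1)[0][0]
        for dilemma, choices in responses.items()
    }

# Hypothetical responses to two Moral Machine-style dilemmas.
crowd = {
    "spare_two_children_vs_three_adults": ["A", "A", "B", "A"],
    "spare_pregnant_woman_vs_elderly_man": ["B", "A", "B"],
}
print(majority_preference(crowd))
# → {'spare_two_children_vs_three_adults': 'A',
#    'spare_pregnant_woman_vs_elderly_man': 'B'}
```

Even this toy version makes the open questions concrete: a bare majority decides every dilemma, minority preferences vanish entirely, and nothing handles scenarios the crowd was never asked about, which is precisely where the debate over frameworks and guidelines picks up.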