Understanding the four types of AI, from reactive robots to self-aware beings


Arend Hintze at The Conversation: “…We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal move from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world….
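
To make the idea concrete, here is a minimal sketch, in Python, of a purely reactive agent: it evaluates only the present state and keeps no memory, so the same position always yields the same choice. This illustrates the concept rather than Deep Blue’s actual search; the toy game, `apply_move`, and `evaluate` are hypothetical placeholders.

```python
# A purely reactive agent: its choice is a function of the current state
# alone. Nothing is remembered, so identical states always produce
# identical moves. All names here are illustrative placeholders.

def reactive_move(state, legal_moves, apply_move, evaluate):
    """Return the legal move whose resulting state scores highest."""
    return max(legal_moves, key=lambda move: evaluate(apply_move(state, move)))

# Toy demo: the "game" state is a number, a move adds to it, and the
# heuristic simply prefers larger numbers.
best = reactive_move(
    state=10,
    legal_moves=[-2, 1, 3],
    apply_move=lambda s, m: s + m,
    evaluate=lambda s: s,
)
print(best)  # 3 -- chosen by looking only at the present state
```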

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel…
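
A minimal sketch of what such limited memory might look like, assuming a rolling buffer of recent observations: another car’s velocity is estimated from its last few sightings, and older observations simply fall out of the buffer rather than accumulating into long-term experience. The class and method names are illustrative, not any vehicle’s actual software.

```python
from collections import deque

# "Limited memory" as a transient rolling buffer: recent observations are
# kept just long enough to estimate speed and direction, then discarded.

class TrackedCar:
    def __init__(self, window=5):
        self.observations = deque(maxlen=window)  # old entries drop off automatically

    def observe(self, t, position):
        """Record a (timestamp, (x, y)) sighting of the other car."""
        self.observations.append((t, position))

    def velocity(self):
        """Estimate (vx, vy) from the oldest and newest buffered sightings."""
        if len(self.observations) < 2:
            return None
        (t0, p0), (t1, p1) = self.observations[0], self.observations[-1]
        return tuple((b - a) / (t1 - t0) for a, b in zip(p0, p1))

car = TrackedCar()
car.observe(0.0, (0.0, 0.0))
car.observe(1.0, (15.0, 0.5))
print(car.velocity())  # (15.0, 0.5) -- units depend on the sensor, e.g., m/s
```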

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it….

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences….(More)”

AI Ethics: The Future of Humanity 


Report by sparks & honey: “Through our interaction with machines, we develop emotional, human expectations of them. Alexa, for example, comes alive when we speak with it. AI is and will be a representation of its cultural context, the values and ethics we apply to one another as humans.

This machinery is eerily familiar as it mirrors us, and eventually becomes even smarter than us mere mortals. We’re programming its advantages based on how we see ourselves and the world around us, and we’re doing this at an incredible pace. This shift is pervading culture from our perceptions of beauty and aesthetics to how we interact with one another – and our AI.

Infused with technology, we’re asking: what does it mean to be human?

Our report examines:

• The evolution of our empathy from humans to animals and robots
• How we treat AI in its infancy like we do a child, allowing it space to grow
• The spectrum of our emotional comfort in a world embracing AI
• The cultural contexts fueling AI biases, such as gender stereotypes, that drive the direction of AI
• How we place an innate trust in machines, more than we do one another (Download for free)”


Power to the People: Addressing Big Data Challenges in Neuroscience by Creating a New Cadre of Citizen Neuroscientists


Jane Roskams and Zoran Popović in Neuron: “Global neuroscience projects are producing big data at an unprecedented rate that informatic and artificial intelligence (AI) analytics simply cannot handle. Online games, like Foldit, Eterna, and Eyewire—and now a new neuroscience game, Mozak—are fueling a people-powered research science (PPRS) revolution, creating a global community of “new experts” that over time synergize with computational efforts to accelerate scientific progress, empowering us to use our collective cerebral talents to drive our understanding of our brain….(More)”

Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective


Nikolaos Aletras et al. in PeerJ Computer Science: “Recent advances in Natural Language Processing and Machine Learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. This can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. This paper presents the first systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content. We formulate a binary classification task where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the European Convention on Human Rights. Textual information is represented using contiguous word sequences, i.e., N-grams, and topics. Our models can predict the court’s decisions with strong accuracy (79% on average). Our empirical analysis indicates that the formal facts of a case are the most important predictive factor. This is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. We also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis….(More)”
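
For readers who want a feel for the method, a hedged sketch of the kind of pipeline the abstract describes follows: n-gram features extracted from case text feed a linear classifier that predicts violation versus no violation. The sample texts, labels, and settings below are placeholders; the paper documents the authors’ exact features (N-grams and topics) and SVM configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training data: in the study, the input is the textual content
# extracted from each case and the label is the court's actual judgment.
case_texts = [
    "facts of case A ... detention conditions ...",
    "facts of case B ... property dispute ...",
]
violations = [1, 0]  # 1 = violation of an article found, 0 = no violation

# Contiguous word sequences (n-grams) as features, linear classifier on top.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 4)),  # unigrams through 4-grams (an assumption)
    LinearSVC(),
)
model.fit(case_texts, violations)
print(model.predict(["text of an unseen case ..."]))
```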

Artificial Intelligence can streamline public comment for federal agencies


John Davis at the Hill: “…What became immediately clear to me was that — although not impossible to overcome — the lack of consistency and shared best practices across all federal agencies in accepting and reviewing public comments was a serious impediment. The promise of Natural Language Processing and cognitive computing to make the public comment process light years faster and more transparent becomes that much more difficult without a consensus among federal agencies on what type of data is collected – and how.

“There is a whole bunch of work we have to do around getting government to be more customer friendly and making it at least as easy to file your taxes as it is to order a pizza or buy an airline ticket,” President Obama recently said in an interview with WIRED. “Whether it’s encouraging people to vote or dislodging Big Data so that people can use it more easily, or getting their forms processed online more simply — there’s a huge amount of work to drag the federal government and state governments and local governments into the 21st century.”

…expanding the discussion around Artificial Intelligence and regulatory processes to include how the technology should be leveraged to ensure fairness and responsiveness in the very basic processes of rulemaking – in particular public notices and comments. These technologies could also enable us to consider not just public comments formally submitted to an agency, but the entire universe of statements made through social media posts, blogs, chat boards — and conceivably every other electronic channel of public communication.

Obviously, an anonymous comment on the Internet should not carry the same credibility as a formally submitted, personally signed statement, just as sworn testimony in court holds far greater weight than a grapevine rumor. But so much public discussion today occurs on Facebook pages, in Tweets, on news website comment sections, etc. Anonymous speech enjoys explicit protection under the Constitution, based on a justified expectation that certain sincere statements of sentiment might result in unfair retribution from the government.

Should we simply ignore the valuable insights about actual public sentiment on specific issues made possible through the power of Artificial Intelligence, which can ascertain meaning from an otherwise unfathomable ocean of relevant public conversations? With certain qualifications, I believe Artificial Intelligence, or AI, should absolutely be employed in the critical effort to gain insights from public comments – signed or anonymous.
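
As one hedged illustration of what AI-assisted comment review could look like, the sketch below groups near-duplicate submissions (form-letter campaigns, say) with TF-IDF vectors and cosine similarity, so reviewers see each distinct argument once. The comments, threshold, and technique are illustrative assumptions, not any agency’s actual process.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "Please strengthen the proposed emissions rule.",
    "Please strengthen the proposed emissions rule!",   # near-duplicate
    "This rule would hurt small businesses in my town.",
]

vectors = TfidfVectorizer().fit_transform(comments)
similarity = cosine_similarity(vectors)

# Flag comment pairs above an (assumed) similarity threshold as duplicates.
THRESHOLD = 0.9
for i in range(len(comments)):
    for j in range(i + 1, len(comments)):
        if similarity[i, j] > THRESHOLD:
            print(f"Comments {i} and {j} look like near-duplicates")
```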

“In the criminal justice system, some of the biggest concerns with Big Data are the lack of data and the lack of quality data,” the NSTC report authors state. “AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias.” As a former federal criminal prosecutor and defense attorney, I am well familiar with the absolute necessity to weigh the relative value of various forms of evidence – or in this case, data…(More)

Crowdsourcing Gun Violence Research


Penn Engineering: “Gun violence is often described as an epidemic, but as visible and shocking as shooting incidents are, epidemiologists who study that particular source of mortality have a hard time tracking them. The Centers for Disease Control is prohibited by federal law from conducting gun violence research, so there is little in the way of centralized infrastructure to monitor where, how, when, why and to whom shootings occur.

Chris Callison-Burch, Aravind K. Joshi Term Assistant Professor in Computer and Information Science, and graduate student Ellie Pavlick are working to solve this problem.

They have developed the Gun Violence Database, which combines machine learning and crowdsourcing techniques to produce a national registry of shooting incidents. Callison-Burch and Pavlick’s algorithm scans thousands of articles from local newspaper and television stations, determines which are about gun violence, then asks everyday people to pull out vital statistics from those articles, compiling that information into a unified, open database.

For natural language processing experts like Callison-Burch and Pavlick, the most exciting prospect of this effort is that it is training computer systems to do this kind of analysis automatically. They recently presented their work on that front at Bloomberg’s Data for Good Exchange conference.

The Gun Violence Database project started in 2014, when it became the centerpiece of Callison-Burch’s “Crowdsourcing and Human Computation” class. There, Pavlick developed a series of homework assignments that challenged undergraduates to develop a classifier that could tell whether a given news article was about a shooting incident.
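
A minimal sketch of that classroom exercise might look like the following: a bag-of-words classifier that guesses whether an article reports a shooting incident. The sample headlines, labels, and model choice are made up for illustration; the course assignments and the project’s production models are the authoritative sources.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training examples; the class used articles from "The Gun Report".
articles = [
    "Two injured in overnight shooting outside downtown bar",
    "City council approves new budget for road repairs",
]
labels = [1, 0]  # 1 = about gun violence, 0 = not

classifier = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams (an assumption)
    LogisticRegression(),
)
classifier.fit(articles, labels)
print(classifier.predict(["Police investigate shooting near elementary school"]))
```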

“It allowed us to teach the things we want students to learn about data science and natural language processing, while giving them the motivation to do a project that could contribute to the greater good,” says Callison-Burch.

The articles students used to train their classifiers were sourced from “The Gun Report,” a daily blog from New York Times reporters that attempted to catalog shootings from around the country in the wake of the Sandy Hook massacre. Realizing that their algorithmic approach could be scaled up to automate what the Times’ reporters were attempting, the researchers began exploring how such a database could work. They consulted with Douglas Wiebe, an Associate Professor of Epidemiology in Biostatistics and Epidemiology in the Perelman School of Medicine, to learn more about what kind of information public health researchers needed to better study gun violence on a societal scale.

From there, the researchers enlisted people to annotate the articles their classifier found, connecting with them through Mechanical Turk, Amazon’s crowdsourcing platform, and their own website, http://gun-violence.org/…(More)”

The Promise of Artificial Intelligence: 70 Real-World Examples


Report by the Information Technology & Innovation Foundation: “Artificial intelligence (AI) is on a winning streak. In 2005, five teams successfully completed the DARPA Grand Challenge, a competition held by the Defense Advanced Research Projects Agency to spur development of autonomous vehicles. In 2011, IBM’s Watson system beat out two longtime human champions to win Jeopardy! In 2016, Google DeepMind’s AlphaGo system defeated Lee Sedol, an 18-time world-champion Go player. And thanks to Apple’s Siri, Microsoft’s Cortana, Google’s Google Assistant, and Amazon’s Alexa, consumers now have easy access to a variety of AI-powered virtual assistants to help manage their daily lives. The potential uses of AI to identify patterns, learn from experience, and find novel solutions to new challenges continue to grow as the technology advances.

Moreover, AI is already having a major positive impact in many different sectors of the global economy and society.  For example, humanitarian organizations are using intelligent chatbots to provide psychological support to Syrian refugees, and doctors are using AI to develop personalized treatments for cancer patients. Unfortunately, the benefits of AI, as well as its likely impact in the years ahead, are vastly underappreciated by policymakers and the public. Moreover, a contrary narrative—that AI raises grave concerns and warrants a precautionary regulatory approach to limit the damages it could cause—has gained prominence, even though it is both wrong and harmful to societal progress.

To showcase the overwhelmingly positive impact of AI, this report describes the major uses of AI and highlights 70 real-world examples of how AI is already generating social and economic benefits. Policymakers should consider these benefits as they evaluate the steps they can take to support the development and adoption of AI….(More)”

Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity


Book by James Hendler and Alice Mulvehill: “Will your next doctor be a human being—or a machine? Will you have a choice? If you do, what should you know before making it?

This book introduces the reader to the pitfalls and promises of artificial intelligence in its modern incarnation and the growing trend of systems to “reach off the Web” into the real world. The convergence of AI, social networking, and modern computing is creating an historic inflection point in the partnership between human beings and machines with potentially profound impacts on the future not only of computing but of our world.

AI experts and researchers James Hendler and Alice Mulvehill explore the social implications of AI systems in the context of a close examination of the technologies that make them possible. The authors critically evaluate the utopian claims and dystopian counterclaims of prognosticators. Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity is your richly illustrated field guide to the future of your machine-mediated relationships with other human beings and with increasingly intelligent machines.

What you’ll learn

• What the concept of a social machine is and how the activities of non-programmers are contributing to machine intelligence

• How modern artificial intelligence technologies, such as Watson, are evolving and how they process knowledge from both carefully produced information (such as Wikipedia or journal articles) and from big data collections

• The fundamentals of neuromorphic computing

• The fundamentals of knowledge graph search and linked data as well as the basic technology concepts that underlie networking applications such as Facebook and Twitter

• How the change in attitudes towards cooperative work on the Web, especially in the younger demographic, is critical to the future of Web applications…(More)”

Combining Satellite Imagery and Machine Learning to Predict Poverty


From the sustainability and artificial intelligence lab: “The elimination of poverty worldwide is the first of 17 UN Sustainable Development Goals for the year 2030. To track progress towards this goal, we require more frequent and more reliable data on the distribution of poverty than traditional data collection methods can provide.

In this project, we propose an approach that combines machine learning with high-resolution satellite imagery to provide new data on socioeconomic indicators of poverty and wealth. Check out the short video below for a quick overview and then read the paper for a more detailed explanation of how it all works….(More)”
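
Without reproducing the paper, a hedged sketch of its final stage might look like this: image features extracted by a convolutional network (which the paper trains using nighttime lights as an intermediate proxy for wealth) feed a regularized regression that predicts a survey-based wealth index. The features and wealth values below are random placeholders standing in for real data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
image_features = rng.normal(size=(500, 4096))  # placeholder CNN features per cluster
wealth_index = rng.normal(size=500)            # placeholder survey wealth values

# Ridge regression from image features to wealth, scored by cross-validation.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, image_features, wealth_index, scoring="r2", cv=5)
print("cross-validated R^2:", scores.mean())
```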

Artificial intelligence is hard to see


Kate Crawford and Meredith Whittaker on “Why we urgently need to measure AI’s societal impacts“: “How will artificial intelligence systems change the way we live? This is a tough question: on one hand, AI tools are producing compelling advances in complex tasks, with dramatic improvements in energy consumption, audio processing, and leukemia detection. There is extraordinary potential to do much more in the future. On the other hand, AI systems are already making problematic judgements that are producing significant social, cultural, and economic impacts in people’s everyday lives.

AI and decision-support systems are embedded in a wide array of social institutions, from influencing who is released from jail to shaping the news we see. For example, Facebook’s automated content editing system recently censored the Pulitzer Prize-winning image of a nine-year-old girl fleeing napalm bombs during the Vietnam War. The girl is naked; to an image processing algorithm, this might appear as a simple violation of the policy against child nudity. But to human eyes, Nick Ut’s photograph, “The Terror of War”, means much more: it is an iconic portrait of the indiscriminate horror of conflict, and it has an assured place in the history of photography and international politics. The removal of the image caused an international outcry before Facebook backed down and restored the image. “What they do by removing such images, no matter what good intentions, is to redact our shared history,” said the Prime Minister of Norway, Erna Solberg.

It’s easy to forget that these high-profile instances are actually the easy cases. As Tarleton Gillespie has observed, content reviews of Facebook images are occurring hundreds of thousands of times per day, and rarely is there a Pulitzer Prize to help determine lasting significance. Some of these reviews include human teams, and some do not. In this case, there is also considerable ambiguity about where the automated process ended and the human review began: which is part of the problem. And Facebook is just one player in a complex ecology of algorithmically-supplemented determinations with little external monitoring to see how decisions are made or what the effects might be.

The ‘Terror of War’ case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these ‘weak AI’ systems are making decisions that don’t garner such attention. They are embedded at the back-end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown, unseen, and with impacts that take enormous effort to detect.

Sometimes AI techniques get it right, and sometimes they get it wrong. Only rarely will those errors be seen by the public: like the Vietnam war photograph, or when an AI ‘beauty contest’ held this month was called out for being racist for selecting white women as the winners. We can dismiss this latter case as a problem of training data — they simply need a more diverse selection of faces to train their algorithm with, and now that 600,000 people have sent in their selfies, they certainly have better means to do so. But while a beauty contest might seem like a bad joke, or just a really good trick to get people to give up their photos to build a large training data set, it points to a much bigger set of problems. AI and decision-support systems are reaching into everyday life: determining who will be on a predictive policing ‘heat list’, who will be hired or promoted, which students will be recruited to universities, or seeking to predict at birth who will become a criminal by the age of 18. So the stakes are high…(More)”