An AI That Reads Privacy Policies So That You Don’t Have To


Andy Greenberg at Wired: “…Today, researchers at Switzerland’s Federal Institute of Technology at Lausanne (EPFL), the University of Wisconsin and the University of Michigan announced the release of Polisis—short for “privacy policy analysis”—a new website and browser extension that uses their machine-learning-trained app to automatically read and make sense of any online service’s privacy policy, so you don’t have to.

In about 30 seconds, Polisis can read a privacy policy it’s never seen before and extract a readable summary, displayed in a graphic flow chart, of what kind of data a service collects, where that data could be sent, and whether a user can opt out of that collection or sharing. Polisis’ creators have also built a chat interface they call Pribot that’s designed to answer questions about any privacy policy, intended as a sort of privacy-focused paralegal advisor. Together, the researchers hope those tools can unlock the secrets of how tech firms use your data that have long been hidden in plain sight….

Polisis isn’t actually the first attempt to use machine learning to pull human-readable information out of privacy policies. Both Carnegie Mellon University and Columbia have made their own attempts at similar projects in recent years, points out NYU Law Professor Florencia Marotta-Wurgler, who has focused her own research on user interactions with terms of service contracts online. (One of her own studies showed that only 0.07 percent of users actually click on a terms of service link before clicking “agree.”) The Usable Privacy Policy Project, a collaboration that includes both Columbia and CMU, released its own automated tool to annotate privacy policies just last month. But Marotta-Wurgler notes that Polisis’ visual and chat-bot interfaces haven’t been tried before, and says the latest project is also more detailed in how it defines different kinds of data. “The granularity is really nice,” Marotta-Wurgler says. “It’s a way of communicating this information that’s more interactive.”…(More)”.

Artificial Intelligence and Foreign Policy


Paper by Ben Scott, Stefan Heumann and Philippe Lorenz: “The plot-lines of the development of Artificial Intelligence (AI) are debated and contested. But it is safe to predict that it will become one of the central technologies of the 21st century. It is fashionable these days to speak about data as the new oil. But if we want to “refine” the vast quantities of data we are collecting today and make sense of them, we will need potent AI. The consequences of the AI revolution could not be more far-reaching. Value chains will be turned upside down, labor markets will be disrupted, and economic power will shift to those who control this new technology. And as AI is deeply embedded in the connectivity of the Internet, the challenge of AI is global in nature. It is therefore striking that AI is almost absent from the foreign policy agenda.

This paper seeks to provide a foundation for planning a foreign policy strategy that responds effectively to the emerging power of AI in international affairs. The developments in AI are so dynamic and the implications so wide-ranging that ministries need to begin engaging immediately. That means starting with the assets and resources at hand while planning for more significant changes in the future. Many of the tools of traditional diplomacy can be adapted to this new field. While the existing toolkit can get us started, this pragmatic approach does not preclude thinking about more drastic changes that the technological changes might require for our foreign policy institutions and instruments.

The paper approaches this challenge by drawing on the existing foreign policy toolbox and reflecting on past lessons from adapting that toolbox to the Internet revolution. It goes on to suggest how the tools could be applied to the international challenges that the AI revolution will bring about. The toolbox includes policy making, public diplomacy, bilateral and multilateral engagement, action through international and treaty organizations, convenings and partnerships, grant-making, and information-gathering and analysis. The analysis of the international challenges of the AI transformation is divided into three topical areas, and each of the three sections includes concrete suggestions on how instruments from the toolbox could be applied to address the challenges AI will bring about in international affairs….(More)“.

Artificial intelligence and privacy


Report by the Norwegian Data Protection Authority (DPA): “…If people cannot trust that information about them is being handled properly, it may limit their willingness to share information – for example with their doctor, or on social media. If we find ourselves in a situation in which sections of the population refuse to share information because they feel that their personal integrity is being violated, we will be faced with major challenges to our freedom of speech and to people’s trust in the authorities.

A refusal to share personal information will also represent a considerable challenge with regard to the commercial use of such data in sectors such as the media, retail trade and finance services.

About the report

This report elaborates on the legal opinions and the technologies described in the 2014 report «Big Data – privacy principles under pressure». In this report we will provide greater technical detail in describing artificial intelligence (AI), while also taking a closer look at four relevant AI challenges associated with the data protection principles embodied in the GDPR:

  • Fairness and discrimination
  • Purpose limitation
  • Data minimisation
  • Transparency and the right to information

This represents a selection of the data protection concerns that, in our opinion, are of most relevance for the use of AI today.

The target group for this report consists of people who work with, or who for other reasons are interested in, artificial intelligence. We hope that engineers, social scientists, lawyers and other specialists will find this report useful….(More) (Download Report)”.

Earth Observation Open Science and Innovation


Open Access book edited by Pierre-Philippe Mathieu and Christoph Aubrecht: “Over the past decades, rapid developments in digital and sensing technologies, such as the Cloud, Web and Internet of Things, have dramatically changed the way we live and work. The digital transformation is revolutionizing our ability to monitor our planet and transforming the way we access, process and exploit Earth Observation data from satellites.

This book reviews these megatrends and their implications for the Earth Observation community as well as the wider data economy. It provides insight into new paradigms of Open Science and Innovation applied to space data, which are characterized by openness, access to large volumes of complex data, wide availability of new community tools, new techniques for big data analytics such as Artificial Intelligence, unprecedented levels of computing power, and new types of collaboration among researchers, innovators, entrepreneurs and citizen scientists. In addition, the book aims to provide readers with some reflections on the future of Earth Observation, highlighting through a series of use cases not just the new opportunities created by the New Space revolution, but also the new challenges that must be addressed in order to make the most of the large volume of complex and diverse data delivered by the new generation of satellites….(More)”.

How AI Could Help the Public Sector


Emma Martinho-Truswell in the Harvard Business Review: “A public school teacher grading papers faster is a small example of the wide-ranging benefits that artificial intelligence could bring to the public sector. AI could be used to make government agencies more efficient, to improve the job satisfaction of public servants, and to increase the quality of services offered. Talent and motivation are wasted on routine tasks when they could be devoted to more creative ones.

Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world. In addition to education, public servants are using AI to help them make welfare payments and immigration decisions, detect fraud, plan new infrastructure projects, answer citizen queries, adjudicate bail hearings, triage health care cases, and establish drone paths. The decisions we are making now will shape the impact of artificial intelligence on these and other government functions. Which tasks will be handed over to machines? And how should governments spend the labor time saved by artificial intelligence?

So far, the most promising applications of artificial intelligence use machine learning, in which a computer program learns and improves its own answers to a question by creating and iterating algorithms from a collection of data. This data is often in enormous quantities and from many sources, and a machine learning algorithm can find new connections among data that humans might not have expected. IBM’s Watson, for example, is a treatment recommendation-bot, sometimes finding treatments that human doctors might not have considered or known about.
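The learning loop described above, in which a program improves its own answer by iterating over a collection of data, can be sketched in a few lines. The toy below fits a one-parameter model by gradient descent; the rule y = 3x and all the numbers are invented for illustration and stand in for no real system:

```python
# Toy illustration of machine learning as iterative improvement:
# the program refines its answer to "what multiplier relates x to y?"
# by repeatedly adjusting a parameter against the data.

def train(data, steps=200, lr=0.01):
    """Fit y ~ w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        # Average gradient of (w*x - y)^2 with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Synthetic data generated by the hidden rule y = 3x.
data = [(x, 3 * x) for x in range(1, 6)]
print(round(train(data), 3))  # converges toward 3.0
```

The same adjust-against-the-data loop, scaled up to millions of parameters and examples, is what lets larger systems find connections humans might not expect.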

A machine learning program may be better, cheaper, faster, or more accurate than humans at tasks that involve lots of data, complicated calculations, or repetitive tasks with clear rules. Those in public service, and in many other big organizations, may recognize part of their job in that description. The very fact that government workers are often following a set of rules — a policy or set of procedures — already presents many opportunities for automation.

To be useful, a machine learning program does not need to be better than a human in every case. In my work, we expect that much of the “low hanging fruit” of government use of machine learning will be as a first line of analysis or decision-making. Human judgment will then be critical to interpret results, manage harder cases, or hear appeals.

When the work of public servants can be done in less time, a government might reduce its staff numbers and return the money saved to taxpayers — and I am sure that some governments will pursue that option. But it’s not necessarily the one I would recommend. Governments could instead choose to invest in the quality of their services. They can redirect workers’ time toward more rewarding work that requires lateral thinking, empathy, and creativity — all things at which humans continue to outperform even the most sophisticated AI program….(More)”.

After Big Data: The Coming Age of “Big Indicators”


Andrew Zolli at the Stanford Social Innovation Review: “Consider, for a moment, some of the most pernicious challenges facing humanity today: the increasing prevalence of natural disasters; the systemic overfishing of the world’s oceans; the clear-cutting of primeval forests; the maddening persistence of poverty; and above all, the accelerating effects of global climate change.

Each item in this dark litany inflicts suffering on the world in its own, awful way. Yet as a group, they share some common characteristics. Each problem is messy, with lots of moving parts. Each is riddled with perverse incentives, which can lead local actors to behave in a way that is not in the common interest. Each is opaque, with dynamics that are only partially understood, even by experts; each can, as a result, often be made worse by seemingly rational and well-intentioned interventions. When things do go wrong, each has consequences that diverge dramatically from our day-to-day experiences, making their full effects hard to imagine, predict, and rehearse. And each is global in scale, raising questions about who has the legal obligation to act—and creating incentives for leaders to disavow responsibility (and sometimes even question the legitimacy of the problem itself).

With dynamics like these, it’s little wonder systems theorists label these kinds of problems “wicked” or even “super wicked.” It’s even less surprising that these challenges remain, by and large, externalities to the global system—inadequately measured, perennially underinvested in, and poorly accounted for—until their consequences spill disastrously and expensively into view.

For real progress to occur, we’ve got to move these externalities into the global system, so that we can fully assess their costs, and so that we can sufficiently incentivize and reward stakeholders for addressing them and penalize them if they don’t. And that’s going to require a revolution in measurement, reporting, and financial instrumentation—the mechanisms by which we connect global problems with the resources required to address them at scale.

Thankfully, just such a revolution is under way.

It’s a complex story with several moving parts, but it begins with important new technical developments in three critical areas of technology: remote sensing and big data, artificial intelligence, and cloud computing.

Remote sensing and big data allow us to collect unprecedented streams of observations about our planet and our impacts upon it, and dramatic advances in AI enable us to extract the deeper meaning and patterns contained in those vast data streams. The rise of the cloud empowers anyone with an Internet connection to access and interact with these insights, at a fraction of the traditional cost.

In the years to come, these technologies will shift much of the current conversation focused on big data to one focused on “big indicators”—highly detailed, continuously produced, global indicators that track change in the health of the Earth’s most important systems, in real time. Big indicators will form an important mechanism for guiding human action, allow us to track the impact of our collective actions and interventions as never before, enable better and more timely decisions, transform reporting, and empower new kinds of policy and financing instruments. In short, they will reshape how we tackle a number of global problems, and everyone—especially nonprofits, NGOs, and actors within the social and environmental sectors—will play a role in shaping and using them….(More)”.

Improving refugee integration through data-driven algorithmic assignment


Kirk Bansak et al. in Science Magazine: “Developed democracies are settling an increased number of refugees, many of whom face challenges integrating into host societies. We developed a flexible data-driven algorithm that assigns refugees across resettlement locations to improve integration outcomes. The algorithm uses a combination of supervised machine learning and optimal matching to discover and leverage synergies between refugee characteristics and resettlement sites.

The algorithm was tested on historical registry data from two countries with different assignment regimes and refugee populations, the United States and Switzerland. Our approach led to gains of roughly 40 to 70%, on average, in refugees’ employment outcomes relative to current assignment practices. This approach can provide governments with a practical and cost-efficient policy tool that can be immediately implemented within existing institutional structures….(More)”.
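The two-stage idea reported here, predicting an outcome for every refugee-location pair and then matching to maximize the total, can be sketched in miniature. The names and scores below are invented stand-ins for a learned model's predictions, and the brute-force search works only at toy scale (a real system would need capacity constraints and a scalable assignment solver such as the Hungarian algorithm):

```python
from itertools import permutations

# Stage 1 stand-in: made-up predicted employment probabilities for
# each refugee-location pair (a real system would learn these from
# historical registry data).
predicted = {
    "A": {"Utica": 0.62, "Boise": 0.35, "Omaha": 0.50},
    "B": {"Utica": 0.40, "Boise": 0.58, "Omaha": 0.45},
    "C": {"Utica": 0.30, "Boise": 0.33, "Omaha": 0.55},
}

def optimal_assignment(predicted):
    """Stage 2: brute-force the one-to-one assignment that maximizes
    total predicted outcome (feasible only for tiny inputs)."""
    refugees = list(predicted)
    locations = list(predicted[refugees[0]])
    best = max(
        permutations(locations),
        key=lambda locs: sum(predicted[r][l] for r, l in zip(refugees, locs)),
    )
    return dict(zip(refugees, best))

print(optimal_assignment(predicted))
# → {'A': 'Utica', 'B': 'Boise', 'C': 'Omaha'}
```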

Extracting crowd intelligence from pervasive and social big data


Introduction by Leye Wang, Vincent Gauthier, Guanling Chen and Luis Moreira-Matias to a Special Issue of the Journal of Ambient Intelligence and Humanized Computing: “With the prevalence of ubiquitous computing devices (smartphones, wearable devices, etc.) and social network services (Facebook, Twitter, etc.), humans are continuously generating massive digital traces in their daily lives. Considering the invaluable crowd intelligence residing in these pervasive and social big data, a spectrum of opportunities is emerging to enable promising smart applications for easing individual lives, increasing company profits, and facilitating city development. However, the nature of big data also poses fundamental challenges to the techniques and applications that rely on pervasive and social big data, from perspectives such as algorithm effectiveness, computation speed, energy efficiency, user privacy, server security, data heterogeneity and system scalability. This special issue presents state-of-the-art research achievements in addressing these challenges. After a rigorous review process by reviewers and guest editors, eight papers were accepted, as follows.

The first paper “Automated recognition of hypertension through overnight continuous HRV monitoring” by Ni et al. proposes a non-invasive way to differentiate hypertension patients from healthy people using pervasive sensors such as a waist belt. To this end, the authors train a machine learning model on heart rate data sensed from waist belts worn by a crowd of people, and the experiments show a detection accuracy of around 93%.

The second paper “The workforce analyzer: group discovery among LinkedIn public profiles” by Dai et al. describes two methods for discovering user groups among LinkedIn public profiles, one based on K-means and the other on SVM. The authors contrast the results of both methods and provide insights into the trending professional orientations of the workforce from an online perspective.
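K-means, the first of the two methods, can be shown in miniature. The points below are made-up two-dimensional stand-ins for profile features, and the sketch omits everything a real pipeline needs (feature extraction, smarter initialization, choosing k):

```python
# Bare-bones k-means: alternate between assigning each point to its
# nearest center and moving each center to the mean of its cluster.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(points, k, iters=10):
    centers = points[:k]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist2(p, centers[j]))
            clusters[nearest].append(p)
        # Recompute centers; keep the old center if a cluster is empty.
        centers = [mean(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return clusters

# Two clearly separated made-up "profile feature" groups.
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
print(sorted(len(g) for g in kmeans(points, k=2)))  # → [3, 3]
```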

The third paper “Tweet and followee personalized recommendations based on knowledge graphs” by Pla Karidi et al. presents an efficient semantic recommendation method that helps users filter the Twitter stream for interesting content. The foundation of this method is a knowledge graph that can represent all user topics of interest as a variety of concepts, objects, events, persons, entities, locations and the relations between them. An important advantage of the authors’ method is that it reduces the effects of problems such as over-recommendation and over-specialization.

The fourth paper “CrowdTravel: scenic spot profiling by using heterogeneous crowdsourced data” by Guo et al. proposes CrowdTravel, a multi-source social media data fusion approach for multi-aspect tourism information perception, which can provide travelling assistance for tourists by crowd intelligence mining. Experiments over a dataset of several popular scenic spots in Beijing and Xi’an, China, indicate that the authors’ approach attains fine-grained characterization for the scenic spots and delivers excellent performance.

The fifth paper “Internet of Things based activity surveillance of defence personnel” by Bhatia et al. presents a comprehensive IoT-based framework for analyzing the national integrity of defence personnel in light of their daily activities. Specifically, an Integrity Index Value is defined for each member of the defence personnel, based on different social engagements and activities, to detect vulnerability to national security. In addition, a probabilistic decision-tree-based automated decision-making procedure is presented to aid defence officials in analyzing the various activities of defence personnel for integrity assessment.

The sixth paper “Recommending property with short days-on-market for estate agency” by Mou et al. proposes an appraisal framework for estates with short days-on-market, which automatically recommends such estates using transaction data and profile information crawled from websites. Both the spatial and temporal characteristics of an estate are integrated into the framework. The results show that the proposed framework accurately estimates about 78% of estates.

The seventh paper “An anonymous data reporting strategy with ensuring incentives for mobile crowd-sensing” by Li et al. proposes a system and a strategy that ensure anonymous data reporting while preserving incentives. The proposed protocol is arranged in five stages that mainly leverage three concepts: (1) slot reservation based on shuffle, (2) data submission based on bulk transfer and multi-player DC-nets, and (3) an incentive mechanism based on blind signatures.

The last paper “Semantic place prediction from crowd-sensed mobile phone data” by Celik et al. semantically classifies places visited by smartphone users with machine learning algorithms, using data collected from the sensors and wireless interfaces available on the phones as well as phone usage patterns, such as battery level and time-related information. For this study, the authors collected data from 15 participants at Galatasaray University for one month and tried different classification algorithms, such as decision trees, random forests, k-nearest neighbours, naive Bayes, and multi-layer perceptrons….(More)”.
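As a miniature of this kind of sensor-based place classification, the sketch below labels a visit with a 1-nearest-neighbour rule over two invented features, battery fraction and hour of day; the study's real feature set, data and algorithms are of course much richer:

```python
# Toy 1-nearest-neighbour place classifier over invented
# (battery_fraction, hour_of_day / 24) feature vectors.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, query):
    """Return the label of the training example nearest to query."""
    features, label = min(examples, key=lambda ex: dist2(ex[0], query))
    return label

examples = [
    ((0.9, 9 / 24), "work"),
    ((0.8, 14 / 24), "work"),
    ((0.3, 22 / 24), "home"),
    ((0.5, 23 / 24), "home"),
]
print(predict(examples, (0.4, 21 / 24)))  # → home
```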

The Potential for Human-Computer Interaction and Behavioral Science


Article by Kweku Opoku-Agyemang as part of a special issue by Behavioral Scientist on “Connected State of Mind,” which explores the impact of tech use on our behavior and relationships (complete issue here):

A few days ago, one of my best friends texted me a joke. It was funny, so a few seconds later I replied with the “laughing-while-crying emoji.” A little yellow smiley face with tear drops perched on its eyes captured exactly what I wanted to convey to my friend. No words needed. If this exchange happened ten years ago, we would have emailed each other. Two decades ago, snail mail.

As more of our interactions and experiences are mediated by screens and technology, the way we relate to one another and our world is changing. Posting your favorite emoji may seem superficial, but such reflexes are becoming critical for understanding humanity in the 21st century.

Seemingly ubiquitous computer interfaces—on our phones and laptops, not to mention our cars, coffee makers, thermostats, and washing machines—are blurring the lines between our connected and our unconnected selves. And it’s these relationships, between users and their computers, which define the field of human–computer interaction (HCI). HCI is based on the following premise: The more we understand about human behavior, the better we can design computer interfaces that suit people’s needs.

For instance, HCI researchers are designing tactile emoticons embedded in the Braille system for individuals with visual impairments. They’re also creating smartphones that can almost read your mind—predicting when and where your finger is about to touch them next.

Understanding human behavior is essential for designing human-computer interfaces. But there’s more to it than that: Understanding how people interact with computer interfaces can help us understand human behavior in general.

One of the insights that propelled behavioral science into the DNA of so many disciplines was the idea that we are not fully rational: We procrastinate, forget, break our promises, and change our minds. What most behavioral scientists might not realize is that as they transcended rationality, rational models found a new home in artificial intelligence. Much of A.I. is based on the familiar rational theories that dominated the field of economics prior to the rise of behavioral economics. However, one way to better understand how to apply A.I. in high-stakes scenarios, like self-driving cars, may be to embrace ways of thinking that are less rational.

It’s time for information and computer science to join forces with behavioral science. The mere presence of a camera phone can alter our cognition even when it is switched off, so if we ignore HCI in behavioral research in a world of constant clicks, avatars, emojis, and now animojis, we limit our understanding of human behavior.

Below I’ve outlined three very different cases that would benefit from HCI researchers and behavioral scientists working together: technology in the developing world, video games and the labor market, and online trolling and bullying….(More)”.

Advanced Design for the Public Sector


Essay by Kristofer Kelly-Frere & Jonathan Veale: “…It might surprise some, but it is now common for governments across Canada to employ in-house designers to work on very complex and public issues.

There are design teams giving shape to experiences, services, processes, programs, infrastructure and policies. The Alberta CoLab, the Ontario Digital Service, BC’s Government Digital Experience Division, the Canadian Digital Service, Calgary’s Civic Innovation YYC, and, in partnership with government, the MaRS Solutions Lab stand out. The Government of Nova Scotia recently launched the NS CoLab. There are many, many more. Perhaps hundreds.

Design-thinking. Service Design. Systemic Design. Strategic Design. They are part of the same story, connected by their ability to focus and shape a transformation of some kind. Each is an advanced form of design oriented directly at humanizing legacy systems — massive services built by a culture that increasingly appears out of sorts with our world. We don’t need a new design pantheon; we need a unifying force.

We have no shortage of systems that require reform. And no shortage of challenges. Among them: the inability to assemble a common understanding of the problems in the first place, and then a lack of agency over these unwieldy systems. We have fanatics and nativists who believe in simple, regressive and violent solutions. We have a social economy that elevates these marginal voices. We have vested interests who benefit from maintaining the status quo and who lack actionable migration paths to new models. The median public may no longer see themselves in liberal democracy. Populism and dogmatism are rampant. The government, in some spheres, is not credible or trusted.

The traditional designer’s niche is narrowing at the same time that government itself is becoming fragile. It is already cliché to point out that private wealth and resources allow broad segments of the population to “opt out.” This is quite apparent at the municipal level, where privatized sources of security, water, fire protection and even sidewalks effectively produce private shadow governments. Scaling up, the wealthiest may simply purchase residency or citizenship or invest in emerging nation states. Without re-invention, this erosion will continue. At the same time, artificial intelligence, machine learning and automation are already displacing frontline design and creative work. This is the opportunity: building systems awareness and agency on the foundations of craft and empathy that are core to human-centered design. Time is of the essence. Transitions from one era to the next are historically tumultuous. Moreover, these changes proceed faster than expected and in unexpected directions….(More).