How AI-Driven Insurance Could Reduce Gun Violence


Jason Pontin at WIRED: “As a political issue, guns have become part of America’s endless, arid culture wars, where Red and Blue tribes skirmish for political and cultural advantage. But what if there were a compromise? Economics and machine learning suggest an answer, potentially acceptable to Americans in both camps.

Economists sometimes talk about “negative externalities,” market failures where the full costs of transactions are borne by third parties. Pollution is an externality, because society bears the costs of environmental degradation. The 20th-century British economist Arthur Pigou, who formally described externalities, also proposed their solution: so-called “Pigovian taxes,” where governments charge producers or customers, reducing the quantity of the offending products and sometimes paying for ameliorative measures. Pigovian taxes have been used to fight cigarette smoking or improve air quality, and are the favorite prescription of economists for reducing greenhouse gases. But they don’t work perfectly, because it’s hard for governments to estimate the costs of externalities.
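Pigou's logic can be made concrete with a toy linear-demand market (all numbers below are hypothetical, chosen only to illustrate the mechanism): setting the tax equal to the per-unit external cost shrinks output toward the social optimum.

```python
# Illustrative sketch with hypothetical numbers: a Pigovian tax equal to the
# per-unit external cost moves the market from the private to the social optimum.

def market_quantity(intercept, slope, unit_cost, tax=0.0):
    """Quantity where price = unit_cost + tax, for linear demand P = intercept - slope * Q."""
    return max(0.0, (intercept - unit_cost - tax) / slope)

# Hypothetical market: demand P = 100 - 0.5*Q, production cost of 20 per unit,
# and an external (third-party) cost of 10 per unit the market price ignores.
q_private = market_quantity(100, 0.5, 20)           # 160 units: externality unpriced
q_social = market_quantity(100, 0.5, 20, tax=10)    # 140 units: tax internalizes the harm
```

The gap between the two quantities is exactly the overproduction Pigou's tax is designed to eliminate; in practice, as the paragraph above notes, the hard part is estimating that per-unit external cost.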

Gun violence is a negative externality too. The choices of millions of Americans to buy guns overflow into uncaptured costs for society in the form of crimes, suicides, murders, and mass shootings. A flat gun tax would be a blunt instrument: It could only reduce gun violence by raising the costs of gun ownership so high that almost no one could legally own a gun, which would swell the black market for guns and probably increase crime. But insurers are very good at estimating the risks and liabilities of individual choices; insurance could capture the externalities of gun violence in a smarter, more responsive fashion.

Here’s the proposed compromise: States should require gun owners to be licensed and pay insurance, just as car owners must be licensed and insured today….

The actuaries who research risk have always considered a wide variety of factors when helping insurers price the cost of a policy. Car, home, and life insurance can vary according to a policy holder’s age, health, criminal record, employment, residence, and many other variables. But in recent years, machine learning and data analytics have provided actuaries with new predictive powers. According to Yann LeCun, the director of AI research at Facebook and the primary inventor of convolutional neural networks, an important technique in deep learning, “Deep learning systems provide better statistical models with enough data. They can be advantageously applied to risk evaluation, and convolutional neural nets can be very good at prediction, because they can take into account a long window of past values.”

State Farm, Liberty Mutual, Allstate, and Progressive Insurance have all used algorithms to improve their predictive analysis and to more accurately distribute risk among their policy holders. For instance, in late 2015, Progressive created a telematics app called Snapshot that individual drivers used to collect information on their driving. In the subsequent two years, 14 billion miles of driving data were collected all over the country and analyzed on Progressive’s machine learning platform, H2O.ai, resulting in discounts of $600 million for its policy holders. On average, machine learning produced a $130 discount for Progressive customers.

When the financial writer John Wasik popularized gun insurance in a series of posts in Forbes in 2012 and 2013, the NRA’s argument about prior constraints was a reasonable objection. Wasik proposed charging different rates to different types of gun owners, but there were too many factors that would have to be tracked over too long a period to drive down costs for low-risk policy holders. Today, using deep learning, the idea is more practical: Insurers could measure the interaction of dozens or hundreds of factors, predicting the risks of gun ownership and controlling costs for low-risk gun owners. Riskier owners would pay more, and some very risky would-be gun owners might be unable to find insurance at all. Gun insurance could even be dynamically priced, changing as the conditions of policy holders’ lives changed and gun owners proved themselves better or worse risks.
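As a rough sketch of how such dynamic, multi-factor pricing might work (the factor names, weights, and loss figures below are invented for illustration, not drawn from any actual actuarial model), a logistic score over risk factors can be re-evaluated whenever a policy holder's circumstances change:

```python
import math

# Hypothetical sketch of dynamically priced insurance: a logistic model maps
# risk factors to a claim probability, and the premium scales with expected loss.
# All factor names, weights, and dollar figures are invented for illustration.

WEIGHTS = {"prior_violations": 1.2, "safe_storage": -0.8, "training_completed": -0.6}
BIAS = -2.0

def claim_probability(factors):
    """Logistic score over weighted risk factors, returning a value in (0, 1)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-z))

def annual_premium(factors, expected_loss=50_000, overhead=1.15):
    """Premium = probability-weighted expected loss plus an overhead loading."""
    return claim_probability(factors) * expected_loss * overhead

low_risk = {"prior_violations": 0, "safe_storage": 1, "training_completed": 1}
high_risk = {"prior_violations": 2, "safe_storage": 0, "training_completed": 0}
# Re-scoring the same policy holder as these factors change over time is what
# makes the pricing "dynamic".
```

A real deep learning model would replace the hand-set weights with parameters learned from claims data, and could capture interactions among hundreds of factors rather than a simple weighted sum.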

Requiring gun owners to buy insurance wouldn’t eliminate gun violence in America. But a political solution to the problem of gun violence is chimerical….(More)”.

Data-Driven Regulation and Governance in Smart Cities


Chapter by Sofia Ranchordas and Abram Klop in A. Berlee, V. Mak and E. Tjong Tjin Tai (Eds), Research Handbook on Data Science and Law (Edward Elgar, 2018): “This paper discusses the concept of data-driven regulation and governance in the context of smart cities by describing how these urban centres harness these technologies to collect and process information about citizens, traffic, urban planning or waste production. It describes how several smart cities throughout the world currently employ data science, big data, AI, Internet of Things (‘IoT’), and predictive analytics to improve the efficiency of their services and decision-making.

Furthermore, this paper analyses the legal challenges of employing these technologies to influence or determine the content of local regulation and governance. It explores three challenges in particular: the disconnect between traditional administrative law frameworks and data-driven regulation and governance; the effects of the privatization of public services and citizen needs due to the growing outsourcing of smart-city technologies to private companies; and the limited transparency and accountability that characterize data-driven administrative processes. This paper draws on a review of interdisciplinary literature on smart cities and offers illustrations of data-driven regulation and governance practices from different jurisdictions….(More)”.

Prediction, Judgment and Complexity


NBER Working Paper by Agrawal, Ajay and Gans, Joshua S. and Goldfarb, Avi: “We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction in decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs. We label this process of determining the payoffs ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depends a great deal on whether the improvements in prediction enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries….(More)”.

No One Owns Data


Paper by Lothar Determann: “Businesses, policy makers, and scholars are calling for property rights in data. They currently focus particularly on the vast amounts of data generated by connected cars, industrial machines, artificial intelligence, toys and other devices on the Internet of Things (IoT). This data is personal to numerous parties who are associated with a connected device, for example, the driver of a connected car, its owner and passengers, as well as other traffic participants. Manufacturers, dealers, independent providers of auto parts and services, insurance companies, law enforcement agencies and many others are also interested in this data. Various parties are actively staking their claims to data on the Internet of Things, as they are mining data, the fuel of the digital economy.

Stakeholders in digital markets often frame claims, negotiations and controversies regarding data access as one of ownership. Businesses regularly assert and demand that they own data. Individual data subjects also assume that they own data about themselves. Policy makers and scholars focus on how to redistribute ownership rights to data. Yet, upon closer review, it is very questionable whether data is—or should be—subject to any property rights. This article unambiguously answers the question in the negative, both with respect to existing law and future lawmaking, in the United States as in the European Union, jurisdictions with notably divergent attitudes to privacy, property and individual freedoms….

The article begins with a brief review of the current landscape of the Internet of Things and notes the explosive growth of data pools generated by connected devices, artificial intelligence, big data analytics tools and other information technologies. Part 1 lays the foundation for examining concrete current legal and policy challenges in the remainder of the article. Part 2 supplies conceptual differentiation and definitions with respect to “data” and “information” as the subject of rights and interests. Distinctions and definitional clarity serve as the basis for examining the purposes and reach of existing property laws in Part 3, including real property, personal property and intellectual property laws. Part 4 analyzes the effect of data-related laws that do not grant property rights. Part 5 examines how the interests of the various stakeholders are protected or impaired by the current framework of data-related laws to identify potential gaps that could warrant additional property rights. Part 6 examines policy considerations for and against property rights in data. Part 7 concludes that no one owns data and no one should own data….(More)”.

Quality of life, big data and the power of statistics


Paper by Shivam Gupta in Statistics & Probability Letters: “Quality of life (QoL) is tied to the perception of ‘meaning’. The quest for meaning is central to the human condition, and we are brought in touch with a sense of meaning when we reflect on what we have created, loved, believed in or left as a legacy (Barcaccia, 2013). QoL is associated with multi-dimensional issues and features such as environmental pressure, total water management, total waste management, noise and level of air pollution (Eusuf et al., 2014). A significant amount of data is needed to understand all these dimensions. Such knowledge is necessary to realize the vision of a smart city, which involves the use of data-driven approaches to improve the quality of life of the inhabitants and city infrastructures (Degbelo et al., 2016).

Technologies such as Radio-Frequency Identification (RFID) or the Internet of Things (IoT) are producing a large volume of data. Koh et al. (2015) pointed out that approximately 2.5 quintillion bytes of data are generated every day, and that 90 percent of the data in the world has been created in the past two years alone. Managing this large amount of data and analyzing it efficiently can help make more informed decisions while solving many societal challenges (e.g., exposure analysis, disaster preparedness, climate change). As discussed in Goodchild (2016), the attractiveness of big data can be summarized in one phrase, namely spatial prediction – the prediction of both the where and when.

This article focuses on the 5Vs of big data (volume, velocity, variety, value, veracity). The challenges associated with big data in the context of environmental monitoring at a city level are briefly presented in Section 2. Section 3 discusses the use of statistical methods like Land Use Regression (LUR) and Spatial Simulated Annealing (SSA) as two promising ways of addressing the challenges of big data….(More)”.
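A minimal illustration of the LUR idea, using synthetic numbers rather than real monitoring data: regress pollutant concentrations observed at monitoring sites on a land-use predictor such as traffic density, then use the fitted model to predict concentrations at unmonitored locations. (Real LUR models use many predictors, such as road networks, industry and population, and spatial validation.)

```python
# Minimal Land Use Regression (LUR) sketch with synthetic data: fit a simple
# least-squares line of pollutant concentration against a land-use predictor,
# then predict at a location with no monitor. All numbers are invented.

def fit_ols(xs, ys):
    """Closed-form simple least squares: returns (intercept, slope)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return mean_y - slope * mean_x, slope

# Synthetic monitoring sites: NO2 (micrograms per cubic metre) rises with
# traffic density (vehicles per hour, hypothetical units).
traffic = [10, 25, 40, 55, 70]
no2 = [14, 21, 30, 37, 46]

intercept, slope = fit_ols(traffic, no2)
predicted_no2 = intercept + slope * 60  # estimate at an unmonitored site
```

The appeal for big-data settings is that once fitted on sparse, expensive monitoring stations, the model can cheaply predict exposure everywhere a land-use covariate is available.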

Open data sharing and the Global South—Who benefits?


David Serwadda et al in Science: “A growing number of government agencies, funding organizations, and publishers are endorsing the call for increased data sharing, especially in biomedical research, many with an ultimate goal of open data. Open data is among the least restrictive forms of data sharing, in contrast to managed access mechanisms, which typically have terms of use and in some cases oversight by the data generators themselves. But despite an ethically sound rationale and growing support for open data sharing in many parts of the world, concerns remain, particularly among researchers in low- and middle-income countries (LMICs) in Africa, Latin America, and parts of Asia and the Middle East that comprise the Global South. Drawing on our perspective as researchers and ethicists working in the Global South, we see opportunities to improve community engagement, raise awareness, and build capacity, all toward improving research and data sharing involving researchers in LMICs…African scientists have expressed concern that open data compromises national ownership and reopens the gates for “parachute-research” (i.e., Northern researchers absconding with data to their home countries). Other LMIC researchers have articulated fears over free-riding scientists using the data collected by others for their own career advancement …(More)”

A primer on political bots: Part one


Stuart W. Shulman et al at Data Driven Journalism: “The rise of political bots brings into sharp focus the role of automated social media accounts in today’s democratic civil society. Events during the Brexit referendum and the 2016 U.S. Presidential election revealed the scale of this issue for the first time to the majority of citizens and policy-makers. At the same time, the deployment of Russian-linked bots designed to promote pro-gun laws in the aftermath of the Florida school shooting demonstrates the state-sponsored, real-time readiness to shape, through information warfare, the dominant narratives on platforms such as Twitter. The regular news reports on these issues lead us to conclude that the foundations of democracy have become threatened by the presence of aggressive and socially disruptive bots, which aim to manipulate online political discourse.

While there is clarity on the various functions that bot accounts can be scripted to perform, as described below, the task of accurately defining this phenomenon and identifying bot accounts remains a challenge. At Texifter, we have endeavoured to bring nuance to this issue through a research project which explores the presence of automated accounts on Twitter. Initially, this project concerned itself with an attempt to identify bots which participated in online conversations around the prevailing cryptocurrency phenomenon. This article is the first in a series of three blog posts produced by the researchers at Texifter that outlines the contemporary phenomenon of Twitter bots….

Bots in their current iteration have a relatively short, albeit rapidly evolving, history. Bots were initially constructed with non-malicious intentions, and it wasn’t until the late 1990s and the subsequent advent of Web 2.0 that they began to develop a more negative reputation. Although bots have been used maliciously in distributed denial-of-service (DDoS) attacks, spam emails, and mass identity theft, their purpose is not inherently to incite mayhem.

Before the most recent political events, bots existed in chat rooms, operated as automated customer service agents on websites, and were a mainstay on dating websites. This familiar form of the bot is known to the majority of the general population as a “chatbot” – for instance, CleverBot was and still is a popular platform to talk to an “AI”. Another prominent example was Microsoft’s failed Twitter chatbot Tay, which made headlines in 2016 when “her” vocabulary and conversation functions were manipulated by Twitter users until “she” espoused neo-Nazi views, after which “she” was deleted.

Image: XKCD Comic #632.

A Twitter bot is an account controlled by an algorithm or script, typically hosted on a cloud platform such as Heroku. Such bots are usually, though not exclusively, scripted to conduct repetitive tasks. For example, there are bots that retweet content containing particular keywords, reply to new followers, and send direct messages to new followers; they can also be used for more complex tasks such as participating in online conversations. Bot accounts make up between 9 and 15% of all active accounts on Twitter; however, it is predicted that they account for a much greater percentage of total Twitter traffic. Twitter bots are generally not created with malicious intent; they are frequently used for online chatting or for raising the professional profile of a corporation – but their ability to pervade our online experience and shape political discourse warrants heightened scrutiny….(More)”.
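To illustrate the kind of signals such identification efforts weigh (a toy heuristic with invented thresholds, not Texifter's actual classifier or any production detector):

```python
# Toy bot-likelihood heuristic with invented thresholds: score an account on
# a few signals commonly associated with automation. Not a real classifier.

def bot_score(account):
    """Return a 0-1 score; higher means more bot-like."""
    signals = [
        account["tweets_per_day"] > 100,                    # inhumanly high posting rate
        account["followers"] < account["following"] / 10,   # follow-spam pattern
        account["default_profile_image"],                   # unpersonalized account
        account["account_age_days"] < 30,                   # newly created
    ]
    return sum(signals) / len(signals)

human = {"tweets_per_day": 8, "followers": 300, "following": 280,
         "default_profile_image": False, "account_age_days": 1400}
suspect = {"tweets_per_day": 450, "followers": 12, "following": 2000,
           "default_profile_image": True, "account_age_days": 6}
```

The difficulty the paragraph describes is precisely that sophisticated bots are scripted to stay under each of these thresholds, which is why research projects like Texifter's look beyond simple per-account heuristics.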

Online Political Microtargeting: Promises and Threats for Democracy


Frederik Zuiderveen Borgesius et al in Utrecht Law Review: “Online political microtargeting involves monitoring people’s online behaviour, and using the collected data, sometimes enriched with other data, to show people targeted political advertisements. Online political microtargeting is widely used in the US; Europe may not be far behind.

This paper maps microtargeting’s promises and threats to democracy. For example, microtargeting promises to optimise the match between the electorate’s concerns and political campaigns, and to boost campaign engagement and political participation. But online microtargeting could also threaten democracy. For instance, a political party could, misleadingly, present itself as a different one-issue party to different individuals. And data collection for microtargeting raises privacy concerns. We sketch possibilities for policymakers if they seek to regulate online political microtargeting. We discuss which measures would be possible, while complying with the right to freedom of expression under the European Convention on Human Rights….(More)”.

The Rise of Virtual Citizenship


James Bridle in The Atlantic: “In Cyprus, Estonia, the United Arab Emirates, and elsewhere, passports can now be bought and sold….“If you believe you are a citizen of the world, you are a citizen of nowhere. You don’t understand what citizenship means,” the British prime minister, Theresa May, declared in October 2016. Not long after, at his first postelection rally, Donald Trump asserted, “There is no global anthem. No global currency. No certificate of global citizenship. We pledge allegiance to one flag and that flag is the American flag.” And in Hungary, Prime Minister Viktor Orbán has increased his national-conservative party’s popularity with statements like “all the terrorists are basically migrants” and “the best migrant is the migrant who does not come.”

Citizenship, and its varying legal definition, has become one of the key battlegrounds of the 21st century, as nations attempt to stake out their power in a G-Zero, globalized world, one increasingly defined by transnational, borderless trade and liquid, virtual finance. In a climate of pervasive nationalism, jingoism, xenophobia, and ever-building resentment toward those who move, it’s tempting to think that moving would become more difficult. But alongside the rise of populist, identitarian movements across the globe, identity itself is being virtualized, too. It no longer needs to be tied to place or nation to function in the global marketplace.

Hannah Arendt called citizenship “the right to have rights.” Like any other right, it can be bestowed and withheld by those in power, but in its newer forms it can also be bought, traded, and rewritten. Virtual citizenship is a commodity that can be acquired through the purchase of real estate or financial investments, subscribed to via an online service, or assembled by peer-to-peer digital networks. And as these options become available, they’re also used, like so many technologies, to exclude those who don’t fit in.

In a world that increasingly operates online, geography and physical infrastructure still remain crucial to control and management. Undersea fiber-optic cables trace the legacy of imperial trading routes. Google and Facebook erect data centers in Scandinavia and the Pacific Northwest, close to cheap hydroelectric power and natural cooling. The trade in citizenship itself often manifests locally as architecture. From luxury apartments in the Caribbean and the Mediterranean to data centers in Europe and refugee settlements in the Middle East, a scattered geography of buildings brings a different reality into focus: one in which political decisions and national laws transform physical space into virtual territory…(More)”.

How Blockchain can benefit migration programmes and migrants


Solon Ardittis at the Migration Data Portal: “According to a recent report published by CB Insights, there are today at least 36 major industries that are likely to benefit from the use of Blockchain technology, ranging from voting procedures, critical infrastructure security, education and healthcare, to car leasing, forecasting, real estate, energy management, government and public records, wills and inheritance, corporate governance and crowdfunding.

In the international aid sector, a number of experiments are currently being conducted to distribute aid funding through the use of Blockchain and thus to improve the tracing of the ways in which aid is disbursed. Among several other examples, the Start Network, which consists of 42 aid agencies across five continents, ranging from large international organizations to national NGOs, has launched a Blockchain-based project that enables the organization both to speed up the distribution of aid funding and to facilitate the tracing of every single payment, from the original donor to each individual assisted.

As Katherine Purvis of The Guardian noted, “Blockchain enthusiasts are hopeful it could be the next big development disruptor. In providing a transparent, instantaneous and indisputable record of transactions, its potential to remove corruption and provide transparency and accountability is one area of intrigue.”

In the field of international migration and refugee affairs, however, Blockchain technology is still in its infancy.

One of the few notable examples is the United Nations (UN) World Food Programme’s (WFP) launch in May 2017 of a project in the Azraq Refugee Camp in Jordan which, through the use of Blockchain technology, enables the creation of virtual accounts for refugees and the uploading of monthly entitlements that can be spent in the camp’s supermarket through the use of an authorization code. Reportedly, the programme has contributed to a 98% reduction in the bank costs entailed by the use of a financial service provider.
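The core idea behind such an entitlement ledger can be sketched in a few lines (a deliberately simplified illustration, not the WFP's actual system): each monthly entitlement is recorded in a block whose hash covers its predecessor, so any tampering with past records is detectable by anyone who re-verifies the chain.

```python
import hashlib
import json

# Simplified sketch of a hash-chained entitlement ledger (not the WFP's actual
# system): each block's hash covers the previous block's hash, so altering any
# past record breaks the chain and is detectable on verification.

def make_block(prev_hash, record):
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def append(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    chain.append(make_block(prev_hash, record))

def verify(chain):
    """Recompute every hash and check that each block links to its predecessor."""
    prev_hash = "genesis"
    for block in chain:
        payload = json.dumps({"prev": block["prev"], "record": block["record"]},
                             sort_keys=True)
        if (block["prev"] != prev_hash
                or block["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = block["hash"]
    return True

ledger = []
append(ledger, {"account": "refugee-001", "month": "2017-05", "entitlement": 35})
append(ledger, {"account": "refugee-001", "month": "2017-06", "entitlement": 35})
```

A production system adds distributed consensus and identity management on top of this tamper-evidence property; the sketch shows only why past disbursement records become hard to quietly rewrite.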

This is a noteworthy achievement considering that organizations working in international relief can lose up to 3.5% of each aid transaction to various fees and costs and that an estimated 30% of all development funds do not reach their intended recipients because of third-party theft or mismanagement.
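A back-of-the-envelope calculation with the rates above makes the stakes concrete (the $1 million disbursement is hypothetical, and treating the 3.5% fees and 30% leakage as applying to the same worst-case transfer is an assumption for illustration):

```python
# Back-of-the-envelope illustration: what reaches recipients from a hypothetical
# $1M disbursement, using the fee/leakage rates quoted in the text.

def delivered(amount, fee_rate, leakage_rate=0.0):
    """Funds remaining after transaction fees and third-party leakage."""
    return amount * (1 - fee_rate) * (1 - leakage_rate)

amount = 1_000_000
# Worst case in the text: 3.5% lost to fees, 30% lost to theft or mismanagement.
traditional = delivered(amount, fee_rate=0.035, leakage_rate=0.30)
# A 98% reduction in bank costs, as reported for the Azraq pilot.
blockchain = delivered(amount, fee_rate=0.035 * 0.02)
```

Under these assumptions roughly a third of the traditional disbursement never reaches recipients, while the blockchain scenario loses well under a tenth of a percent to fees, though it addresses leakage only insofar as every payment is traceable.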

At least six other UN agencies including the UN Office for Project Services (UNOPS), the UN Development Programme (UNDP), the UN Children’s Fund (UNICEF), UN Women, the UN High Commissioner for Refugees (UNHCR) and the UN Development Group (UNDG), are now considering Blockchain applications that could help support international assistance, particularly supply chain management tools, self-auditing of payments, identity management and data storage.

The potential of Blockchain technology in the field of migration and asylum affairs should therefore be fully explored.

At the European Union (EU) level, while a Blockchain task force has been established by the European Parliament to assess the ways in which the technology could be used to provide digital identities to refugees, and while the European Commission has recently launched a call for project proposals to examine the potential of Blockchain in a range of sectors, little focus has been placed so far on EU assistance in the field of migration and asylum, both within the EU and in third countries with which the EU has negotiated migration partnership agreements.

This is despite the fact that the use of Blockchain in a number of major programme interventions in the field of migration and asylum could help improve not only their cost-efficiency but also, at least as importantly, their degree of transparency and accountability. This matters all the more at a time when media and civil society organizations exercise increased scrutiny over the quality and ethical standards of such interventions.

In Europe, for example, Blockchain could help administer the EU Asylum, Migration and Integration Fund (AMIF), both in terms of transferring funds from the European Commission to the eligible NGOs in the Member States and in terms of project managers then reporting on spending. This would help alleviate many of the recurrent challenges faced by NGOs in managing funds in line with stringent EU regulations.

Just as crucially, Blockchain has the potential to increase transparency and accountability in the channeling and spending of EU funds in third countries, particularly under the Partnership Framework and other recent schemes to prevent irregular migration to Europe.

A case in point is the administration of EU aid in response to the refugee emergency in Greece where, reportedly, there continues to be insufficient oversight of the full range of commitments and outcomes of large EU-funded investments, particularly in the housing sector. Another example is the set of recent programme interventions in Libya, where a growing number of incidents of human rights abuses and financial mismanagement are being brought to light….(More)”.