The Moral Machine experiment


Jean-François Bonnefon, Iyad Rahwan et al. in Nature: “With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles.

This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available….(More)”.

The State of Open Data 2018


“Figshare’s annual report, The State of Open Data 2018, looks at global attitudes towards open data. It includes survey results of researchers and a collection of articles from industry experts, as well as a foreword from Ross Wilkinson, Director, Global Strategy at Australian Research Data Commons. The report is the third in the series, and the survey results continue to show encouraging progress: open data is becoming more embedded in the research community, with 64% of survey respondents revealing that they made their data openly available in 2018. However, a surprising number of respondents (60%) had never heard of the FAIR principles, a set of guidelines designed to enhance the reusability of academic data….(More)”.

The biggest pandemic risk? Viral misinformation


Heidi J. Larson at Nature: “A hundred years ago this month, the death rate from the 1918 influenza was at its peak. An estimated 500 million people were infected over the course of the pandemic; between 50 million and 100 million died, around 3% of the global population at the time.

A century on, advances in vaccines have made massive outbreaks of flu — and measles, rubella, diphtheria and polio — rare. But people still discount their risks of disease. Few realize that flu and its complications caused an estimated 80,000 deaths in the United States alone this past winter, mainly in the elderly and infirm. Of the 183 children whose deaths were confirmed as flu-related, 80% had not been vaccinated that season, according to the US Centers for Disease Control and Prevention.

I predict that the next major outbreak — whether of a highly fatal strain of influenza or something else — will not be due to a lack of preventive technologies. Instead, emotional contagion, digitally enabled, could erode trust in vaccines so much as to render them moot. The deluge of conflicting information, misinformation and manipulated information on social media should be recognized as a global public-health threat.

So, what is to be done? The Vaccine Confidence Project, which I direct, works to detect early signals of rumours and scares about vaccines, and so to address them before they snowball. The international team comprises experts in anthropology, epidemiology, statistics, political science and more. We monitor news and social media, and we survey attitudes. We have also developed a Vaccine Confidence Index, similar to a consumer-confidence index, to track attitudes.

Emotions around vaccines are volatile, making vigilance and monitoring crucial for effective public outreach. In 2016, our project identified Europe as the region with the highest scepticism around vaccine safety (H. J. Larson et al. EBioMedicine 12, 295–301; 2016). The European Union commissioned us to re-run the survey this summer; results will be released this month. In the Philippines, confidence in vaccine safety dropped from 82% in 2015 to 21% in 2018 (H. J. Larson et al. Hum. Vaccines Immunother. https://doi.org/10.1080/21645515.2018.1522468; 2018), after legitimate concerns arose about new dengue vaccines. Immunization rates for established vaccines for tetanus, polio and more also plummeted.

We have found that it is useful to categorize misinformation into several levels….(More)”.

Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security


Paper by Robert Chesney and Danielle Keats Citron: “Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection.

Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors. While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.

Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions….(More)”.

Babbage among the insurers: big 19th-century data and the public interest.


D. C. S. Wilson at History of the Human Sciences: “This article examines life assurance and the politics of ‘big data’ in mid-19th-century Britain. The datasets generated by life assurance companies were vast archives of information about human longevity. Actuaries distilled these archives into mortality tables – immensely valuable tools for predicting mortality and so pricing risk. The status of the mortality table was ambiguous, being both a public and a private object: often computed from company records, they could also be extrapolated from quasi-public projects such as the Census or clerical records. Life assurance more generally straddled the line between private enterprise and collective endeavour, though its advocates stressed the public interest in its success. Reforming actuaries such as Thomas Rowe Edmonds wanted the data on which mortality tables were based to be made publicly available, but faced resistance. Such resistance undermined insurers’ claims to be scientific in spirit and hindered Edmonds’s personal quest for a law of mortality. Edmonds pushed instead for an open actuarial science alongside fellow-travellers at the Statistical Society of London, which was populated by statisticians such as William Farr (whose subsequent work, it is argued, was influenced by Edmonds) as well as by radical mathematicians such as Charles Babbage. The article explores Babbage’s little-known foray into the world of insurance, both as a budding actuary and as a fierce critic of the industry. These debates over the construction, ownership, and accessibility of insurance datasets show that concern about the politics of big data did not begin in the 21st century….(More)”.
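To make the actuarial use of such tables concrete, the sketch below shows how a mortality table can be turned into the net single premium for a simple one-year term assurance. The mortality rates and interest rate are invented for illustration and are not drawn from any 19th-century table discussed in the article.

```python
# Illustrative sketch: pricing a one-year term assurance from a mortality table.
# The q_x values and interest rate below are invented, not historical figures.

mortality_table = {
    40: 0.0045,  # q_x: probability that a life aged x dies within the year
    41: 0.0050,
    42: 0.0056,
}

def net_single_premium(age, sum_assured, interest_rate):
    """Expected discounted payout for a one-year term assurance."""
    q_x = mortality_table[age]
    discount = 1 / (1 + interest_rate)
    return sum_assured * q_x * discount

# A 100-pound benefit for a life aged 40, discounted at 3% per annum.
print(round(net_single_premium(40, 100, 0.03), 4))
```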

Privacy and Synthetic Datasets


Paper by Steven M. Bellovin, Preetam K. Dutta and Nathan Reitinger: “Sharing is a virtue, instilled in us from childhood. Unfortunately, when it comes to big data — i.e., databases possessing the potential to usher in a whole new world of scientific progress — the legal landscape prefers a hoggish motif. The historic approach to the resulting database–privacy problem has been anonymization, a subtractive technique incurring not only poor privacy results, but also lackluster utility. In anonymization’s stead, differential privacy arose; it provides better, near-perfect privacy, but is nonetheless subtractive in terms of utility.
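As a concrete illustration of the differential-privacy approach mentioned above, here is a minimal sketch of the Laplace mechanism applied to a counting query: the true answer is perturbed with noise calibrated to the query's sensitivity. The records, predicate and epsilon value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(data, predicate, epsilon):
    """Differentially private count: true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise is drawn from
    Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative records: (age, has_condition)
records = [(34, True), (51, False), (47, True), (29, False), (62, True)]
print(laplace_count(records, lambda r: r[1], epsilon=0.5))
```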

Today, another solution is coming to the fore: synthetic data. Using the magic of machine learning, synthetic data offers a generative, additive approach — the creation of almost-but-not-quite replica data. In fact, as we recommend, synthetic data may be combined with differential privacy to achieve a best-of-both-worlds scenario. After unpacking the technical nuances of synthetic data, we analyze its legal implications, finding both over- and under-inclusive applications. Privacy statutes either overweigh or downplay the potential for synthetic data to leak secrets, inviting ambiguity. We conclude by finding that synthetic data is a valid, privacy-conscious alternative to raw data, but is not a cure-all for every situation. In the end, computer science progress must be met with proper policy in order to move the area of useful data dissemination forward….(More)”.
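One simple way to picture the combination the authors recommend (synthetic data generated under differential privacy) is to fit a noisy histogram to the raw data and then sample synthetic records from it. The sketch below illustrates that idea with invented data and parameters; it is not the authors' implementation, and real systems use far richer generative models.

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_synthetic_sample(values, bins, epsilon, n_synthetic):
    """Sample synthetic values from a differentially private histogram.

    1. Build a histogram of the real values (each record affects one bin by 1).
    2. Add Laplace(1/epsilon) noise to each bin count and clip at zero.
    3. Normalize to a probability distribution and sample new records.
    """
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(0.0, 1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0, None)
    probs = noisy / noisy.sum()
    chosen_bins = rng.choice(len(probs), size=n_synthetic, p=probs)
    # Draw uniformly within each chosen bin to produce continuous values.
    return rng.uniform(edges[chosen_bins], edges[chosen_bins + 1])

real_incomes = rng.normal(50_000, 12_000, size=1_000)  # illustrative raw data
synthetic = dp_synthetic_sample(real_incomes, bins=20, epsilon=1.0, n_synthetic=1_000)
print(synthetic[:5].round(0))
```

The synthetic records preserve the broad shape of the original distribution while the noise limits what any single real record can reveal, which is the trade-off the paper examines.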

Wearable device data and AI can reduce health care costs and paperwork


Darrell West at Brookings: “Though digital technology has transformed nearly every corner of the economy in recent years, the health care industry seems stubbornly immune to these trends. That may soon change if more wearable devices record medical information that physicians can use to diagnose and treat illnesses at earlier stages. Last month, Apple announced that an FDA-approved electrocardiograph (EKG) will be included in the latest generation Apple Watch to check the heart’s electrical activity for signs of arrhythmia. However, the availability of this data does not guarantee that health care providers are currently equipped to process all of it. To cope with growing amounts of medical data from wearable devices, health care providers may need to adopt artificial intelligence that can identify data trends and spot any deviations that indicate illness. Greater medical data, accompanied by artificial intelligence to analyze it, could expand the capabilities of human health care providers and offer better outcomes at lower costs for patients….

By 2016, American health care spending had already ballooned to 17.9 percent of GDP. The rise in spending saw a parallel rise in health care employment. Patients still need doctors, nurses, and health aides to administer care, yet these health care professionals might not yet be able to make sense of the massive quantities of data coming from wearable devices. Doctors already spend much of their time filling out paperwork, which leaves less time to interact with patients. The opportunity may arise for artificial intelligence to analyze the coming flood of data from wearable devices. Tracking small changes as they happen could make a large difference in diagnosis and treatment: AI could detect abnormal heartbeat, respiration, or other signs that indicate worsening health. Catching symptoms before they worsen may be key to improving health outcomes and lowering costs….(More)”.
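As a rough illustration of the kind of monitoring described above, the sketch below flags heart-rate readings that deviate sharply from a rolling baseline, the sort of small-change tracking an AI triage layer might perform before a clinician reviews the result. The data, window size and threshold are invented for illustration; this is not a clinical tool and is not drawn from the article.

```python
import numpy as np

def flag_anomalies(heart_rates, window=30, z_threshold=3.0):
    """Return indices of readings whose z-score against the preceding
    `window` samples exceeds `z_threshold`. Illustrative only."""
    flagged = []
    for i in range(window, len(heart_rates)):
        baseline = heart_rates[i - window:i]
        mean, std = np.mean(baseline), np.std(baseline)
        if std > 0 and abs(heart_rates[i] - mean) / std > z_threshold:
            flagged.append(i)
    return flagged

# Simulated resting heart rate (beats per minute) with one abnormal spike.
rng = np.random.default_rng(2)
readings = rng.normal(70, 3, size=200)
readings[150] = 135  # simulated arrhythmic episode
print(flag_anomalies(readings))
```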

Challenges facing social media platforms in conflict prevention in Kenya since 2007: A case of Ushahidi platform


Paper by A.K. Njeru, B. Malakwen and M. Lumala in the International Academic Journal of Social Sciences and Education: “Throughout history, information has been a key factor in conflict management around the world. The media can play its important role of being society’s watchdog by exposing to the masses what is essential but hidden; however, the same media may also be used to mobilize the masses to violence. Social media can therefore act as a tool for widening the democratic space, but can also lead to the destabilization of peace.

The aim of the study was to establish the challenges facing social media platforms in conflict prevention in Kenya since 2007, taking the Ushahidi platform as a case. The paradigm found suitable for this study was pragmatism, and the study used a mixed approach. Interviews, focus group discussions and content analysis of the Ushahidi platform were chosen as the tools of data collection. In order to bring order, structure and interpretation to the collected data, the researcher systematically organized the data by coding it into categories and constructing matrices. After classifying the data, the researcher compared and contrasted it with the information retrieved from the literature review.

The study found that one major weak point of social media as a tool for conflict prevention is the lack of ethical standards and professionalism among its users. It is too liberal and can thus be used to spread unverified information and distorted facts that might be detrimental to peace building and conflict prevention, which has led some users to question the credibility of information circulated through social media. The other weak point of social media as a tool for peace building is that it depends to a major extent on access to the internet. The availability of internet access in small units does not necessarily mean that access is cheap, so over time the high cost of internet access might affect the efficiency of social media as a tool. The study concluded that information credibility is essential if social media is to be an effective tool for conflict prevention and peace building.

The nature of social media, which allows users to remain anonymous, gives room for unverified information to circulate around social media networks; this can be detrimental to conflict prevention and peace building initiatives. There is therefore a need for information about violence, conflict prevention and peace building on social media platforms to be verified and authenticated by a trusted agent. The study recommends that the Ushahidi platform should be seen as an agent of social change and that attention should be paid to the social mobilization it may be able to bring about. The study further suggests that if the Ushahidi platform can be viewed as a development agent, this can be taken a step further by asking for, or trying to find, a methodology that treats the Ushahidi platform as a peacemaking agent, or as an aid to maintaining peace in a post-conflict setting, thereby tapping into the Ushahidi platform’s full potential….(More)”.

The Big Blockchain Lie


Nouriel Roubini at Project Syndicate: “Blockchain has been heralded as a potential panacea for everything from poverty and famine to cancer. In fact, it is the most overhyped – and least useful – technology in human history.

In practice, blockchain is nothing more than a glorified spreadsheet. But it has also become the byword for a libertarian ideology that treats all governments, central banks, traditional financial institutions, and real-world currencies as evil concentrations of power that must be destroyed. Blockchain fundamentalists’ ideal world is one in which all economic activity and human interactions are subject to anarchist or libertarian decentralization. They would like the entirety of social and political life to end up on public ledgers that are supposedly “permissionless” (accessible to everyone) and “trustless” (not reliant on a credible intermediary such as a bank).

Yet far from ushering in a utopia, blockchain has given rise to a familiar form of economic hell. A few self-serving white men (there are hardly any women or minorities in the blockchain universe) pretending to be messiahs for the world’s impoverished, marginalized, and unbanked masses claim to have created billions of dollars of wealth out of nothing. But one need only consider the massive centralization of power among cryptocurrency “miners,” exchanges, developers, and wealth holders to see that blockchain is not about decentralization and democracy; it is about greed….

As for blockchain itself, there is no institution under the sun – bank, corporation, non-governmental organization, or government agency – that would put its balance sheet or register of transactions, trades, and interactions with clients and suppliers on public decentralized peer-to-peer permissionless ledgers. There is no good reason why such proprietary and highly valuable information should be recorded publicly.

Moreover, in cases where distributed-ledger technologies – so-called enterprise DLT – are actually being used, they have nothing to do with blockchain. They are private, centralized, and recorded on just a few controlled ledgers. They require permission for access, which is granted to qualified individuals. And, perhaps most important, they are based on trusted authorities that have established their credibility over time. All of which is to say, these are “blockchains” in name only.

It is telling that all “decentralized” blockchains end up being centralized, permissioned databases when they are actually put into use. As such, blockchain has not even improved upon the standard electronic spreadsheet, which was invented in 1979.

No serious institution would ever allow its transactions to be verified by an anonymous cartel operating from the shadows of the world’s authoritarian kleptocracies. So it is no surprise that whenever “blockchain” has been piloted in a traditional setting, it has either been thrown in the trash bin or turned into a private permissioned database that is nothing more than an Excel spreadsheet or a database with a misleading name….(More)”.

Internet of Things for Smart Cities: Technologies, Big Data and Security


Book by Waleed Ejaz and Alagan Anpalagan: “This book introduces the concept of the smart city as a potential solution to the challenges created by urbanization. The Internet of Things (IoT) offers novel features with minimum human intervention in smart cities. The book describes the different components of the IoT for smart cities, including sensor technologies, communication technologies, big data analytics and security….(More)”.