Big Data and the Computable Society: Algorithms and People in the Digital World


Book by Domenico Talia: “Data and algorithms are changing our life. The awareness of the importance and pervasiveness of the digital revolution is the primary element from which to start a path of knowledge to grasp what is happening in the world of big data and digital innovation, and to understand its impacts on our minds, on relationships between people, and on the traceability and computability of the behavior of individuals and social organizations.

This book analyses contemporary and future issues related to big data, algorithms, data analysis, artificial intelligence and the internet. It introduces and discusses the relationships between digital technologies and power, the role of pervasive algorithms in our lives and the risk of technological alienation, the relationships between the use of big data, the privacy of citizens and the exercise of democracy, the techniques of artificial intelligence and their impact on the labor world, Industry 4.0 at the time of the Internet of Things, social media, open data and public innovation.

Each chapter raises a set of questions and answers to help the reader understand the key issues in the enormous maze that the tools of info-communication have built around us….(More)”.

Democracy as Failure


Paper by Aziz Z. Huq: “The theory and the practice of democracy alike are entangled with the prospect of failure. This is so in the sense that a failure of one kind or another is almost always to be found at democracy’s inception. Further, different kinds of shortfalls dog its implementation. No escape is found in theory, which precipitates internal contradictions that can only be resolved by compromising important democratic values. A stable democratic equilibrium proves elusive because of the tendency of discrete lapses to catalyze wider, systemic disruption. Worse, the very pervasiveness of local failure also obscures the tipping point at which systemic change occurs. Social coordination in defense of democracy is therefore very difficult, and its failure correspondingly more likely. This thicket of intimate entanglements has implications for both the proper description and normative analysis of democracy. At a minimum, the nexus of democracy and failure elucidates the difficulty of dichotomizing democracies into the healthy and the ailing. It illuminates the sound design of democratic institutions by gesturing toward resources usefully deployed to mitigate the costs of inevitable failure. Finally, it casts light on the public psychology best adapted to a persisting democracy. To grasp the proximity of democracy’s entanglements with failure is thus to temper the aspiration for popular self-government as a steady-state equilibrium, to open new questions about the appropriate political psychology for a sound democracy, and to limn new questions about democracy’s optimal institutional specification….(More)”.

Data Trusts, Health Data, and the Professionalization of Data Management


Paper by Keith Porcaro: “This paper explores how trusts can provide a legal model for professionalizing health data management. Data is potential. Over time, data collected for one purpose can support others. Clinical records at a hospital, created to manage a patient’s care, can be internally analyzed to identify opportunities for process and safety improvements at a hospital, or externally analyzed with other records to identify optimal treatment patterns. Data also carries the potential for harm. Personal data can be leaked or exposed. Proprietary models can be used to discriminate against patients, or price them out of care.

As novel uses of data proliferate, an individual data holder may be ill-equipped to manage complex new data relationships in a way that maximizes value and minimizes harm. A single organization may be limited by management capacity or risk tolerance. Organizations across sectors have digitized unevenly or late, and may not have mature data controls and policies. Collaborations that involve multiple organizations may face coordination problems, or disputes over ownership.

Data management is still a relatively young field. Most models of external data-sharing are based on literally transferring data—copying data between organizations, or pooling large datasets together under the control of a third party—rather than facilitating external queries of a closely held dataset.

Few models to date have focused on the professional management of data on behalf of a data holder, where the data holder retains control over not only their data, but the inferences derived from their data. Trusts can help facilitate the professionalization of data management. Inspired by the popularity of trusts for managing financial investments, this paper argues that data trusts are well-suited as a vehicle for open-ended professional management of data, where a manager’s discretion is constrained by fiduciary duties and a trust document that defines the data holder’s goals…(More)”.

The Pathologies of Digital Consent


Paper by Neil M. Richards and Woodrow Hartzog: “Consent permeates both our law and our lives — especially in the digital context. Consent is the foundation of the relationships we have with search engines, social networks, commercial web sites, and any one of the dozens of other digitally mediated businesses we interact with regularly. We are frequently asked to consent to terms of service, privacy notices, the use of cookies, and so many other commercial practices. Consent is important, but it’s possible to have too much of a good thing. As a number of scholars have documented, while consent models permeate the digital consumer landscape, the practical conditions of these agreements fall far short of the gold standard of knowing and voluntary consent. Yet as scholars, advocates, and consumers, we lack a common vocabulary for talking about the different ways in which digital consents can be flawed.

This article offers four contributions to improve our understanding of consent in the digital world. First, we offer a conceptual vocabulary of “the pathologies of consent” — a framework for talking about different kinds of defects that consent models can suffer, such as unwitting consent, coerced consent, and incapacitated consent. Second, we offer three conditions for when consent will be most valid in the digital context: when choice is infrequent, when the potential harms resulting from that choice are vivid and easy to imagine, and where we have the correct incentives to choose consciously and seriously. The further we fall from these conditions, the more a particular consent will be pathological and thus suspect. Third, we argue that our theory of consent pathologies sheds light on the so-called “privacy paradox” — the notion that there is a gap between what consumers say about wanting privacy and what they actually do in practice. Understanding the “privacy paradox” in terms of consent pathologies shows how consumers are not hypocrites who say one thing but do another. On the contrary, the pathologies of consent reveal how consumers can be nudged and manipulated by powerful companies against their actual interests, and that this process is easier when consumer protection law falls far from the gold standard. In light of these findings, we offer a fourth contribution — the theory of consumer trust we have suggested in prior work and which we further elaborate here as an alternative to our over-reliance on consent and its many pathologies….(More)”.

Data Science for Local Government


Report by Jonathan Bright, Bharath Ganesh, Cathrine Seidelin and Thomas Vogl: “The Data Science for Local Government project was about understanding how the growth of ‘data science’ is changing the way that local government works in the UK. We define data science as a dual shift that involves both bringing new decision-making and analytical techniques into local government work (e.g. machine learning and predictive analytics, artificial intelligence and A/B testing) and expanding the types of data local government makes use of (for example, by repurposing administrative data, harvesting social media data, or working with mobile phone companies). The emergence of data science is facilitated by the growing availability of free, open-source tools for both collecting data and performing analysis.

Based on extensive documentary review, a nationwide survey of local authorities, and in-depth interviews with over 30 practitioners, we have sought to produce a comprehensive guide to the different types of data science being undertaken in the UK, the types of opportunities and benefits created, and also some of the challenges and difficulties being encountered.

Our aim was to provide a basis for people working in local government to start on their own data science projects, both by providing a library of dozens of ideas which have been tried elsewhere and also by providing hints and tips for overcoming key problems and challenges….(More)”

How AI could save lives without spilling medical secrets


Will Knight at MIT Technology Review: “The potential for artificial intelligence to transform health care is huge, but there’s a big catch.

AI algorithms will need vast amounts of medical data on which to train before machine learning can deliver powerful new ways to spot and understand the cause of disease. That means imagery, genomic information, or electronic health records—all potentially very sensitive information.

That’s why researchers are working on ways to let AI learn from large amounts of medical data while making it very hard for that data to leak.

One promising approach is now getting its first big test at Stanford Medical School in California. Patients there can choose to contribute their medical data to an AI system that can be trained to diagnose eye disease without ever actually accessing their personal details.

Participants submit ophthalmology test results and health record data through an app. The information is used to train a machine-learning model to identify signs of eye disease in the images. But the data is protected by technology developed by Oasis Labs, a startup spun out of UC Berkeley, which guarantees that the information cannot be leaked or misused. The startup was granted permission by regulators to start the trial last week.

The sensitivity of private patient data is a looming problem. AI algorithms trained on data from different hospitals could potentially diagnose illness, prevent disease, and extend lives. But in many countries medical records cannot easily be shared and fed to these algorithms for legal reasons. Research on using AI to spot disease in medical images or data usually involves relatively small data sets, which greatly limits the technology’s promise….

Oasis stores the private patient data on a secure chip, designed in collaboration with other researchers at Berkeley. The data remains within the Oasis cloud; outsiders are able to run algorithms on the data, and receive the results, without its ever leaving the system. A smart contract—software that runs on top of a blockchain—is triggered when a request to access the data is received. This software logs how the data was used and also checks to make sure the machine-learning computation was carried out correctly….(More)”.

Publics in Emerging Economies Worry Social Media Sow Division, Even as They Offer New Chances for Political Engagement


Aaron Smith, Laura Silver, Courtney Johnson, Kyle Taylor and Jingjing Jiang at Pew Research: “In recent years, the internet and social media have been integral to political protests, social movements and election campaigns around the globe. Events from the Arab Spring to the worldwide spread of #MeToo have been aided by digital connectivity in both advanced and emerging economies. But popular social media and messaging platforms like Facebook and WhatsApp have drawn attention for their potential role in spreading misinformation, facilitating political manipulation by foreign and domestic actors, and increasing violence and hate crimes.

Recently, the Sri Lankan government shut down several of the country’s social media and messaging services immediately after Easter day bombings at Catholic churches killed and wounded hundreds. Some technology enthusiasts praised the decision but wondered if this development marked a change from pro-democracy, Arab Spring-era hopes that digital technology would be a liberating tool to a new fear that it has become “a force that can corrode” societies.

In the context of these developments, a Pew Research Center survey of adults in 11 emerging economies finds these publics are worried about the risks associated with social media and other communications technologies – even as they cite their benefits in other respects. Succinctly put, the prevailing view in the surveyed countries is that mobile phones, the internet and social media have collectively amplified politics in both positive and negative directions – simultaneously making people more empowered politically and potentially more exposed to harm.

[Chart: People in emerging economies see social media giving them political voice but also increasing the risk of manipulation.]

When it comes to the benefits, adults in these countries see digital connectivity enhancing people’s access to political information and facilitating engagement with their domestic politics. Majorities in each country say access to the internet, mobile phones and social media has made people more informed about current events, and majorities in most countries believe social media have increased ordinary people’s ability to have a meaningful voice in the political process. Additionally, half or more in seven of these 11 countries say technology has made people more accepting of those who have different views than they do.

But these perceived benefits are frequently accompanied by concerns about the limitations of technology as a tool for political action or information seeking. Even as many say social media have increased the influence of ordinary people in the political process, majorities in eight of these 11 countries feel these platforms have simultaneously increased the risk that people might be manipulated by domestic politicians. Around half or more in eight countries also think these platforms increase the risk that foreign powers might interfere in their country’s elections….(More)”.

How Cold Is That Library? There’s a Google Doc for That


Colleen Flaherty at Inside Higher Ed: “What a difference preparation makes when it comes to doing research in Arctic-level air-conditioned academic libraries (or ones that are otherwise freezing — or not air-conditioned at all). Luckily, Megan L. Cook, assistant professor of English at Colby College, published a crowdsourced document called “How Cold Is That Library?” ….

Cook, who was not immediately available for comment, has said the document was a group effort. Juliet Sperling, a faculty fellow in American art at Colby, credited her colleague’s “brilliance” but said the document was “generally inspired by conversations we’ve had as co-fellows” in the Andrew W. Mellon Society of Fellows in Critical Bibliography. The society brings together 60-some scholars of rare books and material texts from a variety of disciplinary or institutional approaches, she said, “so collectively, we’ve all spent quite a bit of time in libraries of various climates all over the world.” In addition to library temperatures, lighting and even humidity levels, the scholars trade research destinations’ photo policies and nearby eateries and drinking holes, among other tips. A spreadsheet opens up that resource to others, Sperling said. …(More)”.

Dark Data Plagues Federal Organizations


Brandi Vincent at NextGov: “While government leaders across the globe are excited about unleashing artificial intelligence in their organizations, most are struggling with deploying it for their missions because they can’t wrangle their data, a new study suggests.

In a survey released this week, Splunk and TRUE Global Intelligence polled 1,365 global business managers and IT leaders across seven countries. The research indicates that the majority of organizations’ data is “dark,” or unquantified, untapped and usually generated by systems, devices or interactions.

AI runs on data and yet few organizations seem to be able to tap into its value—or even find it.

“Neglected by business and IT managers, dark data is an underused asset that demands a more sophisticated approach to how organizations collect, manage and analyze information,” the report said. “Yet respondents also voiced hesitance about diving in.”

A third of respondents said more than 75% of their organizations’ data is dark, and only one in nine reported that less than a quarter of their organizations’ data is dark.

Many of the global respondents said a lack of interest from their leadership makes it hard to recover dark data. Another 60% also said more than half of their organizations’ data is not captured and “much of it is not even understood to exist.”

Research also suggests that while almost 100% of respondents believe data skills are critical for jobs in the future, more than half feel too old to learn new skills and 69% are content to keep doing what they are doing, even if it means they won’t be promoted.

“Many say they’d be content to let others take the lead, even at the expense of their own career progress,” the report said.

More than half of the respondents said they don’t understand AI well, as it’s still in its early stages, and 39% said their colleagues and industry don’t get it either. They said few organizations are deploying the new tech right now, but the majority of respondents do see its potential….(More)”.

Echo Chambers May Not Be as Dangerous as You Think, New Study Finds


News Release: “In the wake of the 2016 American presidential election, western media outlets have become almost obsessed with echo chambers. With headlines like “Echo Chambers are Dangerous” and “Are You in a Social Media Echo Chamber?,” news media consumers have been inundated by articles discussing the problems with spending most of one’s time around like-minded people.

But are social bubbles really all that bad? Perhaps not.

A new study from the Annenberg School for Communication at the University of Pennsylvania and the School of Media and Public Affairs at George Washington University, published today in the Proceedings of the National Academy of Sciences, shows that collective intelligence — peer learning within social networks — can increase belief accuracy even in politically homogenous groups.

“Previous research showed that social information processing could work in mixed groups,” says lead author and Annenberg alum Joshua Becker (Ph.D. ’18), who is currently a postdoctoral fellow at Northwestern University’s Kellogg School of Management. “But theories of political polarization argued that social influence within homogenous groups should only amplify existing biases.”

It’s easy to imagine that networked collective intelligence would work when you’re asking people neutral questions, such as how many jelly beans are in a jar. But what about probing hot button political topics? Because people are more likely to adjust the facts of the world to match their beliefs than vice versa, prior theories claimed that a group of people who agree politically would be unable to use collective reasoning to arrive at a factual answer if it challenges their beliefs.

“Earlier this year, we showed that when Democrats and Republicans interact with each other within properly designed social media networks, it can eliminate polarization and improve both groups’ understanding of contentious issues such as climate change,” says senior author Damon Centola, Associate Professor of Communication at the Annenberg School. “Remarkably, our new findings show that properly designed social media networks can even lead to improved understanding of contentious topics within echo chambers.”

Becker and colleagues devised an experiment in which participants answered fact-based questions that stir up political leanings, like “How much did unemployment change during Barack Obama’s presidential administration?” or “How much has the number of undocumented immigrants changed in the last 10 years?” Participants were placed in groups of only Republicans or only Democrats and given the opportunity to change their responses based on the other group members’ answers.

The results show that individual beliefs in homogenous groups became 35% more accurate after participants exchanged information with one another. And although people’s beliefs became more similar to their own party members, they also became more similar to members of the other political party, even without any between-group exchange. This means that even in homogenous groups — or echo chambers — social influence increases factual accuracy and decreases polarization.

“Our results cast doubt on some of the gravest concerns about the role of echo chambers in contemporary democracy,” says co-author Ethan Porter, Assistant Professor of Media and Public Affairs at George Washington University. “When it comes to factual matters, political echo chambers need not necessarily reduce accuracy or increase polarization. Indeed, we find them doing the opposite.”…(More)… (Full Paper: “The Wisdom of Partisan Crowds”)