Humans are a post-truth species


Yuval Noah Harari at the Guardian: “….A cursory look at history reveals that propaganda and disinformation are nothing new, and even the habit of denying entire nations and creating fake countries has a long pedigree. In 1931 the Japanese army staged mock attacks on itself to justify its invasion of China, and then created the fake country of Manchukuo to legitimise its conquests. China itself has long denied that Tibet ever existed as an independent country. British settlement in Australia was justified by the legal doctrine of terra nullius (“nobody’s land”), which effectively erased 50,000 years of Aboriginal history. In the early 20th century, a favourite Zionist slogan spoke of the return of “a people without a land [the Jews] to a land without a people [Palestine]”. The existence of the local Arab population was conveniently ignored.

In 1969 Israeli prime minister Golda Meir famously said that there is no Palestinian people and never was. Such views are very common in Israel even today, despite decades of armed conflicts against something that doesn’t exist. For example, in February 2016 MP Anat Berko gave a speech in the Israeli parliament in which she doubted the reality and history of the Palestinian people. Her proof? The letter “p” does not even exist in Arabic, so how can there be a Palestinian people? (In Arabic, “F” stands for “P”, and the Arabic name for Palestine is Falastin.)

In fact, humans have always lived in the age of post-truth. Homo sapiens is a post-truth species, whose power depends on creating and believing fictions. Ever since the stone age, self-reinforcing myths have served to unite human collectives. Indeed, Homo sapiens conquered this planet thanks above all to the unique human ability to create and spread fictions. We are the only mammals that can cooperate with numerous strangers because only we can invent fictional stories, spread them around, and convince millions of others to believe in them. As long as everybody believes in the same fictions, we all obey the same laws, and can thereby cooperate effectively.

So if you blame Facebook, Trump or Putin for ushering in a new and frightening era of post-truth, remind yourself that centuries ago millions of Christians locked themselves inside a self-reinforcing mythological bubble, never daring to question the factual veracity of the Bible, while millions of Muslims put their unquestioning faith in the Qur’an. For millennia, much of what passed for “news” and “facts” in human social networks were stories about miracles, angels, demons and witches, with bold reporters giving live coverage straight from the deepest pits of the underworld. We have zero scientific evidence that Eve was tempted by the serpent, that the souls of all infidels burn in hell after they die, or that the creator of the universe doesn’t like it when a Brahmin marries an Untouchable – yet billions of people have believed in these stories for thousands of years. Some fake news lasts for ever.

I am aware that many people might be upset by my equating religion with fake news, but that’s exactly the point. When a thousand people believe some made-up story for one month, that’s fake news. When a billion people believe it for a thousand years, that’s a religion, and we are admonished not to call it fake news in order not to hurt the feelings of the faithful (or incur their wrath). Note, however, that I am not denying the effectiveness or potential benevolence of religion. Just the opposite. For better or worse, fiction is among the most effective tools in humanity’s toolkit. By bringing people together, religious creeds make large-scale human cooperation possible. They inspire people to build hospitals, schools and bridges in addition to armies and prisons. Adam and Eve never existed, but Chartres Cathedral is still beautiful. Much of the Bible may be fictional, but it can still bring joy to billions and encourage humans to be compassionate, courageous and creative – just like other great works of fiction, such as Don Quixote, War and Peace and Harry Potter….(More)”.

#TrendingLaws: How can Machine Learning and Network Analysis help us identify the “influencers” of Constitutions?


Unicef: “New research by scientists from UNICEF’s Office of Innovation — published today in the journal Nature Human Behaviour — applies methods from network science and machine learning to constitutional law.  UNICEF Innovation Data Scientists Alex Rutherford and Manuel Garcia-Herranz collaborated with computer scientists and political scientists at MIT, George Washington University, and UC Merced to apply data analysis to the world’s constitutions over the last 300 years. This work sheds new light on how to better understand why countries’ laws change and incorporate social rights…

Data science techniques allow us to use methods like network science and machine learning to uncover patterns and insights that are hard for humans to see. Just as we can map influential users on Twitter — and patterns of relations between places to predict how diseases will spread — we can identify which countries have influenced each other in the past and what are the relations between legal provisions.
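As a toy illustration of the influence-mapping idea described above (this is not the study’s method or data; the edges below are invented), one can model “constitution A influenced constitution B” as a directed graph and rank influencers by how many adopters each one has:

```python
from collections import defaultdict

# Hypothetical "borrowed-provision" edges (influencer, adopter).
# These pairs are invented for illustration, not findings from the paper.
edges = [
    ("USA", "Mexico"), ("USA", "Argentina"), ("USA", "Brazil"),
    ("France", "Mexico"), ("France", "Haiti"), ("Mexico", "Bolivia"),
]

# Out-degree centrality: how many later constitutions each one influenced.
out_degree = defaultdict(int)
for influencer, _adopter in edges:
    out_degree[influencer] += 1

ranking = sorted(out_degree.items(), key=lambda kv: -kv[1])
print(ranking)  # [('USA', 3), ('France', 2), ('Mexico', 1)]
```

Real analyses use richer centrality measures and provision-level text similarity rather than hand-labeled edges, but the underlying graph abstraction is the same.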

Why The Science of Constitutions?

One way UNICEF fulfills its mission is through advocacy with national governments — to enshrine rights for minorities, notably children, formally in law. Perhaps the most renowned example of this is the Convention on the Rights of the Child (CRC).

Constitutions, such as Mexico’s 1917 constitution — the first to limit the employment of children — are critical to formalizing rights for vulnerable populations. National constitutions describe the role of a country’s institutions, its character in the eyes of the world, as well as the rights of its citizens.

From a scientific standpoint, the work is an important first step in showing that network analysis and machine learning techniques can be used to better understand the dynamics of caring for and protecting the rights of children — critical to the work we do in a complex and interconnected world. It shows the significant and positive policy implications of using data science to uphold children’s rights.

What the Research Shows:

Through this research, we uncovered:

  • A network of relationships between countries and their constitutions.
  • A natural progression of laws — where fundamental rights are a necessary precursor to more specific rights for minorities.
  • The effect of key historical events in changing legal norms….(More)”.

Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject


Nick Couldry and Ulises Mejias in Television & New Media (TVNM): “...Data colonialism combines the predatory extractive practices of historical colonialism with the abstract quantification methods of computing. Understanding Big Data from the Global South means understanding capitalism’s current dependence on this new type of appropriation that works at every point in space where people or things are attached to today’s infrastructures of connection. The scale of this transformation means that it is premature to map the forms of capitalism that will emerge from it on a global scale. Just as historical colonialism over the long run provided the essential preconditions for the emergence of industrial capitalism, so over time, we can expect that data colonialism will provide the preconditions for a new stage of capitalism that as yet we can barely imagine, but for which the appropriation of human life through data will be central.

Right now, the priority is not to speculate about that eventual stage of capitalism, but to resist the data colonialism that is under way. This is how we understand Big Data from the South. Through what we call ‘data relations’ (new types of human relations which enable the extraction of data for commodification), social life all over the globe becomes an ‘open’ resource for extraction that is somehow ‘just there’ for capital. These global flows of data are as expansive as historic colonialism’s appropriation of land, resources, and bodies, although the epicentre has somewhat shifted. Data colonialism involves not one pole of colonial power (‘the West’), but at least two: the USA and China. This complicates our notion of the geography of the Global South, a concept which until now helped situate resistance and disidentification along geographic divisions between former colonizers and colonized. Instead, the new data colonialism works both externally — on a global scale — and internally on its own home populations. The elites of data colonialism (think of Facebook) benefit from colonization in both dimensions, and North-South, East-West divisions no longer matter in the same way.

It is important to acknowledge both the apparent similarities and the significant differences between our argument and the many preceding critical arguments about Big Data…(More)”

Technology, Activism, and Social Justice in a Digital Age


Book edited by John G. McNutt: “…offers a close look at both the present nature and future prospects for social change. In particular, the text explores the cutting edge of technology and social change, while discussing developments in social media, civic technology, and leaderless organizations — as well as more traditional approaches to social change.

It effectively assembles a rich variety of perspectives to the issue of technology and social change; the featured authors are academics and practitioners (representing both new voices and experienced researchers) who share a common devotion to a future that is just, fair, and supportive of human potential.

They come from the fields of social work, public administration, journalism, law, philanthropy, urban affairs, planning, and education, and their work builds upon 30-plus years of research. The authors’ efforts to examine the changing nature of social change organizations and the issues they face will help readers reflect upon modern advocacy, social change, and the potential to utilize technology in making a difference….(More)”

From Code to Cure


David J. Craig at Columbia Magazine: “Armed with enormous amounts of clinical data, teams of computer scientists, statisticians, and physicians are rewriting the rules of medical research….

The deluge is upon us.

We are living in the age of big data, and with every link we click, every message we send, and every movement we make, we generate torrents of information.

In the past two years, the world has produced more than 90 percent of all the digital data that has ever been created. New technologies churn out an estimated 2.5 quintillion bytes per day. Data pours in from social media and cell phones, weather satellites and space telescopes, digital cameras and video feeds, medical records and library collections. Technologies monitor the number of steps we walk each day, the structural integrity of dams and bridges, and the barely perceptible tremors that indicate a person is developing Parkinson’s disease. These are the building blocks of our knowledge economy.

This tsunami of information is also providing opportunities to study the world in entirely new ways. Nowhere is this more evident than in medicine. Today, breakthroughs are being made not just in labs but on laptops, as biomedical researchers trained in mathematics, computer science, and statistics use powerful new analytic tools to glean insights from enormous data sets and help doctors prevent, treat, and cure disease.

“The medical field is going through a major period of transformation, and many of the changes are driven by information technology,” says George Hripcsak ’85PS,’00PH, a physician who chairs the Department of Biomedical Informatics at Columbia University Irving Medical Center (CUIMC). “Diagnostic techniques like genomic screening and high-resolution imaging are generating more raw data than we’ve ever handled before. At the same time, researchers are increasingly looking outside the confines of their own laboratories and clinics for data, because they recognize that by analyzing the huge streams of digital information now available online they can make discoveries that were never possible before.” …

Consider, for example, what the young computer scientist has been able to accomplish in recent years by mining an FDA database of prescription-drug side effects. The archive, which contains millions of reports of adverse drug reactions that physicians have observed in their patients, is continuously monitored by government scientists whose job it is to spot problems and pull drugs off the market if necessary. And yet by drilling down into the database with his own analytic tools, Tatonetti has found evidence that dozens of commonly prescribed drugs may interact in dangerous ways that have previously gone unnoticed. Among his most alarming findings: the antibiotic ceftriaxone, when taken with the heartburn medication lansoprazole, can trigger a type of heart arrhythmia called QT prolongation, which is known to cause otherwise healthy people to suddenly drop dead…(More)”
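Tatonetti’s actual pipeline is far more sophisticated, but the basic signal-detection idea behind mining an adverse-event archive can be sketched as a disproportionality measure: compare how often an event is reported among patients on a given drug (or drug pair) versus everyone else. A minimal sketch of one standard measure, the reporting odds ratio, with entirely made-up counts:

```python
def reporting_odds_ratio(a, b, c, d):
    """2x2 contingency table of adverse-event reports:
    a: on the drug pair, event reported    b: on the drug pair, no event
    c: not on the pair,  event reported    d: not on the pair,  no event
    ROR > 1 suggests the event is reported disproportionately often
    for the drug pair (a signal to investigate, not proof of causation)."""
    return (a / b) / (c / d)

# Invented counts for a hypothetical drug-pair cohort; real analyses
# also compute confidence intervals and correct for confounders.
ror = reporting_odds_ratio(a=30, b=970, c=200, d=49800)
print(round(ror, 2))  # 7.7
```

A ratio this far above 1 would flag the pair for follow-up; in practice such signals must then be validated (as Tatonetti’s team did) against clinical data, since spontaneous reports are noisy and biased.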

Predicting Public Interest Issue Campaign Participation on Social Media


Jungyun Won, Linda Hon, and Ah Ram Lee in the Journal of Public Interest Communications: “This study investigates what motivates people to participate in a social media campaign in the context of animal protection issues.

Structural equation modeling (SEM) tested a proposed research model with survey data from 326 respondents.

Situational awareness, participation benefits, and social ties influence were positive predictors of social media campaign participation intentions. Situational awareness also partially mediates the relationship between participation benefits and participation intentions as well as strong ties influence and participation intentions.
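The mediation claim can be unpacked arithmetically: in a simple three-variable path model, the indirect effect of a predictor (e.g., participation benefits) on the outcome (participation intentions) through a mediator (situational awareness) is the product of the two path coefficients, and partial mediation means both the indirect and direct paths are nonzero. The coefficients below are invented for illustration, not the study’s estimates:

```python
# Illustrative path coefficients (invented, not from the paper):
a = 0.45        # participation benefits -> situational awareness
b = 0.38        # situational awareness  -> participation intentions
c_prime = 0.25  # direct effect of benefits on intentions

indirect = a * b            # effect transmitted through awareness
total = c_prime + indirect  # total effect in a simple mediation model

# Both components nonzero -> "partial mediation" in SEM terms.
print(round(indirect, 3), round(total, 3))  # 0.171 0.421
```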

When designing social media campaigns, public interest communicators should raise situational awareness and emphasize participation benefits. Messages shared through social networks, especially via strong ties, also may be more effective than those posted only on official websites or social networking sites (SNSs)….(More)”.

Ethics as Methods: Doing Ethics in the Era of Big Data Research—Introduction


Introduction to the Special issue of Social Media + Society on “Ethics as Methods: Doing Ethics in the Era of Big Data Research”: Building on a variety of theoretical paradigms (i.e., critical theory, [new] materialism, feminist ethics, theory of cultural techniques) and frameworks (i.e., contextual integrity, deflationary perspective, ethics of care), the Special Issue contributes specific cases and fine-grained conceptual distinctions to ongoing discussions about the ethics in data-driven research.

In the second decade of the 21st century, a grand narrative is emerging that posits knowledge derived from data analytics as true, because of the objective qualities of data, their means of collection and analysis, and the sheer size of the data set. The by-product of this grand narrative is that the qualitative aspects of behavior and experience that form the data are diminished, and the human is removed from the process of analysis.

This situates data science as a process of analysis performed by the tool, which obscures human decisions in the process. The scholars involved in this Special Issue problematize the assumptions and trends in big data research and point out the crisis in accountability that emerges from using such data to make societal interventions.

Our collaborators offer a range of answers to the question of how to configure ethics through a methodological framework in the context of the prevalence of big data, neural networks, and automated, algorithmic governance of much of human socia(bi)lity…(More)”.

The Data Transfer Project


About: “The Data Transfer Project was formed in 2017 to create an open-source, service-to-service data portability platform so that all individuals across the web could easily move their data between online service providers whenever they want.

The contributors to the Data Transfer Project believe portability and interoperability are central to innovation. Making it easier for individuals to choose among services facilitates competition, empowers individuals to try new services and enables them to choose the offering that best suits their needs.

Current contributors include Facebook, Google, Microsoft and Twitter.

Individuals have many reasons to transfer data, but we want to highlight a few examples that demonstrate the additional value of service-to-service portability.

  • A user discovers a new photo printing service offering beautiful and innovative photo book formats, but their photos are stored in their social media account. With the Data Transfer Project, they could visit a website or app offered by the photo printing service and initiate a transfer directly from their social media platform to the photo book service.
  • A user doesn’t agree with the privacy policy of their music service. They want to stop using it immediately, but don’t want to lose the playlists they have created. Using this open-source software, they could use the export functionality of the original Provider to save a copy of their playlists to the cloud. This enables them to import the lists to a new Provider, or multiple Providers, once they decide on a new service.
  • A large company is getting requests from customers who would like to import data from a legacy Provider that is going out of business. The legacy Provider has limited options for letting customers move their data. The large company writes an Adapter for the legacy Provider’s Application Program Interfaces (APIs) that permits users to transfer data to their service, also benefiting other Providers that handle the same data type.
  • A user in a low bandwidth area has been working with an architect on drawings and graphics for a new house. At the end of the project, they both want to transfer all the files from a shared storage system to the user’s cloud storage drive. They go to the cloud storage Data Transfer Project User Interface (UI) and move hundreds of large files directly, without straining their bandwidth.
  • An industry association for supermarkets wants to allow customers to transfer their loyalty card data from one member grocer to another, so they can get coupons based on buying habits between stores. The Association would do this by hosting an industry-specific Host Platform of DTP.

The innovation in each of these examples lies behind the scenes: Data Transfer Project makes it easy for Providers to allow their customers to interact with their data in ways their customers would expect. In most cases, the direct-data transfer experience will be branded and managed by the receiving Provider, and the customer wouldn’t need to see DTP branding or infrastructure at all….
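The actual Data Transfer Project codebase is Java and defines its own adapter and data-model interfaces; purely as a sketch of the architecture described above (every name here is invented, not DTP’s real API), an Adapter pair translating between Providers through a shared data model might look like this:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class Photo:
    """Provider-neutral data model: adapters only translate to/from this."""
    title: str
    url: str

class ExportAdapter(ABC):
    @abstractmethod
    def export(self) -> list[Photo]: ...

class ImportAdapter(ABC):
    @abstractmethod
    def ingest(self, items: list[Photo]) -> int: ...

class LegacyPhotoExporter(ExportAdapter):
    def export(self) -> list[Photo]:
        # A real adapter would call the legacy Provider's API here.
        return [Photo("beach", "https://legacy.example/1.jpg")]

@dataclass
class PrintServiceImporter(ImportAdapter):
    library: list[Photo] = field(default_factory=list)
    def ingest(self, items: list[Photo]) -> int:
        self.library.extend(items)
        return len(items)

def transfer(src: ExportAdapter, dst: ImportAdapter) -> int:
    """Service-to-service transfer: data moves between Providers directly,
    so it never has to route through the user's device or bandwidth."""
    return dst.ingest(src.export())

printer = PrintServiceImporter()
print(transfer(LegacyPhotoExporter(), printer))  # 1
```

The design point this sketch tries to capture is the one the project describes: each Provider writes one adapter against the common data model, and any exporter can then reach any importer for that data type.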

To get a more in-depth understanding of the project, its fundamentals and the details involved, please download “Data Transfer Project Overview and Fundamentals”….(More)”.

Let’s make private data into a public good


Article by Mariana Mazzucato: “The internet giants depend on our data. A new relationship between us and them could deliver real value to society….We should ask how the value of these companies has been created, how that value has been measured, and who benefits from it. If we go by national accounts, the contribution of internet platforms to national income (as measured, for example, by GDP) is represented by the advertisement-related services they sell. But does that make sense? It’s not clear that ads really contribute to the national product, let alone to social well-being—which should be the aim of economic activity. Measuring the value of a company like Google or Facebook by the number of ads it sells is consistent with standard neoclassical economics, which interprets any market-based transaction as signaling the production of some kind of output—in other words, no matter what the thing is, as long as a price is received, it must be valuable. But in the case of these internet companies, that’s misleading: if online giants contribute to social well-being, they do it through the services they provide to users, not through the accompanying advertisements.

This way we have of ascribing value to what the internet giants produce is completely confusing, and it’s generating a paradoxical result: their advertising activities are counted as a net contribution to national income, while the more valuable services they provide to users are not.

Let’s not forget that a large part of the technology and necessary data was created by all of us, and should thus belong to all of us. The underlying infrastructure that all these companies rely on was created collectively (via the tax dollars that built the internet), and it also feeds off network effects that are produced collectively. There is indeed no reason why the public’s data should not be owned by a public repository that sells the data to the tech giants, rather than vice versa. But the key issue here is not just sending a portion of the profits from data back to citizens but also allowing them to shape the digital economy in a way that satisfies public needs. Using big data and AI to improve the services provided by the welfare state—from health care to social housing—is just one example.

Only by thinking about digital platforms as collective creations can we construct a new model that offers something of real value, driven by public purpose. We’re never far from a media story that stirs up a debate about the need to regulate tech companies, which creates a sense that there’s a war between their interests and those of national governments. We need to move beyond this narrative. The digital economy must be subject to the needs of all sides; it’s a partnership of equals where regulators should have the confidence to be market shapers and value creators….(More)”.

Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates


Marshall Allen at ProPublica: “With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

Are you a woman who recently changed your name? You could be newly married and have a pricey pregnancy pending. Or maybe you’re stressed and anxious from a recent divorce. That, too, the computer models predict, may run up your medical bills.

Are you a woman who’s purchased plus-size clothing? You’re considered at risk of depression. Mental health care can be expensive.

Low-income and a minority? That means, the data brokers say, you are more likely to live in a dilapidated and dangerous neighborhood, increasing your health risks.

“We sit on oceans of data,” said Eric McCulley, director of strategic solutions for LexisNexis Risk Solutions, during a conversation at the data firm’s booth. And he isn’t apologetic about using it. “The fact is, our data is in the public domain,” he said. “We didn’t put it out there.”

Insurers contend they use the information to spot health issues in their clients — and flag them so they get services they need. And companies like LexisNexis say the data shouldn’t be used to set prices. But as a research scientist from one company told me: “I can’t say it hasn’t happened.”

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”…(More)”.