Spanning Today’s Chasms: Seven Steps to Building Trusted Data Intermediaries


James Shulman at the Mellon Foundation: “In 2001, when hundreds of individual colleges and universities were scrambling to scan their slide libraries, The Andrew W. Mellon Foundation created a new organization, Artstor, to assemble a massive library of digital images from disparate sources to support teaching and research in the arts and humanities.

Rather than encouraging—or paying for—each school to scan its own slide of the Mona Lisa, the Mellon Foundation created an intermediary organization that would balance the interests of those who created, photographed and cared for art works, such as artists and museums, and those who wanted to use such images for the admirable calling of teaching and studying history and culture.  This organization would reach across the gap that separated these two communities and would respect and balance the interests of both sides, while helping each accomplish its mission.  At the same time that Napster was using technology to facilitate the unbalanced transfer of digital content from creators to users, the Mellon Foundation set up a new institution aimed at respecting the interests of one side of the market and supporting the socially desirable work of the other.

As the internet has enabled the sharing of data across the world, new intermediaries have emerged as entire platforms. A networked world needs such bridges—think Etsy or eBay sitting between sellers and buyers, or Facebook sitting between advertisers and users. While intermediaries that match sellers and buyers of things provide a marketplace that bridges the two sides, aggregators of data work in admittedly more shadowy territory.

In the many realms that market forces won’t support, however, a great deal of public good can be done by aggregating and managing access to datasets that might otherwise continue to live in isolation. Whether due to institutional sociology that favors local solutions, the technical challenges of merging heterogeneous databases built with different data models, intellectual property limitations, or privacy concerns, datasets are built and maintained by independent groups that—if networked—could further each other’s work.

Think of those studying coral reefs, or those studying labor practices in developing markets, or child welfare offices seeking to call upon court records in different states, or medical researchers working in different sub-disciplines but on essentially the same disease.  What intermediary invests in joining these datasets?  Many people assume that computers can simply “talk” to each other and share data intuitively, but without targeted investment in connecting them, they can’t.  Unlike modern databases that are now often designed with the cloud in mind, decades of locally created databases churn away in isolation, at great opportunity cost to us all.
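To make that “targeted investment” concrete, here is a minimal sketch, in Python with hypothetical field names and a one-entry authority file, of the hand-built mapping work that merging two collections catalogued under different data models requires:

```python
import pandas as pd

# Two hypothetical collections describing the same artist under different data models.
museum_a = pd.DataFrame({"artist_name": ["Caravaggio"], "work_title": ["Medusa"], "year": [1597]})
museum_b = pd.DataFrame({"creator": ["Michelangelo Merisi da Caravaggio"],
                         "title": ["Medusa"], "date": ["c. 1597"]})

# The field mapping and the authority file are the "targeted investment":
# someone must decide that 'creator' means 'artist_name' and reconcile variant names and dates.
FIELD_MAP = {"creator": "artist_name", "title": "work_title", "date": "year"}
NAME_AUTHORITY = {"Michelangelo Merisi da Caravaggio": "Caravaggio"}

normalized_b = museum_b.rename(columns=FIELD_MAP)
normalized_b["artist_name"] = normalized_b["artist_name"].replace(NAME_AUTHORITY)
normalized_b["year"] = normalized_b["year"].str.extract(r"(\d{4})", expand=False).astype(int)

merged = pd.concat([museum_a, normalized_b], ignore_index=True)
print(merged)  # one consolidated table instead of two isolated ones
```

None of the mappings above can be inferred automatically; each line encodes a human judgment, which is exactly the investment an intermediary makes.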

Art history research is an unusually vivid example. Most people can understand that if you want to study Caravaggio, you don’t want to hunt and peck across hundreds of museums, books, photo archives, libraries, churches, and private collections.  You want all that content in one place—exactly what Mellon sought to achieve by creating Artstor.

What did we learn in creating Artstor that might be distilled as lessons for others taking on an aggregation project to serve the public good?….(More)”.

Facebook’s next project: American inequality


Nancy Scola at Politico: “Facebook CEO Mark Zuckerberg is quietly cracking open his company’s vast trove of user data for a study on economic inequality in the U.S. — the latest sign of his efforts to reckon with divisions in American society that the social network is accused of making worse.

The study, which hasn’t previously been reported, is mining the social connections among Facebook’s American users to shed light on the growing income disparity in the U.S., where the top 1 percent of households is said to control 40 percent of the country’s wealth. Facebook is an incomparably rich source of information for that kind of research: By one estimate, about three of five American adults use the social network….

Facebook confirmed the broad contours of its partnership with Stanford economist Raj Chetty but declined to elaborate on the substance of the study. Chetty, in a brief interview following a January speech in Washington, said he and his collaborators — who include researchers from Stanford and New York University — have been working on the inequality study for at least six months.

“We’re using social networks, and measuring interactions there, to understand the role of social capital much better than we’ve been able to,” he said.

Researchers say they see Facebook’s enormous cache of data as a remarkable resource, offering an unprecedentedly detailed and sweeping look at American society. That store of information contains both details that a user might tell Facebook — their age, hometown, schooling, family relationships — and insights that the company has picked up along the way, such as the interest groups they’ve joined and the geographic distribution of the people they call a “friend.”

It’s all the more significant, researchers say, when you consider that Facebook’s user base — about 239 million monthly users in the U.S. and Canada at last count — cuts across just about every demographic group.

And all that information, researchers say, lets them make educated guesses about users’ wealth. Facebook itself recently patented a way of figuring out someone’s socioeconomic status using factors ranging from their stated hobbies to how many internet-connected devices they own.
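The patent itself is not public code, but a deliberately crude sketch, in which every feature name, weight, and threshold is invented, illustrates how stated hobbies and device counts could be combined into such a guess:

```python
# Illustrative only: invented features, weights, and thresholds, not Facebook's patented model.
def guess_ses_bracket(stated_hobbies: set, device_count: int) -> str:
    COSTLY_HOBBIES = {"sailing", "skiing", "golf"}  # hypothetical signal list
    score = device_count + 2 * len(stated_hobbies & COSTLY_HOBBIES)
    if score >= 6:
        return "higher-income"
    return "middle-income" if score >= 3 else "lower-income"

print(guess_ses_bracket({"golf", "reading"}, 4))  # -> higher-income
```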

A Facebook spokesman addressed the potential privacy implications of the study’s access to user data, saying, “We conduct research at Facebook responsibly, which includes making sure we protect people’s information.” The spokesman added that Facebook follows an “enhanced” review process for research projects, adopted in 2014 after a controversy over a study that manipulated some people’s news feeds to see if it made them happier or sadder.

According to a Stanford University source familiar with Chetty’s study, the Facebook account data used in the research has been stripped of any details that could be used to identify users. The source added that academics involved in the study have gone through security screenings that include background checks, and can access the Facebook data only in secure facilities….(More)”.

The Social Media Threat to Society and Security


George Soros at Project Syndicate: “It takes significant effort to assert and defend what John Stuart Mill called the freedom of mind. And there is a real chance that, once it is lost, those who grow up in the digital age – in which the power to command and shape people’s attention is increasingly concentrated in the hands of a few companies – will have difficulty regaining it.

The current moment in world history is a painful one. Open societies are in crisis, and various forms of dictatorships and mafia states, exemplified by Vladimir Putin’s Russia, are on the rise. In the United States, President Donald Trump would like to establish his own mafia-style state but cannot, because the Constitution, other institutions, and a vibrant civil society won’t allow it….

The rise and monopolistic behavior of the giant American Internet platform companies are contributing mightily to the US government’s impotence. These companies have often played an innovative and liberating role. But as Facebook and Google have grown ever more powerful, they have become obstacles to innovation, and have caused a variety of problems of which we are only now beginning to become aware…

Social media companies’ true customers are their advertisers. But a new business model is gradually emerging, based not only on advertising but also on selling products and services directly to users. They exploit the data they control, bundle the services they offer, and use discriminatory pricing to keep more of the benefits that they would otherwise have to share with consumers. This enhances their profitability even further, but the bundling of services and discriminatory pricing undermine the efficiency of the market economy.

Social media companies deceive their users by manipulating their attention, directing it toward their own commercial purposes, and deliberately engineering addiction to the services they provide. This can be very harmful, particularly for adolescents.

There is a similarity between Internet platforms and gambling companies. Casinos have developed techniques to hook customers to the point that they gamble away all of their money, even money they don’t have.

Something similar – and potentially irreversible – is happening to human attention in our digital age. This is not a matter of mere distraction or addiction; social media companies are actually inducing people to surrender their autonomy. And this power to shape people’s attention is increasingly concentrated in the hands of a few companies.

Losing what Mill called the freedom of mind would have far-reaching political consequences, for people without it are easily manipulated. This danger does not loom only in the future; it already played an important role in the 2016 US presidential election.

There is an even more alarming prospect on the horizon: an alliance between authoritarian states and large, data-rich IT monopolies, bringing together nascent systems of corporate surveillance with already-developed systems of state-sponsored surveillance. This may well result in a web of totalitarian control the likes of which not even George Orwell could have imagined….(More)”.

Free Speech in the Filter Age


Alexandra Borchardt at Project Syndicate: “In a democracy, the rights of the many cannot come at the expense of the rights of the few. In the age of algorithms, government must, more than ever, ensure the protection of vulnerable voices, even erring on the side of victims at times.

Germany’s Network Enforcement Act – under which social-media platforms like Facebook and YouTube can be fined up to €50 million ($63 million) if they fail to remove an “obviously illegal” post within 24 hours of receiving a notification – has been controversial from the start. After it entered fully into effect in January, there was a tremendous outcry, with critics from all over the political map arguing that it was an invitation to censorship. Government was relinquishing its powers to private interests, they protested.

So, is this the beginning of the end of free speech in Germany?

Of course not. To be sure, Germany’s Netzwerkdurchsetzungsgesetz (or NetzDG) is the strictest regulation of its kind in a Europe that is growing increasingly annoyed with America’s powerful social-media companies. And critics do have some valid points about the law’s weaknesses. But the possibilities for free expression will remain abundant, even if some posts are deleted mistakenly.

The truth is that the law sends an important message: democracies won’t stay silent while their citizens are exposed to hateful and violent speech and images – content that, as we know, can spur real-life hate and violence. Refusing to protect the public, especially the most vulnerable, from dangerous content in the name of “free speech” actually serves the interests of those who are already privileged, beginning with the powerful companies that drive the dissemination of information.

Speech has always been filtered. In democratic societies, everyone has the right to express themselves within the boundaries of the law, but no one has ever been guaranteed an audience. To have an impact, citizens have always needed to appeal to – or bypass – the “gatekeepers” who decide which causes and ideas are relevant and worth amplifying, whether through the media, political institutions, or protest.

The same is true today, except that the gatekeepers are the algorithms that automatically filter and rank all contributions. Of course, algorithms can be programmed in any way companies like, meaning that they may place a premium on qualities shared by professional journalists: credibility, intelligence, and coherence.

But today’s social-media platforms are far more likely to prioritize potential for advertising revenue above all else. So the noisiest are often rewarded with a megaphone, while less polarizing, less privileged voices are drowned out, even if they are providing the smart and nuanced perspectives that can truly enrich public discussions….(More)”.
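That gatekeeping choice can be caricatured in a few lines; the fields and weights below are invented for illustration and correspond to no platform’s actual code:

```python
# A toy contrast between two ranking policies; all fields and weights are invented.
def engagement_rank(post: dict) -> float:
    # Rewards predicted advertising value: noise and outrage tend to score well.
    return 0.7 * post["predicted_clicks"] + 0.3 * post["shares"]

def journalistic_rank(post: dict) -> float:
    # Places a premium on credibility and coherence instead.
    return 0.5 * post["source_credibility"] + 0.5 * post["coherence"]

viral_but_thin = {"predicted_clicks": 0.9, "shares": 120,
                  "source_credibility": 0.2, "coherence": 0.3}
# The same post is handed a megaphone under one policy and buried under the other:
print(engagement_rank(viral_but_thin))    # 36.63
print(journalistic_rank(viral_but_thin))  # 0.25
```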

Republics of Makers: From the Digital Commons to a Flat Marginal Cost Society


Mario Carpo at e-flux: “…as the costs of electronic computation have been steadily decreasing for the last forty years at least, many have recently come to the conclusion that, for most practical purposes, the cost of computation is asymptotically tending to zero. Indeed, the current notion of Big Data is based on the assumption that an almost unlimited amount of digital data will soon be available at almost no cost, and similar premises have further fueled the expectation of a forthcoming “zero marginal cost society”: a society where, except for some upfront and overhead costs (the costs of building and maintaining some facilities), many goods and services will be free for all. And indeed, against all odds, an almost zero marginal cost society is already a reality in the case of many services based on the production and delivery of electricity: from the recording, transmission, and processing of electrically encoded digital information (bits) to the production and consumption of electrical power itself. With renewable energies (solar, wind, hydro), the generation of electrical power is free, except for the cost of building and maintaining installations and infrastructure. And given the recent progress in the micro-management of intelligent electrical grids, it is easy to imagine that in the near future the cost of servicing a network of very small, local hydro-electric generators, for example, could easily be devolved to local communities of prosumers who would take care of those installations as they tend to their living environment, on an almost voluntary, communal basis. This was already often the case during the early stages of electrification, before the rise of AC (alternating current, which, unlike DC, or direct current, could be carried over long distances): AC became the industry’s choice only after Galileo Ferraris’s and Nikola Tesla’s developments in AC technologies in the 1880s.

Likewise, at the micro-scale of the electronic production and processing of bits and bytes of information, the Open Source movement and the phenomenal surge of some crowdsourced digital media (including some so-called social media) in the first decade of the twenty-first century have already proven that a collaborative, zero-cost business model can effectively compete with products priced for profit on a traditional marketplace. As the success of Wikipedia, Linux, or Firefox proves, many are happy to volunteer their time and labor for free when all can profit from the collective work of an entire community without having to pay for it. This is now technically possible precisely because the fixed costs of building, maintaining, and delivering these services are very small; hence, from the point of view of the end-user, negligible.

Yet, regardless of the fixed costs of the infrastructure, content—even user-generated content—has costs, albeit for the time being these are mostly hidden, voluntarily borne, or inadvertently absorbed by the prosumers themselves. For example, the wisdom of Wikipedia is not really a wisdom of crowds: most Wikipedia entries are de facto curated by fairly traditional scholarly communities, and these communities can contribute their expertise for free only because their work has already been paid for by others—often by universities. In this sense, Wikipedia is only piggybacking on someone else’s research investments (but multiplying their outreach, which is one reason for its success). Ditto for most Open Source software, as training a software engineer, coder, or hacker takes time and money—an investment for future returns that in many countries around the world is still borne, at least in part, by public institutions….(More)”.

Crowdsourcing Judgments of News Source Quality


Paper by Gordon Pennycook and David G. Rand: “The spread of misinformation and disinformation, especially on social media, is a major societal challenge. Here, we assess whether crowdsourced ratings of trust in news sources can effectively differentiate between more and less reliable sources. To do so, we ran a preregistered experiment (N = 1,010 from Amazon Mechanical Turk) in which individuals rated familiarity with, and trust in, 60 news sources from three categories: 1) Mainstream media outlets, 2) Websites that produce hyper-partisan coverage of actual facts, and 3) Websites that produce blatantly false content (“fake news”).

Our results indicate that, despite substantial partisan bias, laypeople across the political spectrum rate mainstream media outlets as far more trustworthy than either hyper-partisan or fake news sources (all but one mainstream source, Salon, was rated as more trustworthy than every hyper-partisan or fake news source when Democrats’ and Republicans’ ratings were weighted equally).

Critically, however, excluding ratings from participants who are not familiar with a given news source dramatically reduces the difference between mainstream media sources and hyper-partisan or fake news sites. For example, 30% of the mainstream media websites (Salon, the Guardian, Fox News, Politico, Huffington Post, and Newsweek) received lower trust scores than the most trusted fake news site (news4ktla.com) when excluding unfamiliar ratings.

This suggests that rather than being initially agnostic about unfamiliar sources, people are initially skeptical – and thus a lack of familiarity is an important cue for untrustworthiness. Overall, our findings indicate that crowdsourcing media trustworthiness judgments is a promising approach for fighting misinformation and disinformation online, but that trustworthiness ratings from participants who are unfamiliar with a given source should not be ignored….(More)”.
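As a sketch of the two scoring rules just described (politically balanced averaging of trust ratings, with and without unfamiliar raters), assuming a hypothetical ratings table with columns source, party, familiar, and trust:

```python
import pandas as pd

# Hypothetical ratings table: one row per (participant, source), with columns
# 'source', 'party' ("D" or "R"), 'familiar' (bool), and 'trust' (e.g., 1-5).
def balanced_trust(ratings: pd.DataFrame, familiar_only: bool = False) -> pd.Series:
    """Per-source trust score weighting Democrats' and Republicans' ratings equally."""
    if familiar_only:
        ratings = ratings[ratings["familiar"]]  # drop "never heard of this source" ratings
    party_means = ratings.groupby(["source", "party"])["trust"].mean()
    return party_means.groupby("source").mean()  # unweighted mean of the two party means

# Comparing balanced_trust(df) with balanced_trust(df, familiar_only=True)
# reproduces the paper's contrast: the mainstream-versus-fake gap
# narrows sharply once unfamiliar ratings are excluded.
```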

Citizens’ Coproduction, Service Self-Provision and the State 2.0


Chapter by Walter Castelnovo in Network, Smart and Open: “Citizens’ engagement and citizens’ participation are rapidly becoming catch-all concepts, buzzwords that recur constantly in public-policy discourse, partly owing to the widespread diffusion and use of social media, which are claimed to have the potential to increase citizens’ participation in public sector processes, including policy development and policy implementation.

By taking the concept of co-production as the lens through which to look at citizens’ participation in civic life, the paper shows how, when supported by a real redistribution of power between government and citizens, citizens’ participation can have a transformational impact on the very nature of government, up to the so-called ‘Do It Yourself government’ and ‘user-generated state’. Based on a conceptual research approach and with reference to the relevant literature, the paper discusses what such a transformation could amount to and what role ICTs (social media) can play in government transformation processes….(More)”.

Feasibility Study of Using Crowdsourcing to Identify Critical Affected Areas for Rapid Damage Assessment: Hurricane Matthew Case Study


Paper by Faxi Yuan and Rui Liu at the International Journal of Disaster Risk Reduction: “…rapid damage assessment plays a critical role in crisis management. Collecting timely information for rapid damage assessment is particularly challenging during natural disasters. Remote sensing technologies have been used for data collection during disasters. However, due to the large areas affected by major disasters such as Hurricane Matthew, specific data, such as location information, cannot be collected in time.

Social media can serve as a crowdsourcing platform for citizens’ communication and information sharing during natural disasters, providing timely data for identifying affected areas to support rapid damage assessment. Nevertheless, existing research on the utility of social media data in damage assessment is very limited. Although the relationship between social media activity and damage has received some investigation, the use of damage-related social media data to explore that relationship remains unexplored.

This paper, for the first time, establishes an index dictionary through semantic analysis for identifying damage-related tweets posted during Hurricane Matthew in Florida. Meanwhile, insurance claim data published by the Florida Office of Insurance Regulation are used to represent real hurricane damage in Florida. The study performs a correlation analysis and a comparative analysis of the geographic distribution of social media data and damage data at the county level in Florida. We find that employing social media data to identify critical affected areas at the county level during disasters is viable. Damage data have a closer relationship with damage-related tweets than with disaster-related tweets….(More)”.
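A minimal sketch of the study’s two ingredients, dictionary-based identification of damage-related tweets and a county-level correlation with claims data, using an invented term list and hypothetical column names:

```python
import pandas as pd
from scipy.stats import pearsonr

# Invented term list standing in for the paper's semantically derived index dictionary.
DAMAGE_TERMS = {"roof", "flood", "debris", "outage", "damaged", "destroyed"}

def is_damage_related(tweet: str) -> bool:
    """Flag a tweet that contains any term from the damage dictionary."""
    return bool(set(tweet.lower().split()) & DAMAGE_TERMS)

# tweets: DataFrame with hypothetical columns 'text' and 'county';
# claims: Series of insurance-claim counts indexed by county.
def county_level_correlation(tweets: pd.DataFrame, claims: pd.Series) -> float:
    damage_tweets = tweets[tweets["text"].map(is_damage_related)]
    tweet_counts = damage_tweets.groupby("county").size()
    aligned = pd.concat([tweet_counts, claims], axis=1, join="inner")
    r, _ = pearsonr(aligned.iloc[:, 0], aligned.iloc[:, 1])
    return r
```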

Dawn of the techlash


Rachel Botsman at the Guardian: “…Once seen as saviours of democracy, those titans are now just as likely to be viewed as threats to truth or, at the very least, impassive billionaires falling down on the job of monitoring their own backyards.

It wasn’t always this way. Remember the early catchy slogans that emerged from those ping-pong-tabled tech temples in Silicon Valley? “A place for friends”, “Don’t be evil” or “You can make money without being evil” (rather poignant, given what was to come). Users were enchanted by the sudden, handheld power of a smartphone to voice anything, access anything; grassroots activist movements revelled in these new tools for spreading their cause. The idealism of social media – democracy, friction-free communication, one-button socialising – proved infectious.

So how did that unbridled enthusiasm for all things digital morph into a critical erosion of trust in technology, particularly in politics? Was 2017 the year of reckoning, when technology suddenly crossed to the dark side, or had it been heading that way for some time? It might be useful to recall how social media first discovered its political muscle….

Technology is only the means. We also need to ask why our political ideologies have become so polarised, and take a hard look at our own behaviour, as well as that of the politicians themselves and the partisan media outlets who use these platforms, with their vast reach, to sow the seeds of distrust. Why are we so easily duped? Are we unwilling or unable to discern what’s true and what isn’t, or to look for the boundaries between opinion, fact and misinformation? And what part are our own prejudices playing?

Luciano Floridi, of the Digital Ethics Lab at Oxford University, points out that technology alone can’t save us from ourselves. “The potential of technology to be a powerful positive force for democracy is huge and is still there. The problems arise when we ignore how technology can accentuate or highlight less attractive sides of human nature,” he says. “Prejudice. Jealousy. Intolerance of different views. Our tendency to play zero-sum games. We against them. Saying technology is a threat to democracy is like saying food is bad for you because it causes obesity.”

It’s not enough to blame the messenger. Social media merely amplifies human intent – both good and bad. We need to be honest about our own age-old appetite for ugly gossip and spreading half-baked information, and about our own blind spots.

Is there a solution to it all? Plenty of smart people are working on technical fixes, if for no other reason than that the tech companies know it’s in their own best interests to stem the haemorrhaging of trust. Whether they’ll go far enough remains to be seen.

We sometimes forget how uncharted this new digital world remains – it’s a work in progress. We forget that social media, for all its flaws, still brings people together, gives a voice to the voiceless, opens vast wells of information, exposes wrongdoing, sparks activism, allows us to meet up with unexpected strangers. The list goes on. It’s inevitable that there will be falls along the way, deviousness we didn’t foresee. Perhaps the present danger is that in our rush to condemn the corruption of digital technologies, we will unfairly condemn the technologies themselves….(More)”.

Managing Democracy in the Digital Age


Book edited by Julia Schwanholz, Todd Graham and Peter-Tobias Stoll: “In light of the increased utilization of information technologies, such as social media and the ‘Internet of Things,’ this book investigates how this digital transformation process creates new challenges and opportunities for political participation, political election campaigns and political regulation of the Internet. Within the context of Western democracies and China, the contributors analyze these challenges and opportunities from three perspectives: the regulatory state, the political use of social media, and through the lens of the public sphere.

The first part of the book discusses key challenges for Internet regulation, such as data protection and censorship, while the second addresses the use of social media in political communication and political elections. In turn, the third and last part highlights various opportunities offered by digital media for online civic engagement and protest in the public sphere. Drawing on different academic fields, including political science, communication science, and journalism studies, the contributors raise a number of innovative research questions and provide fascinating theoretical and empirical insights into the topic of digital transformation….(More)”.