Misinformation on social media: Can technology save us?


From The Conversation: “…Since we cannot pay attention to all the posts in our feeds, algorithms determine what we see and what we don’t. The algorithms used by social media platforms today are designed to prioritize engaging posts – ones we’re likely to click on, react to and share. But a recent analysis found that intentionally misleading pages got at least as much online sharing and reaction as real news.

This algorithmic bias toward engagement over truth reinforces our social and cognitive biases. As a result, when we follow links shared on social media, we tend to visit a smaller, more homogeneous set of sources than when we conduct a search and visit the top results.

Existing research shows that being in an echo chamber can make people more gullible about accepting unverified rumors. But we need to know a lot more about how different people respond to a single hoax: Some share it right away, others fact-check it first.

We are simulating a social network to study this competition between sharing and fact-checking. We are hoping to help untangle conflicting evidence about when fact-checking helps stop hoaxes from spreading and when it doesn’t. Our preliminary results suggest that the more segregated the community of hoax believers, the longer the hoax survives. Again, it’s not just about the hoax itself but also about the network.
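A rough sketch of the kind of simulation described above can make the segregation effect concrete. Everything below (the two-community network model, the sharing, fact-checking, and debunking probabilities, and the stopping rule) is an assumption for illustration, not taken from the study:

```python
import random
import networkx as nx

def hoax_lifetime(n=200, p_in=0.10, p_out=0.01, p_share=0.4,
                  p_check=0.25, p_debunk=0.4, max_steps=300, seed=1):
    """Toy competition between a hoax and its fact-check on a
    two-community network. States: susceptible, believer, checker
    (spreads the debunk), immune. Returns steps until no believers remain."""
    rng = random.Random(seed)
    g = nx.stochastic_block_model(
        [n // 2, n // 2], [[p_in, p_out], [p_out, p_in]], seed=seed)
    state = {v: "susceptible" for v in g}
    state[0] = "believer"                      # hoax seeded in community 0

    for step in range(1, max_steps + 1):
        believers = [v for v, s in state.items() if s == "believer"]
        checkers = [v for v, s in state.items() if s == "checker"]
        if not believers:
            return step                        # the hoax has died out
        for v in believers:                    # hoax spreads to neighbors
            for u in g.neighbors(v):
                if state[u] == "susceptible" and rng.random() < p_share:
                    # an exposed agent either fact-checks first or believes
                    state[u] = "checker" if rng.random() < p_check else "believer"
        for v in checkers:                     # the fact-check spreads and converts believers
            for u in g.neighbors(v):
                if state[u] == "believer" and rng.random() < p_debunk:
                    state[u] = "immune"
    return max_steps

# Lower p_out means more segregated communities; compare hoax lifetimes.
for p_out in (0.05, 0.01, 0.002):
    print(f"p_out={p_out}: hoax survived {hoax_lifetime(p_out=p_out)} steps")
```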

Many people are trying to figure out what to do about all this. According to Mark Zuckerberg’s latest announcement, Facebook teams are testing potential options. And a group of college students has proposed a way to simply label shared links as “verified” or not.

Some solutions remain out of reach, at least for the moment. For example, we can’t yet teach artificial intelligence systems how to discern between truth and falsehood. But we can tell ranking algorithms to give higher priority to more reliable sources…..
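The last point, giving ranking algorithms a preference for reliable sources, can be pictured with a toy re-ranking function. The sources, the scores, and the 70/30 weighting below are invented purely for illustration:

```python
# Toy re-ranking: combine an engagement score with a source-reliability score.
posts = [
    {"url": "https://example-tabloid.com/shock", "engagement": 0.95, "reliability": 0.2},
    {"url": "https://example-news.org/report",   "engagement": 0.60, "reliability": 0.9},
    {"url": "https://example-blog.net/opinion",  "engagement": 0.75, "reliability": 0.5},
]

def rank_score(post, w_reliability=0.7):
    """A higher weight on reliability demotes engaging-but-dubious posts."""
    return (w_reliability * post["reliability"]
            + (1 - w_reliability) * post["engagement"])

for post in sorted(posts, key=rank_score, reverse=True):
    print(f'{rank_score(post):.2f}  {post["url"]}')
```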

We can make our fight against fake news more efficient if we better understand how bad information spreads. If, for example, bots are responsible for many of the falsehoods, we can focus attention on detecting them. If, alternatively, the problem is with echo chambers, perhaps we could design recommendation systems that don’t exclude differing views….(More)”

Social Media’s Globe-Shaking Power


…Over much of the last decade, we have seen progressive social movements powered by the web spring up across the world. There was the Green Revolution in Iran and the Arab Spring in the Middle East and North Africa. In the United States, we saw the Occupy Wall Street movement and the #BlackLivesMatter protests.

Social networks also played a role in electoral politics — first in the ultimately unsuccessful candidacy of Howard Dean in 2003, and then in the election of the first African-American president in 2008.

Yet now those movements look like the prelude to a wider, tech-powered crack up in the global order. In Britain this year, organizing on Facebook played a major role in the once-unthinkable push to get the country to leave the European Union. In the Philippines, Rodrigo Duterte, a firebrand mayor who was vastly outspent by opponents, managed to marshal a huge army of online supporters to help him win the presidency.

The Islamic State has used social networks to recruit jihadists from around the world to fight in Iraq and Syria, as well as to inspire terrorist attacks overseas.

And in the United States, both Bernie Sanders, a socialist who ran for president as a Democrat, and Mr. Trump, who was once reviled by most members of the party he now leads, relied on online movements to shatter the political status quo.

Why is this all happening now? Clay Shirky, a professor at New York University who has studied the effects of social networks, suggested a few reasons.

One is the ubiquity of Facebook, which has reached a truly epic scale. Last month the company reported that about 1.8 billion people now log on to the service every month. Because social networks feed off the various permutations of interactions among people, they become strikingly more powerful as they grow. With about a quarter of the world’s population now on Facebook, the possibilities are staggering.
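A back-of-the-envelope calculation shows why those “permutations of interactions” matter: the number of possible pairwise connections grows roughly with the square of the user count. The figures below are just that arithmetic, nothing more:

```python
# Possible pairwise connections among n users: n * (n - 1) / 2.
for n in (1_000, 1_000_000, 1_800_000_000):   # 1.8 billion ~ the monthly figure cited above
    pairs = n * (n - 1) // 2
    print(f"{n:>13,} users -> about {pairs:.2e} possible pairs")
```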

“When the technology gets boring, that’s when the crazy social effects get interesting,” Mr. Shirky said.

One of those social effects is what Mr. Shirky calls the “shifting of the Overton Window,” a term coined by the researcher Joseph P. Overton to describe the range of subjects that the mainstream media deems publicly acceptable to discuss.

From about the early 1980s until the very recent past, it was usually considered unwise for politicians to court views deemed by most of society to be out of the mainstream, things like overt calls to racial bias (there were exceptions, of course, like the Willie Horton ad). But the internet shifted that window.

“White ethno nationalism was kept at bay because of pluralistic ignorance,” Mr. Shirky said. “Every person who was sitting in their basement yelling at the TV about immigrants or was willing to say white Christians were more American than other kinds of Americans — they didn’t know how many others shared their views.”

Thanks to the internet, now each person with once-maligned views can see that he’s not alone. And when these people find one another, they can do things — create memes, publications and entire online worlds that bolster their worldview, and then break into the mainstream. The groups also become ready targets for political figures like Mr. Trump, who recognize their energy and enthusiasm and tap into it for real-world victories.

Mr. Shirky notes that the Overton Window isn’t just shifting on the right. We see it happening on the left, too. Mr. Sanders campaigned on an anti-Wall Street platform that would have been unthinkable for a Democrat just a decade ago….(More)”

Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment


Kevin Munger in Political Behavior: “I conduct an experiment which examines the impact of group norm promotion and social sanctioning on racist online harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur. This paper extends findings from lab experiments to a naturalistic setting using an objective, behavioral outcome measure and a continuous 2-month data collection period. This represents an advance in the study of prejudiced behavior….(More)”
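A sketch of the 2 × 2 assignment the abstract describes, crossing bot identity with follower count; the arm labels and the randomization code are illustrative, not taken from the paper:

```python
import itertools
import random

# The design crosses bot identity (in-group vs. out-group) with follower
# count (high vs. low). Labels and mechanics here are illustrative only.
identities = ["in-group", "out-group"]
followers = ["high-followers", "low-followers"]
arms = list(itertools.product(identities, followers))  # 4 treatment arms

def assign(subjects, seed=42):
    """Randomly assign each harassing account to one of the four arms."""
    rng = random.Random(seed)
    return {s: rng.choice(arms) for s in subjects}

subjects = [f"user_{i}" for i in range(8)]
for user, arm in assign(subjects).items():
    print(user, arm)
```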

Co-Creating the Cities of the Future


Essay by Luis Muñoz in the Special Issue of “Sensors” on Smart City: Vision and Reality: “In recent years, the evolution of urban environments, together with the progress of the Information and Communication sector, has enabled the rapid adoption of new solutions that contribute to the growing popularity of Smart Cities. Currently, the majority of the world population lives in cities, encouraging different stakeholders within these innovative ecosystems to seek new solutions that guarantee the sustainability and efficiency of such complex environments. This work discusses how experimentation with IoT technologies and other data sources from the cities can be utilized for co-creation in the OrganiCity project, where key actors like citizens, researchers and other stakeholders shape smart city services and applications in a collaborative fashion. Furthermore, a novel architecture is proposed that enables this organic growth of future cities, facilitating the experimentation that tailors the adoption of new technologies and services for a better quality of life, as well as agile and dynamic mechanisms for managing cities. The different components and enablers of the OrganiCity platform are presented and discussed in detail; they include, among others, a portal to manage the experiment life cycle, an Urban Data Observatory to explore data assets, and an annotations component to indicate data quality, with a particular focus on the city-scale opportunistic data collection service operating as an alternative to traditional communications. (View Full-Text)”
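To make that component list concrete, here is a hypothetical sketch of the workflow the abstract describes: looking up a data asset in an Urban Data Observatory and attaching a data-quality annotation. The base URL, endpoints, and payload fields are invented for illustration and are not the actual OrganiCity API:

```python
import requests

BASE = "https://observatory.example.org/api"   # hypothetical, not the real platform

def find_assets(tag):
    """Look up data assets in a (hypothetical) Urban Data Observatory."""
    resp = requests.get(f"{BASE}/assets", params={"tag": tag})
    resp.raise_for_status()
    return resp.json()

def annotate(asset_id, label, comment):
    """Attach a quality annotation to an asset (hypothetical endpoint)."""
    payload = {"asset": asset_id, "label": label, "comment": comment}
    resp = requests.post(f"{BASE}/annotations", json=payload)
    resp.raise_for_status()
    return resp.json()

assets = find_assets("air-quality")
if assets:
    annotate(assets[0]["id"], "low-confidence",
             "Sensor reported gaps during opportunistic collection.")
```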

Shareveillance: Subjectivity between open and closed data


Clare Birchall in Big Data and Society: “This article attempts to question modes of sharing and watching to rethink political subjectivity beyond that which is enabled and enforced by the current data regime. It identifies and examines a ‘shareveillant’ subjectivity: a form configured by the sharing and watching that subjects have to withstand and enact in the contemporary data assemblage. Looking at government open and closed data as case studies, this article demonstrates how ‘shareveillance’ produces an anti-political role for the public. In describing shareveillance as, after Jacques Rancière, a distribution of the (digital) sensible, this article posits a politico-ethical injunction to cut into the share and flow of data in order to arrange a more enabling assemblage of data and its affects. In order to interrupt shareveillance, this article borrows a concept from Édouard Glissant and his concern with raced otherness to imagine what a ‘right to opacity’ might mean in the digital context. To assert this right is not to endorse the individual subject in her sovereignty and solitude, but rather to imagine a collective political subjectivity and relationality according to the important question of what it means to ‘share well’ beyond the veillant expectations of the state.

Two questions dominate current debates at the intersection of privacy, governance, security, and transparency: How much, and what kind of data should citizens have to share with surveillant states? And: How much data from government departments should states share with citizens? Yet, these issues are rarely expressed in terms of ‘sharing’ in the way that I will be doing in this article. More often, when thought in tandem with the digital, ‘sharing’ is used in reference to either free trials of software (‘shareware’); the practice of peer-to-peer file sharing; platforms that facilitate the pooling, borrowing, swapping, renting, or selling of resources, skills, and assets that have come to be known as the ‘sharing economy’; or the business of linking and liking on social media, which invites us to share our feelings, preferences, thoughts, interests, photographs, articles, and web links. Sharing in the digital context has been framed as a form of exchange, then, but also communication and distribution (see John, 2013; Wittel, 2011).

In order to understand the politics of open and opaque government data practices, which either share with citizens or ask citizens to share, I will extend existing commentaries on the distributive qualities of sharing by drawing on Jacques Rancière’s notion of the ‘distribution of the sensible’ (2004a) – a settlement that determines what is visible, audible, sayable, knowable and what share or role we each have within it. In the process, I articulate ‘sharing’ with ‘veillance’ (veiller ‘to watch’ is from the Latin vigilare, from vigil, ‘watchful’) to turn the focus from prevalent ways of understanding digital sharing towards a form of contemporary subjectivity. What I call ‘shareveillance’ – a state in which we are always already sharing; indeed, in which any relationship with data is only made possible through a conditional idea of sharing – produces an anti-politicised public caught between different data practices.

I will argue that both open and opaque government data initiatives involve, albeit differently pitched, forms of sharing and veillance. Government practices that share data with citizens involve veillance because they call on citizens to monitor and act upon that data – we are envisioned (‘veiled’ and hailed) as auditing and entrepreneurial subjects. Citizens have to monitor the state’s data, that is, or they are expected to innovate with it and make it profitable. Data sharing therefore apportions responsibility without power. It watches citizens watching the state, delimiting the ways in which citizens can engage with that data and, therefore, the scope of the political per se….(More)”.

Information Isn’t Just Power


Review by Lucy Bernholz  in the Stanford Social Innovation Review:  “Information is power.” This truism pervades Missed Information, an effort by two scientists to examine the role that information now plays as the raw material of modern scholarship, public policy, and institutional behavior. The scholars—David Sarokin, an environmental scientist for the US government, and Jay Schulkin, a research professor of neuroscience at Georgetown University—make this basic case convincingly. In its ever-present, digital, and networked form, data doesn’t just shape government policies and actions—it also creates its own host of controversies. Government policies about collecting, storing, and analyzing information fuel protests and political lobbying, opposing movements for openness and surveillance, and individual acts seen as both treason and heroism. The very fact that two scholars from such different fields are collaborating on this subject is evidence that digitized information has become the lingua franca of present-day affairs.

To Sarokin and Schulkin, the main downside to all this newly available information is that it creates an imbalance of power in who can access and control it. Governments and businesses have visibility into the lives of citizens and customers that is not reciprocated. The US government knows our every move, but we know what our government is doing only when a whistleblower tells us. Businesses have ever more data and ever-finer ways to sort and sift it, yet customers know next to nothing about what is being done with it.

The authors argue, however, that new digital networks also provide opportunities to recalibrate the balance of information and return some power to ordinary citizens. These negotiations are under way all around us. Our current political debates about security versus privacy, and the nature and scope of government transparency, show how the lines of control between governments and the governed are being redrawn. In health care, consumers, advocates, and public policymakers are starting to create online ratings of hospitals, doctors, and the costs of medical procedures. The traditional one-way street of corporate annual reporting is being supplemented by consumer ratings, customer feedback loops, and new information about supply chains and environmental and social factors. Sarokin and Schulkin go to great lengths to show the potential of tools such as comparison guides for patients or sustainability indices for shoppers to enable more informed user decisions.

This argument is important, but it is incomplete. The book’s title, Missed Information, refers to “information that is unintentionally (for the most part) overlooked in the decision-making process—overlooked both by those who provide information and by those who use it.” What is missing from the book, ironically, is a compelling discussion of why this “missed information” is missing. ….

Grouping the book with others of the “Big Data Will Save Us” genre isn’t entirely fair. Sarokin and Schulkin go to great lengths to point out how much of the information we collect is never used for anything, good or bad….(More)”

Is Open Data the Death of FOIA?


Beth Noveck at the Yale Law Journal: “For fifty years, the Freedom of Information Act (FOIA) has been the platinum standard for open government in the United States. The statute is considered the legal bedrock of the public’s right to know about the workings of our government. More than one hundred countries and all fifty states have enacted their own freedom of information laws. At the same time, FOIA’s many limitations have also become evident: a cumbersome process, delays in responses, and redactions that frustrate journalists and other information seekers. Politically motivated nuisance requests bedevil government agencies. With over 700,000 FOIA requests filed every year, the federal government faces the costs of a mounting backlog.

In recent years, however, an entirely different approach to government transparency in line with the era of big data has emerged: open government data. Open government data—generally shortened to open data—has many definitions but is generally considered to be publicly available information that can be universally and readily accessed, used, and redistributed free of charge in digital form. Open data is not limited to statistics, but also includes text such as the United States Federal Register, the daily newspaper of government, which was released as open data in bulk form in 2010.

To understand how significant the open data movement is for FOIA, this Essay discusses the impact of open data on the institutions and functions of government and the ways open data contrasts markedly with FOIA. Open data emphasizes the proactive publication of whole classes of information. Open data includes data about the workings of government but also data collected by the government about the economy and society posted online in a centralized repository for use by the wider public, including academic users seeking information as the basis for original research and commercial users looking to create new products and services. For example, Pixar used open data from the United States Geological Survey to create more realistic detail in scenes from its movie The Good Dinosaur.

By contrast, FOIA promotes ex post publication of information created by the government, especially about its own workings, in response to specific demands by individual requestors. I argue that open data’s more systematic and collaborative approach represents a radical and welcome departure from FOIA because open data concentrates on information as a means to solve problems to the end of improving government effectiveness. Open data is legitimated by the improved outcomes it yields and grounded in a theory of government effectiveness and, as a result, eschews the adversarial and ad hoc FOIA approach. Ultimately, however, each tactic offers important complementary benefits. The proactive information disclosure regime of open data is strengthened by FOIA’s rights of legal enforcement. Together, they stand to become the hallmark of government transparency in the fifty years ahead….(More)”.

Comparing resistance to open data performance measurement


Paper by Gregory Michener and Otavio Ritter in Public Administration: “Much is known about governmental resistance to disclosure laws, less so about multi-stakeholder resistance to open data. This study compares open data initiatives within the primary and secondary school systems of Brazil and the UK, focusing on stakeholder resistance and corresponding policy solutions. The analytical framework is based on the ‘Three-Ps’ of open data resistance to performance metrics, corresponding to professional, political, and privacy-related concerns. Evidence shows that resistance is highly nuanced, as stakeholders alternately serve as both principals and agents. School administrators, for example, are simultaneously principals to service providers and teachers, and at once agents to parents and politicians. Relying on a different systems comparison, in-depth interviews, and newspaper content analyses, we find that similar stakeholders across countries demonstrate strikingly divergent levels of resistance. In overcoming stakeholder resistance – across socioeconomic divides – context-conscientious ‘data-informed’ evaluations may promote greater acceptance than narrowly ‘data-driven’ performance measurements…(More)”

Towards a DataPlace: mapping data in a game to encourage participatory design in smart cities


Paper by Barker, Matthew; Wolff, Annika and van der Linden, Janet: “The smart city has been envisioned as a place where citizens can participate in city decision making and in the design of city services. As a key part of this vision, pervasive digital technology and open data legislation are being framed as vehicles for citizens to access rich data about their city. It has become apparent, though, that simply providing access to these resources does not automatically lead to the development of data-driven applications. If we are going to engage more of the citizenry in smart city design and raise productivity, we are going to need to make the data itself more accessible, engaging and intelligible for non-experts. This ongoing study is exploring one method for doing so. As part of the MK:Smart City project team, we are developing a tangible data look-up interface that acts as an alternative to the conventional DataBase. This interface, or DataPlace as we are calling it, takes the form of a map on which the user places sensors to physically capture real-time data. This is a simulation of the physical act of capturing data in the real world. We discuss the design of the DataPlace prototype under development and the planned user trials to test out our hypothesis: that a DataPlace can make handling data more accessible, intelligible and engaging for non-experts than conventional interface types….(More)”
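As a purely illustrative sketch of the DataPlace idea (the class, the fake data feed, and the coordinates below are assumptions, not the authors’ prototype), placing a sensor token on a map location can be modeled as a look-up keyed by feed and position:

```python
import random

class DataPlace:
    """Map-shaped look-up: placing a sensor token at a location fetches
    real-time data for that spot from a registered feed."""
    def __init__(self, feeds):
        self.feeds = feeds            # feed name -> callable(lat, lon)

    def place_sensor(self, feed, lat, lon):
        """Simulate a user physically placing a sensor token on the map."""
        reading = self.feeds[feed](lat, lon)
        return {"feed": feed, "lat": lat, "lon": lon, "value": reading}

# Fake feed: ignores location and returns a random reading, standing in
# for whatever live city data the real prototype would query.
fake_air_quality = lambda lat, lon: round(random.uniform(10, 60), 1)

board = DataPlace({"air_quality": fake_air_quality})
print(board.place_sensor("air_quality", 52.04, -0.76))   # roughly Milton Keynes
```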