Handbook of Research on Citizen Engagement and Public Participation in the Era of New Media


Book edited by Marco Adria and Yuping Mao: “New media forums have created a unique opportunity for citizens to participate in a variety of social and political contexts. As new social technologies are being utilized in a variety of ways, the public is able to interact more effectively in activities within their communities.

The Handbook of Research on Citizen Engagement and Public Participation in the Era of New Media addresses opportunities and challenges in the theory and practice of public involvement in social media. Highlighting various communication modes and best practices being utilized in citizen-involvement activities, this book is a critical reference source for professionals, consultants, university teachers, practitioners, community organizers, government administrators, citizens, and activists….(More)

 

Misinformation on social media: Can technology save us?


At The Conversation: “…Since we cannot pay attention to all the posts in our feeds, algorithms determine what we see and what we don’t. The algorithms used by social media platforms today are designed to prioritize engaging posts – ones we’re likely to click on, react to and share. But a recent analysis found intentionally misleading pages got at least as much online sharing and reaction as real news.

This algorithmic bias toward engagement over truth reinforces our social and cognitive biases. As a result, when we follow links shared on social media, we tend to visit a smaller, more homogeneous set of sources than when we conduct a search and visit the top results.

Existing research shows that being in an echo chamber can make people more gullible about accepting unverified rumors. But we need to know a lot more about how different people respond to a single hoax: Some share it right away, others fact-check it first.

We are simulating a social network to study this competition between sharing and fact-checking. We are hoping to help untangle conflicting evidence about when fact-checking helps stop hoaxes from spreading and when it doesn’t. Our preliminary results suggest that the more segregated the community of hoax believers, the longer the hoax survives. Again, it’s not just about the hoax itself but also about the network.
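
The authors’ simulation itself is not reproduced here, but the competition they describe between sharing and fact-checking can be sketched as a toy agent-based model on a random contact network. Every parameter below (sharing and fact-checking probabilities, network density, population size) is an invented placeholder, not a value from their study:

```python
import random

def simulate_hoax(n_agents=200, n_steps=50, p_edge=0.05,
                  p_share=0.6, p_factcheck=0.2, seed=42):
    """Toy model: a hoax competes with fact-checking on a random network.

    Agent states: 'susceptible', 'believer' (shares the hoax), and
    'factchecker' (debunks it). All parameters are illustrative.
    """
    rng = random.Random(seed)

    # Build a random (Erdos-Renyi style) contact network.
    neighbors = {i: set() for i in range(n_agents)}
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            if rng.random() < p_edge:
                neighbors[i].add(j)
                neighbors[j].add(i)

    state = {i: "susceptible" for i in range(n_agents)}
    state[0] = "believer"  # patient zero shares the hoax

    for _ in range(n_steps):
        for i in range(n_agents):
            if state[i] != "susceptible":
                continue
            exposed = [state[j] for j in neighbors[i]]
            if "believer" in exposed:
                # On exposure, an agent either fact-checks first
                # or shares right away.
                if rng.random() < p_factcheck:
                    state[i] = "factchecker"
                elif rng.random() < p_share:
                    state[i] = "believer"
    return state

final = simulate_hoax()
counts = {s: list(final.values()).count(s)
          for s in ("susceptible", "believer", "factchecker")}
print(counts)
```

Varying how clustered the believer community is (e.g. by rewiring edges so believers mostly contact one another) is one way to probe the authors’ observation that more segregated communities keep a hoax alive longer.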

Many people are trying to figure out what to do about all this. According to Mark Zuckerberg’s latest announcement, Facebook teams are testing potential options. And a group of college students has proposed a way to simply label shared links as “verified” or not.

Some solutions remain out of reach, at least for the moment. For example, we can’t yet teach artificial intelligence systems how to discern between truth and falsehood. But we can tell ranking algorithms to give higher priority to more reliable sources…..
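
The article stops short of specifying such a ranking, but the contrast between engagement-only ordering and reliability-weighted ordering can be illustrated with a hypothetical feed (all titles and scores below are invented for the sketch):

```python
# Hypothetical feed items: an engagement count and a source-reliability
# score in [0, 1]. Neither scale comes from the article.
posts = [
    {"title": "Sensational hoax", "engagement": 9500, "reliability": 0.10},
    {"title": "Investigative report", "engagement": 3200, "reliability": 0.90},
    {"title": "Celebrity rumor", "engagement": 7800, "reliability": 0.30},
    {"title": "Wire-service story", "engagement": 4100, "reliability": 0.95},
]

# Engagement-only ranking: what the article says platforms optimize for.
by_engagement = sorted(posts, key=lambda p: p["engagement"], reverse=True)

# Reliability-weighted ranking: one way to "give higher priority to more
# reliable sources" is to discount engagement by source reliability.
by_reliability = sorted(posts,
                        key=lambda p: p["engagement"] * p["reliability"],
                        reverse=True)

print([p["title"] for p in by_engagement][0])   # hoax wins on raw engagement
print([p["title"] for p in by_reliability][0])  # reliable sources rise
```

The point of the sketch is only that the objective function, not the content, determines what surfaces: the same four items produce opposite front pages under the two rankings.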

We can make our fight against fake news more efficient if we better understand how bad information spreads. If, for example, bots are responsible for many of the falsehoods, we can focus attention on detecting them. If, alternatively, the problem is with echo chambers, perhaps we could design recommendation systems that don’t exclude differing views….(More)”
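
Detecting bots is an open research problem; real detectors (such as the research tool Botometer) use many features and supervised learning. As a deliberately crude illustration of the idea, a rule-based flag with invented thresholds might look like:

```python
def looks_like_bot(posts_per_day: float, followers: int, following: int,
                   account_age_days: int) -> bool:
    """Crude heuristic flag, for illustration only. The thresholds are
    invented; production systems combine far richer signals.
    """
    hyperactive = posts_per_day > 100          # inhuman posting volume
    lopsided = following > 10 * max(followers, 1)  # mass-follows, few follow back
    brand_new = account_age_days < 7 and posts_per_day > 20
    return hyperactive or lopsided or brand_new

print(looks_like_bot(posts_per_day=250, followers=12, following=4000,
                     account_age_days=30))   # flagged
print(looks_like_bot(posts_per_day=3, followers=400, following=350,
                     account_age_days=900))  # passes
```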

Social Media’s Globe-Shaking Power


…Over much of the last decade, we have seen progressive social movements powered by the web spring up across the world. There was the Green Revolution in Iran and the Arab Spring in the Middle East and North Africa. In the United States, we saw the Occupy Wall Street movement and the #BlackLivesMatter protests.

Social networks also played a role in electoral politics — first in the ultimately unsuccessful candidacy of Howard Dean in 2003, and then in the election of the first African-American president in 2008.

Yet now those movements look like the prelude to a wider, tech-powered crack-up in the global order. In Britain this year, organizing on Facebook played a major role in the once-unthinkable push to get the country to leave the European Union. In the Philippines, Rodrigo Duterte, a firebrand mayor who was vastly outspent by opponents, managed to marshal a huge army of online supporters to help him win the presidency.

The Islamic State has used social networks to recruit jihadists from around the world to fight in Iraq and Syria, as well as to inspire terrorist attacks overseas.

And in the United States, both Bernie Sanders, a socialist who ran for president as a Democrat, and Mr. Trump, who was once reviled by most members of the party he now leads, relied on online movements to shatter the political status quo.

Why is this all happening now? Clay Shirky, a professor at New York University who has studied the effects of social networks, suggested a few reasons.

One is the ubiquity of Facebook, which has reached a truly epic scale. Last month the company reported that about 1.8 billion people now log on to the service every month. Because social networks feed off the various permutations of interactions among people, they become strikingly more powerful as they grow. With about a quarter of the world’s population now on Facebook, the possibilities are staggering.
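
The article does not quantify these “permutations of interactions,” but the intuition resembles Metcalfe’s law: the number of possible pairwise connections grows quadratically with the number of users. A back-of-envelope illustration:

```python
from math import comb

def potential_pairs(n_users: int) -> int:
    """Possible pairwise connections among n users: n * (n - 1) / 2."""
    return comb(n_users, 2)

# Doubling a network roughly quadruples its possible connections.
print(potential_pairs(1_000))   # 499500
print(potential_pairs(2_000))   # 1999000

# At Facebook's reported ~1.8 billion monthly users, the count of
# potential pairings is astronomical.
print(potential_pairs(1_800_000_000))
```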

“When the technology gets boring, that’s when the crazy social effects get interesting,” Mr. Shirky said.

One of those social effects is what Mr. Shirky calls the “shifting of the Overton Window,” a term coined by the researcher Joseph P. Overton to describe the range of subjects that the mainstream media deems publicly acceptable to discuss.

From about the early 1980s until the very recent past, it was usually considered unwise for politicians to court views deemed by most of society to be out of the mainstream, things like overt calls to racial bias (there were exceptions, of course, like the Willie Horton ad). But the internet shifted that window.

“White ethnonationalism was kept at bay because of pluralistic ignorance,” Mr. Shirky said. “Every person who was sitting in their basement yelling at the TV about immigrants or was willing to say white Christians were more American than other kinds of Americans — they didn’t know how many others shared their views.”

Thanks to the internet, now each person with once-maligned views can see that he’s not alone. And when these people find one another, they can do things — create memes, publications and entire online worlds that bolster their worldview, and then break into the mainstream. The groups also become ready targets for political figures like Mr. Trump, who recognize their energy and enthusiasm and tap into it for real-world victories.

Mr. Shirky notes that the Overton Window isn’t just shifting on the right. We see it happening on the left, too. Mr. Sanders campaigned on an anti-Wall Street platform that would have been unthinkable for a Democrat just a decade ago….(More)”

Shareveillance: Subjectivity between open and closed data


Clare Birchall in Big Data and Society: “This article attempts to question modes of sharing and watching to rethink political subjectivity beyond that which is enabled and enforced by the current data regime. It identifies and examines a ‘shareveillant’ subjectivity: a form configured by the sharing and watching that subjects have to withstand and enact in the contemporary data assemblage. Looking at government open and closed data as case studies, this article demonstrates how ‘shareveillance’ produces an anti-political role for the public. In describing shareveillance as, after Jacques Rancière, a distribution of the (digital) sensible, this article posits a politico-ethical injunction to cut into the share and flow of data in order to arrange a more enabling assemblage of data and its affects. In order to interrupt shareveillance, this article borrows a concept from Édouard Glissant and his concern with raced otherness to imagine what a ‘right to opacity’ might mean in the digital context. To assert this right is not to endorse the individual subject in her sovereignty and solitude, but rather to imagine a collective political subjectivity and relationality according to the important question of what it means to ‘share well’ beyond the veillant expectations of the state.

Two questions dominate current debates at the intersection of privacy, governance, security, and transparency: How much, and what kind of data should citizens have to share with surveillant states? And: How much data from government departments should states share with citizens? Yet, these issues are rarely expressed in terms of ‘sharing’ in the way that I will be doing in this article. More often, when thought in tandem with the digital, ‘sharing’ is used in reference to either free trials of software (‘shareware’); the practice of peer-to-peer file sharing; platforms that facilitate the pooling, borrowing, swapping, renting, or selling of resources, skills, and assets that have come to be known as the ‘sharing economy’; or the business of linking and liking on social media, which invites us to share our feelings, preferences, thoughts, interests, photographs, articles, and web links. Sharing in the digital context has been framed as a form of exchange, then, but also communication and distribution (see John, 2013; Wittel, 2011).

In order to understand the politics of open and opaque government data practices, which either share with citizens or ask citizens to share, I will extend existing commentaries on the distributive qualities of sharing by drawing on Jacques Rancière’s notion of the ‘distribution of the sensible’ (2004a) – a settlement that determines what is visible, audible, sayable, knowable and what share or role we each have within it. In the process, I articulate ‘sharing’ with ‘veillance’ (veiller ‘to watch’ is from the Latin vigilare, from vigil, ‘watchful’) to turn the focus from prevalent ways of understanding digital sharing towards a form of contemporary subjectivity. What I call ‘shareveillance’ – a state in which we are always already sharing; indeed, in which any relationship with data is only made possible through a conditional idea of sharing – produces an anti-politicised public caught between different data practices.

I will argue that both open and opaque government data initiatives involve, albeit differently pitched, forms of sharing and veillance. Government practices that share data with citizens involve veillance because they call on citizens to monitor and act upon that data – we are envisioned (‘veiled’ and hailed) as auditing and entrepreneurial subjects. Citizens have to monitor the state’s data, that is, or they are expected to innovate with it and make it profitable. Data sharing therefore apportions responsibility without power. It watches citizens watching the state, delimiting the ways in which citizens can engage with that data and, therefore, the scope of the political per se….(More)”.

The internet is crowdsourcing ways to drain the fake news swamp


At CNET: “Fighting the scourge of fake news online is one of the unexpected new crusades emerging from the fallout of Donald Trump’s upset presidential election win last week. Not surprisingly, the internet has no shortage of ideas for how to get its own house in order.

Eli Pariser, author of the seminal book “The Filter Bubble,” which presaged some of the consequences of online platforms that tend to sequester users into non-overlapping ideological silos, is leading an inspired brainstorming effort via this open Google Doc.

Pariser put out the public call to collaborate via Twitter on Thursday, and within 24 hours, 21 pages’ worth of bullet-pointed suggestions had already piled up in the doc….

Suggestions ranged from the common call for news aggregators and social media platforms to hire more human editors, to launching more media literacy programs or creating “credibility scores” for shared content and/or users who share or report fake news.
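
The “credibility scores” idea is only a bullet point in the crowdsourced doc; a hypothetical scoring function (the name, inputs, and weights below are all invented for illustration) might blend a source’s track record with the outcome of user reports:

```python
def credibility_score(source_accuracy: float, confirmed_reports: int,
                      total_reports: int, weight: float = 0.7) -> float:
    """Hypothetical blend of a source's historical accuracy and the share
    of user reports against it that were confirmed. Returns a value in
    [0, 1]; higher means more credible. The 0.7 weight is illustrative.
    """
    if total_reports == 0:
        report_penalty = 0.0
    else:
        report_penalty = confirmed_reports / total_reports
    return weight * source_accuracy + (1 - weight) * (1 - report_penalty)

# An outlet with a strong track record and few confirmed complaints:
print(round(credibility_score(0.9, confirmed_reports=1, total_reports=50), 3))

# A page repeatedly confirmed to share fabricated stories:
print(round(credibility_score(0.2, confirmed_reports=40, total_reports=50), 3))
```

A platform could then threshold such a score to decide when to attach a “verified” or “disputed” label of the kind the college students proposed.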

Many of the suggestions are aimed at Facebook, which has taken a heavy heaping of criticism since the election and a recent report that found the top fake election news stories saw more engagement on Facebook than the top real election stories….

In addition to the crowdsourced brainstorming approach, plenty of others are chiming in with possible solutions. Author, blogger and journalism professor Jeff Jarvis teamed up with entrepreneur and investor John Borthwick of Betaworks to lay out 15 concrete ideas for addressing fake news on Medium…. The Trust Project at Santa Clara University is working to develop solutions to attack fake news that include systems for author verification and citations….(More)

Between Governance of the Past and Technology of the Future


Think Piece by Heather Grabbe for ESPAS 2016 conference: “In many parts of everyday life, voters are used to a consumer experience where they get instant feedback and personal participation; but party membership, ballot boxes and stump speeches do not offer the same speed, control or personal engagement. The institutions of representative democracy at national and EU level — political parties, elected members, law-making — do not offer the same quality of experience for their ultimate consumers.

This matters because it is causing voters to switch off. Broad participation by most of the population in the practice of democracy is vital for societies to remain open because it ensures pluralism and prevents takeover of power by narrow interests. But in some countries and some elections, turnout is regularly below a third of registered voters, especially in European Parliament elections.

The internet is driving the major trends that create this disconnection and disruption. Here are four vital areas in which politics should adapt, including at EU level:

  • Expectation. Voters have a growing sense that political parties and law-making are out of touch, but not that politics is irrelevant. …
  • Affiliation. … people are interested in new forms of affiliation, especially through social media and alternative networks. …
  • Location. Digital technology allows people to find myriad new ways to express their political views publicly, outside of formal political spaces. …
  • Information. The internet has made vast amounts of data and a huge range of information sources across an enormous spectrum of issues available to every human with an internet connection. How is this information overload affecting engagement with politics? ….(More)”

Is Social Media Killing Democracy?


Phil Howard at Culture Digitally: “This is the big year for computational propaganda—using immense data sets to manipulate public opinion over social media.  Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits. 

Platforms like Twitter and Facebook now provide a structure for our political lives.  We’ve always relied on many kinds of sources for our political news and information.  Family, friends, news organizations, charismatic politicians certainly predate the internet.  But whereas those are sources of information, social media now provides the structure for political conversation.  And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.

First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends.  …

Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”…

The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations we hold journalists and civil society groups to….

Facebook has run several experiments now, published in scholarly journals, demonstrating that they have the ability to accurately anticipate and measure social trends.  Whereas journalists and social scientists feel an obligation to openly analyze and discuss public preferences, we do not expect this of Facebook.  The network effects that clearly were unmeasured by pollsters were almost certainly observable to Facebook.  When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda.  The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies.  Their engineering decisions are effectively editorial decisions, and we need to expect more openness about how their algorithms work.  And we should expect them to deliberate about their editorial decisions.

There are some ways to fix these problems.  Opaque software algorithms shape what people find in their news feeds.  We’ve all noticed fake news stories, often called clickbait, and while these can be an entertaining part of using the internet, it is bad when they are used to manipulate public opinion.  These algorithms work as “bots” on social media platforms like Twitter, where they were used in both the Brexit and US Presidential campaign to aggressively advance the case for leaving Europe and the case for electing Trump.  Similar algorithms work behind the scenes on Facebook, where they govern what content from your social networks actually gets your attention. 

So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms….(More)”.
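
What such an audit might mean concretely is left open. One minimal version, assuming only black-box access to a ranking function, is to feed it a balanced pool of items and measure what survives to the top (the pool, scores, and ranking below are invented for the sketch):

```python
def audit_top_k(rank, items, k=3):
    """Black-box audit: what share of an algorithm's top-k slots go to
    low-reliability items, given a balanced input pool?
    `rank` is any feed-ranking function mapping items to ordered items.
    """
    top = rank(items)[:k]
    low = sum(1 for it in top if it["reliability"] < 0.5)
    return low / k

# A balanced pool: half reliable, half not (all scores hypothetical).
pool = [
    {"id": 1, "clicks": 900, "reliability": 0.2},
    {"id": 2, "clicks": 700, "reliability": 0.3},
    {"id": 3, "clicks": 400, "reliability": 0.9},
    {"id": 4, "clicks": 300, "reliability": 0.8},
]

def engagement_rank(items):
    """Stand-in for an opaque platform algorithm: rank by clicks."""
    return sorted(items, key=lambda it: it["clicks"], reverse=True)

print(audit_top_k(engagement_rank, pool, k=2))  # 1.0: top slots all low-reliability
```

Auditors never need to see inside the algorithm: a balanced input that yields a skewed output is itself the evidence.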

The Participatory Condition in the Digital Age


Book edited by Darin Barney, Gabriella Coleman, Christine Ross, Jonathan Sterne, and Tamar Tembeck:

“Just what is the “participatory condition”? It is the situation in which taking part in something with others has become both environmental and normative. The fact that we have always participated does not mean we have always lived under the participatory condition. What is distinctive about the present is the extent to which the everyday social, economic, cultural, and political activities that comprise simply being in the world have been thematized and organized around the priority of participation.

Structured along four axes investigating the relations between participation and politics, surveillance, openness, and aesthetics, The Participatory Condition in the Digital Age comprises fifteen essays that explore the promises, possibilities, and failures of contemporary participatory media practices as related to power, Occupy Wall Street, the Arab Spring uprisings, worker-owned cooperatives for the post-Internet age; paradoxes of participation, media activism, open source projects; participatory civic life; commercial surveillance; contemporary art and design; and education.

This book represents the most comprehensive and transdisciplinary endeavor to date to examine the nature, place, and value of participation in the digital age. Just as in 1979, when Jean-François Lyotard proposed that “the postmodern condition” was characterized by the questioning of historical grand narratives, The Participatory Condition in the Digital Age investigates how participation has become a central preoccupation of our time….(More)”

Big Data Is Not a Monolith


Book edited by Cassidy R. Sugimoto, Hamid R. Ekbia and Michael Mattioli: “Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, or identify linguistic correlations in large corpora of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies.

The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data’s ramifications. The contributors look at big data’s effect on individuals as it exerts social control through monitoring, mining, and manipulation; big data and society, examining both its empowering and its constraining effects; big data and science, considering issues of data governance, provenance, reuse, and trust; and big data and organizations, discussing data responsibility, “data harm,” and decision making….(More)”

Improving Services—At What Cost? Examining the Ethics of Twitter Research


Case study by Sara Mannheimer, Scott W. H. Young and Doralyn Rossmann: “As social media use has become widespread, academic and corporate researchers have identified social networking services as sources of detailed information about people’s viewpoints and behaviors. Social media users share thoughts, have conversations, and build communities in open, online spaces, and researchers analyze social media data for a variety of purposes—from tracking the spread of disease (Lampos & Cristianini, 2010) to conducting market research (Patino, Pitta, & Quinones, 2012; Hornikx & Hendriks, 2015) to forecasting elections (Tumasjan et al., 2010). Twitter in particular has emerged as a leading platform for social media research, partly because user data from non-private Twitter accounts is openly accessible via an application programming interface (API). This case study describes research conducted by Montana State University (MSU) librarians to analyze the MSU Library’s Twitter community, and the ethical questions that we encountered over the course of the research. The case study will walk through our Twitter research at the MSU Library, and then suggest discussion questions to frame an ethical conversation surrounding social media research. We offer a number of areas of ethical inquiry that we recommend be engaged with as a cohesive whole….(More)”.
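
The case study’s data came through Twitter’s API; the records below are a simplified stand-in (real API responses are JSON objects with many more fields, and the handles and texts here are invented), but an analysis step such as tallying hashtags across a library’s community reduces to something like:

```python
from collections import Counter

# Simplified stand-in for tweets fetched from the Twitter API for
# public (non-private) accounts; handles and texts are invented.
tweets = [
    {"user": "student_a", "text": "Study room booked! #msulibrary #finals"},
    {"user": "student_b", "text": "Great workshop today #msulibrary"},
    {"user": "staff_c", "text": "New maps collection on display #maps"},
]

def hashtags(text: str) -> list[str]:
    """Extract lowercase hashtags from a tweet's text."""
    return [w.lower().strip(".,!?") for w in text.split() if w.startswith("#")]

tag_counts = Counter(t for tweet in tweets for t in hashtags(tweet["text"]))
print(tag_counts.most_common(2))
```

Even this trivial aggregation raises the chapter’s ethical questions: the accounts are technically public, but the users never consented to being profiled as a community, which is exactly the tension the discussion questions are meant to surface.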