I conduct an experiment which examines the impact of group norm promotion and social sanctioning on racist online harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur. This paper extends findings from lab experiments to a naturalistic setting using an objective, behavioral outcome measure and a continuous 2-month data collection period. This represents an advance in the study of prejudiced behavior….(More)”

Shareveillance: Subjectivity between open and closed data
Clare Birchall in Big Data and Society: “This article attempts to question modes of sharing and watching to rethink political subjectivity beyond that which is enabled and enforced by the current data regime. It identifies and examines a ‘shareveillant’ subjectivity: a form configured by the sharing and watching that subjects have to withstand and enact in the contemporary data assemblage. Looking at government open and closed data as case studies, this article demonstrates how ‘shareveillance’ produces an anti-political role for the public. In describing shareveillance as, after Jacques Rancière, a distribution of the (digital) sensible, this article posits a politico-ethical injunction to cut into the share and flow of data in order to arrange a more enabling assemblage of data and its affects. In order to interrupt shareveillance, this article borrows a concept from Édouard Glissant and his concern with raced otherness to imagine what a ‘right to opacity’ might mean in the digital context. To assert this right is not to endorse the individual subject in her sovereignty and solitude, but rather to imagine a collective political subjectivity and relationality according to the important question of what it means to ‘share well’ beyond the veillant expectations of the state.
Two questions dominate current debates at the intersection of privacy, governance, security, and transparency: How much, and what kind of data should citizens have to share with surveillant states? And: How much data from government departments should states share with citizens? Yet, these issues are rarely expressed in terms of ‘sharing’ in the way that I will be doing in this article. More often, when thought in tandem with the digital, ‘sharing’ is used in reference to either free trials of software (‘shareware’); the practice of peer-to-peer file sharing; platforms that facilitate the pooling, borrowing, swapping, renting, or selling of resources, skills, and assets that have come to be known as the ‘sharing economy’; or the business of linking and liking on social media, which invites us to share our feelings, preferences, thoughts, interests, photographs, articles, and web links. Sharing in the digital context has been framed as a form of exchange, then, but also communication and distribution (see John, 2013; Wittel, 2011).
In order to understand the politics of open and opaque government data practices, which either share with citizens or ask citizens to share, I will extend existing commentaries on the distributive qualities of sharing by drawing on Jacques Rancière’s notion of the ‘distribution of the sensible’ (2004a) – a settlement that determines what is visible, audible, sayable, knowable and what share or role we each have within it. In the process, I articulate ‘sharing’ with ‘veillance’ (veiller ‘to watch’ is from the Latin vigilare, from vigil, ‘watchful’) to turn the focus from prevalent ways of understanding digital sharing towards a form of contemporary subjectivity. What I call ‘shareveillance’ – a state in which we are always already sharing; indeed, in which any relationship with data is only made possible through a conditional idea of sharing – produces an anti-politicised public caught between different data practices.
I will argue that both open and opaque government data initiatives involve, albeit differently pitched, forms of sharing and veillance. Government practices that share data with citizens involve veillance because they call on citizens to monitor and act upon that data – we are envisioned (‘veiled’ and hailed) as auditing and entrepreneurial subjects. Citizens have to monitor the state’s data, that is, or they are expected to innovate with it and make it profitable. Data sharing therefore apportions responsibility without power. It watches citizens watching the state, delimiting the ways in which citizens can engage with that data and, therefore, the scope of the political per se….(More)”.
The internet is crowdsourcing ways to drain the fake news swamp
Eric Mack at CNET: “Fighting the scourge of fake news online is one of the unexpected new crusades emerging from the fallout of Donald Trump’s upset presidential election win last week. Not surprisingly, the internet has no shortage of ideas for how to get its own house in order.
Eli Pariser, author of the seminal book “The Filter Bubble,” which presaged some of the consequences of online platforms that tend to sequester users into non-overlapping ideological silos, is leading an inspired brainstorming effort via this open Google Doc.
Pariser put out the public call to collaborate via Twitter on Thursday, and within 24 hours, 21 pages’ worth of bullet-pointed suggestions had already piled up in the doc….
Suggestions ranged from the common call for news aggregators and social media platforms to hire more human editors, to launching more media literacy programs or creating “credibility scores” for shared content and/or users who share or report fake news.
Many of the suggestions are aimed at Facebook, which has taken a heavy heaping of criticism since the election and a recent report that found the top fake election news stories saw more engagement on Facebook than the top real election stories….
In addition to the crowdsourced brainstorming approach, plenty of others are chiming in with possible solutions. Author, blogger and journalism professor Jeff Jarvis teamed up with entrepreneur and investor John Borthwick of Betaworks to lay out 15 concrete ideas for addressing fake news on Medium….The Trust Project at Santa Clara University is working to develop solutions to attack fake news that include systems for author verification and citations….(More)
Between Governance of the Past and Technology of the Future
Think Piece by Heather Grabbe for the ESPAS 2016 conference: “In many parts of everyday life, voters are used to a consumer experience where they get instant feedback and personal participation; but party membership, ballot boxes and stump speeches do not offer the same speed, control or personal engagement. The institutions of representative democracy at national and EU level — political parties, elected members, law-making — do not offer the same quality of experience for their ultimate consumers.
This matters because it is causing voters to switch off. Broad participation by most of the population in the practice of democracy is vital for societies to remain open because it ensures pluralism and prevents takeover of power by narrow interests. But in some countries and some elections, turnout is regularly below a third of registered voters, especially in European Parliament elections.
The internet is driving the major trends that create this disconnection and disruption. Here are four vital areas in which politics should adapt, including at EU level:
- Expectation. Voters have a growing sense that political parties and law-making are out of touch, but not that politics is irrelevant. …
- Affiliation. … people are interested in new forms of affiliation, especially through social media and alternative networks. …
- Location. Digital technology allows people to find myriad new ways to express their political views publicly, outside of formal political spaces. …
- Information. The internet has made vast amounts of data and a huge range of information sources across an enormous spectrum of issues available to every human with an internet connection. How is this information overload affecting engagement with politics? ….(More)”
Is Social Media Killing Democracy?
Phil Howard at Culture Digitally: “This is the big year for computational propaganda—using immense data sets to manipulate public opinion over social media. Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits.
Platforms like Twitter and Facebook now provide a structure for our political lives. We’ve always relied on many kinds of sources for our political news and information. Family, friends, news organizations, charismatic politicians certainly predate the internet. But whereas those are sources of information, social media now provides the structure for political conversation. And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.
First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends. …
Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”…
The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations we hold journalists and civil society groups to….
Facebook has run several experiments now, published in scholarly journals, demonstrating that they have the ability to accurately anticipate and measure social trends. Whereas journalists and social scientists feel an obligation to openly analyze and discuss public preferences, we do not expect this of Facebook. The network effects that clearly were unmeasured by pollsters were almost certainly observable to Facebook. When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda. The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies. Their engineering decisions are effectively editorial decisions, and we need to expect more openness about how their algorithms work. And we should expect them to deliberate about their editorial decisions.
There are some ways to fix these problems. Opaque software algorithms shape what people find in their news feeds. We’ve all noticed fake news stories, often called clickbait, and while these can be an entertaining part of using the internet, it is bad when they are used to manipulate public opinion. These algorithms work as “bots” on social media platforms like Twitter, where they were used in both the Brexit and US Presidential campaign to aggressively advance the case for leaving Europe and the case for electing Trump. Similar algorithms work behind the scenes on Facebook, where they govern what content from your social networks actually gets your attention.
So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms….(More)”.
Portugal has announced the world’s first nationwide participatory budget
Graça Fonseca at apolitical: “Portugal has announced the world’s first participatory budget on a national scale. The project will let people submit ideas for what the government should spend its money on, and then vote on which ideas are adopted.
Although participatory budgeting has become increasingly popular around the world in the past few years, it has so far been confined to cities and regions, and no country that we know of has attempted it nationwide. To reach as many people as possible, Portugal is also examining another innovation: letting people cast their votes via ATMs.
‘It’s about quality of life, it’s about the quality of public space, it’s about the quality of life for your children, it’s about your life, OK?’ Graça Fonseca, the minister responsible, told Apolitical. ‘And you have a huge deficit of trust between people and the institutions of democracy. That’s the point we’re starting from and, if you look around, Portugal is not an exception in that among Western societies. We need to build that trust and, in my opinion, it’s urgent. If you don’t do anything, in ten, twenty years you’ll have serious problems.’
Although the official window for proposals begins in January, some have already been submitted to the project’s website. One suggests equipping kindergartens with technology to teach children about robotics. Using the open-source platform Arduino, the plan is to let children play with the tech and so foster scientific understanding from the earliest age.
Proposals can be made in the areas of science, culture, agriculture and lifelong learning, and there will be more than forty events in the new year for people to present and discuss their ideas.
The organisers hope that it will go some way to restoring closer contact between government and its citizens. Previous projects have shown that people who don’t vote in general elections often do cast their ballot on the specific proposals that participatory budgeting entails. Moreover, those who make the proposals often become passionate about them, campaigning for votes, flyering, making YouTube videos, going door-to-door and so fuelling a public discussion that involves ever more people in the process.
On the other side, it can bring public servants nearer to their fellow citizens by sharpening their understanding of what people want and what their priorities are. It can also raise the quality of public services by directing them more precisely to where they’re needed as well as by tapping the collective intelligence and imagination of thousands of participants….
Although it will not be used this year, because the project is still very much in the trial phase, the use of ATMs is potentially revolutionary. As Fonseca puts it, ‘In every remote part of the country, you might have nothing else, but you have an ATM.’ Moreover, an ATM could display proposals and allow people to vote directly, not least because it already contains a secure way of verifying their identity. At the moment, for comparison, people can vote by text or online, sending in the number from their ID card, which is checked against a database….(More)”.
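As a rough sketch of the verification step this implies (a submitted ID card number is checked against a registry, and each ID can vote only once), consider the following; the names, data structures and one-vote rule here are illustrative assumptions, not a description of the actual system:

```python
# Illustrative sketch only: a vote is recorded if the submitted ID card
# number appears in the voter registry and has not voted before. The
# registry contents and function names are hypothetical.

registry = {"12345678", "87654321"}   # hypothetical valid ID card numbers
votes = {}                            # id_number -> chosen proposal

def cast_vote(id_number: str, proposal: str) -> bool:
    """Record a vote if the ID is registered and has not already voted."""
    if id_number not in registry or id_number in votes:
        return False
    votes[id_number] = proposal
    return True

print(cast_vote("12345678", "robotics kits for kindergartens"))  # True
print(cast_vote("12345678", "a different proposal"))             # False: one vote per ID
```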
Wikipedia’s not as biased as you might think
Ananya Bhattacharya in Quartz: “The internet is as open as people make it. Often, people limit their Facebook and Twitter circles to like-minded people and only follow certain subreddits, blogs, and news sites, creating an echo chamber of sorts. In a sea of biased content, Wikipedia is one of the few online outlets that strives for neutrality. After 15 years in operation, it’s starting to see results.
Researchers at Harvard Business School evaluated almost 4,000 articles in Wikipedia’s online database against the same entries in Encyclopædia Britannica to compare their biases. They focused on English-language articles about US politics, especially controversial topics, that appeared in both outlets in 2012.
“That is just not a recipe for coming to a conclusion,” Shane Greenstein, one of the study’s authors, said in an interview. “We were surprised that Wikipedia had not failed, had not fallen apart in the last several years.”
Greenstein and his co-author Feng Zhu categorized each article as “blue” or “red.” Drawing from research in political science, they identified terms that are idiosyncratic to each party. For instance, political scientists have identified that Democrats were more likely to use phrases such as “war in Iraq,” “civil rights,” and “trade deficit,” while Republicans used phrases such as “economic growth,” “illegal immigration,” and “border security.”…
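To make the classification concrete, here is a minimal sketch of phrase-based slant scoring in the spirit of what the excerpt describes; the phrase lists are the examples quoted above, and the normalized-difference scoring rule is our simplifying assumption, not Greenstein and Zhu’s actual measure:

```python
# A minimal sketch of phrase-based slant scoring. The phrase lists are the
# examples quoted above; the scoring rule is a simplifying assumption, not
# the authors' actual method.

DEMOCRAT_PHRASES = ["war in iraq", "civil rights", "trade deficit"]
REPUBLICAN_PHRASES = ["economic growth", "illegal immigration", "border security"]

def slant_score(text: str) -> float:
    """Return a score in [-1, 1]: negative leans "blue", positive leans "red"."""
    text = text.lower()
    blue = sum(text.count(p) for p in DEMOCRAT_PHRASES)
    red = sum(text.count(p) for p in REPUBLICAN_PHRASES)
    total = blue + red
    return 0.0 if total == 0 else (red - blue) / total

article = "Coverage of the war in Iraq dominated, with border security a close second."
print(slant_score(article))  # 0.0 here: one "blue" phrase offsets one "red" phrase
```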
“In comparison to expert-based knowledge, collective intelligence does not aggravate the bias of online content when articles are substantially revised,” the authors wrote in the paper. “This is consistent with a best-case scenario in which contributors with different ideologies appear to engage in fruitful online conversations with each other, in contrast to findings from offline settings.”
More surprisingly, the authors found that the 2.8 million registered volunteer editors who were reviewing the articles also became less biased over time. “You can ask questions like ‘do editors with red tendencies tend to go to red articles or blue articles?’” Greenstein said. “You find a prevalence of opposites attract, and that was striking.” The researchers even identified the political stance for a number of anonymous editors based on their IP locations, and the trend held steadfast….(More)”
The People’s Code – Now on Code.gov
Over the past few years, we’ve taken unprecedented action to help Americans engage with their Government in new and meaningful ways.
Using Vote.gov, citizens can now quickly navigate their state’s voter registration process through an easy-to-use site. Veterans can go to Vets.gov to discover, apply for, track and manage their benefits in one user-friendly place. And for the first time ever, citizens can send a note to President Obama simply by messaging the White House on Facebook.
By harnessing 21st Century technology and innovation, we’re improving the Federal Government’s ability to provide better citizen-centered services and are making the Federal Government smarter, savvier, and more effective for the American people. At the same time, we’re building many of these new digital tools, such as We the People, the White House Facebook bot, and Data.gov, in the open so that as the Government uses technology to re-imagine and improve the way people interact with it, others can too.
The code for these platforms is, after all, the People’s Code – and today we’re excited to announce that it’ll be accessible from one place, Code.gov, for the American people to explore, improve, and innovate.
The launch of Code.gov comes on the heels of the release of the Federal Source Code Policy, which seeks to further improve access to the Federal Government’s custom-developed software. It’s a step we took to help Federal agencies avoid duplicative custom software purchases and promote innovation and cross-agency collaboration. And it’s a step we took to enable the brightest minds inside and outside of government to work together to ensure that Federal code is reliable and effective.
Built in the open, the newly-launched Code.gov already boasts access to nearly 50 open source projects from over 10 agencies – and we expect this number to grow over the coming months as agencies work to implement the Federal Source Code Policy. Further, Code.gov will provide useful tools and best practices to help agencies implement the new policy. For example, starting today agencies can begin populating their enterprise code inventories using the metadata schema on Code.gov, discover various methods on how to build successful open source projects, and much more….(More)”
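For a sense of what such an inventory could look like, here is a hypothetical entry written as a Python dict; the field names are indicative of the kind of metadata the policy calls for, not the authoritative Code.gov schema:

```python
# Hypothetical example of an enterprise code inventory entry, for
# illustration only. Field names approximate the metadata the Federal
# Source Code Policy asks for; consult Code.gov for the authoritative schema.
import json

inventory_entry = {
    "agency": "GSA",                  # hypothetical agency
    "projects": [
        {
            "name": "example-project",
            "description": "A reusable tool for processing public records.",
            "license": "https://www.usa.gov/publicdomain/label/1.0/",
            "openSourceProject": 1,   # 1 = released as open source
            "repository": "https://github.com/GSA/example-project",
            "contact": {"email": "open-source@example.gov"},
        }
    ],
}

print(json.dumps(inventory_entry, indent=2))
```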
The Age of Sharing
Book by Nicholas A. John: “…But the word ‘sharing’ also camouflages commercial or even exploitative relations. Websites say they share data with advertisers, although in reality they sell it, while parts of the sharing economy look a great deal like rental services. Ultimately, it is argued, practices described as sharing and critiques of those practices have common roots. Consequently, the metaphor of sharing now constructs significant swathes of our social practices and provides the grounds for critiquing them; it is a mode of participation in the capitalist order as well as a way of resisting it.
Drawing on nineteenth-century literature, Alcoholics Anonymous, the American counterculture, reality TV, hackers, Airbnb, Facebook and more, The Age of Sharing offers a rich account of a complex contemporary keyword. It will appeal to students and scholars of the Internet, digital culture and linguistics….(More)”
Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
“The advent of social apps, smartphones and ubiquitous computing has brought a great transformation to our day-to-day life. The incredible pace with which new and disruptive services continue to emerge challenges our perception of privacy. To keep apace with this rapidly evolving cyber reality, we need to devise agile methods and frameworks for developing privacy-preserving systems that align with evolving users’ privacy expectations.
Previous efforts have tackled this with the assumption that privacy norms are provided through existing sources such as law, privacy regulations and legal precedents. They have focused on formally expressing privacy norms and devising a corresponding logic to enable automatic inconsistency checks and efficient enforcement of the logic.
However, because many of the existing regulations and privacy handbooks were enacted well before the Internet revolution took place, they often lag behind and do not adequately reflect the realities of modern systems. For example, the Family Educational Rights and Privacy Act (FERPA) was enacted in 1974, long before Facebook, Google and many other online applications were used in an educational context. More recent legislation faces similar challenges, as novel services introduce new ways to exchange information, and consequently shape new, unconsidered information flows that can change our collective perception of privacy.
Crowdsourcing Contextual Privacy Norms
Armed with the theory of Contextual Integrity (CI), in our work we are exploring ways to uncover societal norms by leveraging advances in crowdsourcing technology.
In our recent paper, we present a methodology that we believe can be used to extract a societal notion of privacy expectations. The results can be used to fine-tune existing privacy guidelines, as well as to get a better perspective on users’ expectations of privacy.
CI defines privacy as a collection of norms (privacy rules) that reflect appropriate information flows between different actors. Norms capture who shares what, with whom, in what role, and under which conditions. For example, while you are comfortable sharing your medical information with your doctor, you might be less inclined to do so with your colleagues.
We use CI as a proxy to reason about privacy in the digital world and a gateway to understanding how people perceive privacy in a systematic way. Crowdsourcing is a great tool for this method. We are able to ask hundreds of people how they feel about a particular information flow, and then we can capture their input and map it directly onto the CI parameters. We used a simple template to write Yes-or-No questions to ask our crowdsourcing participants:
“Is it acceptable for the [sender] to share the [subject’s] [attribute] with [recipient] [transmission principle]?”
For example:
“Is it acceptable for the student’s professor to share the student’s record of attendance with the department chair if the student is performing poorly?”
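As a minimal sketch of how such a flow could be represented and rendered into the template above (names and structure are illustrative, not the paper’s actual implementation):

```python
# Illustrative sketch: represent one contextual information flow by its CI
# parameters and instantiate the Yes-or-No survey template with it. The
# class and function names are our assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class InformationFlow:
    sender: str                  # who shares the information
    subject: str                 # whose information it is
    attribute: str               # what information is shared
    recipient: str               # who receives it
    transmission_principle: str  # condition under which the flow occurs

TEMPLATE = ("Is it acceptable for the {sender} to share the {subject}'s "
            "{attribute} with the {recipient} {transmission_principle}?")

def to_question(flow: InformationFlow) -> str:
    """Instantiate the survey template with one flow's CI parameters."""
    return TEMPLATE.format(**vars(flow))

flow = InformationFlow(
    sender="student's professor",
    subject="student",
    attribute="record of attendance",
    recipient="department chair",
    transmission_principle="if the student is performing poorly",
)
print(to_question(flow))
```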
In our experiments, we leveraged Amazon’s Mechanical Turk (AMT) to ask 450 turkers over 1,400 such questions. Each question represents a specific contextual information flow that users can approve, disapprove or mark under the “Doesn’t Make Sense” category; the last category can be used when 1) the sender is unlikely to have the information, 2) the receiver would already have the information, or 3) the question is ambiguous….(More)