How urban design can make or break protests


Peter Schwartzstein in Smithsonian Magazine: “If protesters could plan a perfect stage to voice their grievances, it might look a lot like Athens, Greece. Its broad, yet not overly long, central boulevards are almost tailor-made for parading. Its large parliament-facing square, Syntagma, forms a natural focal point for marchers. With a warren of narrow streets surrounding the center, including the rebellious district of Exarcheia, it’s often remarkably easy for demonstrators to steal away if the going gets rough.

Los Angeles, by contrast, is a disaster for protesters. It has no wholly recognizable center, few walkable distances, and little in the way of protest-friendly space. As far as longtime city activists are concerned, just amassing small crowds can be an achievement. “There’s really just no place to go; the city is structured in a way that you’re in a city but you’re not in a city,” says David Adler, general coordinator at the Progressive International, a new global political group. “A protest is the coming together of a large group of people, and that’s just counter to the idea of L.A.”

Among the complex medley of moving parts that guide protest movements, urban design might seem like a fairly peripheral concern. But try telling that to demonstrators from Houston to Beijing, two cities whose geography complicates public protest. Low urban density can thwart mass participation. Limited public space can deprive protesters of the visibility, and hence the momentum, they need to sustain themselves. On those occasions when proceedings turn messy or violent, alleyways, parks, and labyrinthine apartment buildings can mean the difference between detention and escape….(More)”.

Digital diplomacy: States go online


Philipp Grüll at Euractiv: “When Germany takes over the European Council Presidency on 1 July, Berlin will have plenty to do. The draft programme seen by EURACTIV Germany focuses on the major challenges of our time: climate change, digitisation, and the coronavirus.

Berlin wants to establish ‘European Digital Diplomacy’ by creating a ‘Digital Diplomacy Network’ to exist alongside the ‘Technospheres USA and China’.

This should not only be about keeping European industries competitive; the term “digital diplomacy” is, after all, not new.

Ilan Manor, a researcher at Oxford University and author of numerous papers on digital diplomacy, defines it as “the use of digital tools to achieve foreign policy goals.”

This definition is intentionally broad, Manor told EURACTIV Germany, because technology can be used in so many areas of international relations….

Manor divides the development of this digital public diplomacy into two phases.

In the first one, from 2008 to 2015, governments took the first cautious steps. They experimented and launched random and often directionless online activities. Foreign ministries and embassies set up social media accounts. Sweden opened a virtual embassy in the online video game “Second Life.”

It was only in the second phase, from 2015 to the present, that foreign ministries began to act more strategically. They used “Big Data” to record public opinion in other countries, and also to track down online propaganda against their own country.

As an example, Manor cites the Russian embassy in the United Kingdom, which is said to have deliberately disseminated anti-EU narratives prior to the Brexit referendum, packaged in funny and seemingly innocent Internet memes that spread rapidly….(More)”.

Open Data from Authoritarian Regimes: New Opportunities, New Challenges


Paper by Ruth D. Carlitz and Rachael McLellan: “Data availability has long been a challenge for scholars of authoritarian politics. However, the promotion of open government data—through voluntary initiatives such as the Open Government Partnership and soft conditionalities tied to foreign aid—has motivated many of the world’s more closed regimes to produce and publish fine-grained data on public goods provision, taxation, and more. While this has been a boon to scholars of autocracies, we argue that the politics of data production and dissemination in these countries create new challenges.

Systematically missing or biased data may jeopardize research integrity and lead to false inferences. We provide evidence of such risks from Tanzania. The example also shows how data manipulation fits into the broader set of strategies that authoritarian leaders use to legitimate and prolong their rule. Comparing data released to the public on local tax revenues with verified internal figures, we find that the public data appear to significantly underestimate opposition performance. This can bias studies on local government capacity and risk parroting the party line in data form. We conclude by providing a framework that researchers can use to anticipate and detect manipulation in newly available data….(More)”.
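The comparison the authors describe can be sketched as a simple screen: compute the ratio of published to internally verified figures and flag districts where the public release falls well short. The district names, revenue numbers, and threshold below are hypothetical illustrations, not the paper's Tanzanian data:

```python
# Hypothetical district-level tax revenue figures (in millions);
# illustrative only, not the paper's actual Tanzanian data
public = {"District A": 1.2, "District B": 0.4, "District C": 2.1}
internal = {"District A": 1.3, "District B": 1.1, "District C": 2.2}

def underreporting_flags(public, internal, threshold=0.8):
    """Flag districts whose published figure is far below the verified one.

    A ratio well below 1 suggests the public release understates that
    district's revenue performance."""
    flags = {}
    for district, true_value in internal.items():
        ratio = public[district] / true_value
        flags[district] = ratio < threshold
    return flags

flags = underreporting_flags(public, internal)
# In this toy example only District B (imagine it opposition-held) is flagged
```

A real application would also need to model legitimate measurement error, since not every gap between public and internal figures is manipulation.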

The “Social” Side of Big Data: Teaching BD Analytics to Political Science Students


Case report by Giampiero Giacomello and Oltion Preka: “In an increasingly technology-dependent world, it is not surprising that STEM (Science, Technology, Engineering, and Mathematics) graduates are in high demand. This state of affairs, however, has led the public to overlook the fact that computing and artificial intelligence are not only naturally interdisciplinary, but that a huge portion of generated data comes from human–computer interactions; such data are thus social in character and nature. Hence, social science practitioners should be in demand too, but this does not seem to be the case. One reason is that political and social science departments worldwide tend to remain in their “comfort zone” and see their disciplines quite traditionally, but by doing so they cut themselves off from many positions today. The authors believed these conditions could and should be changed, and over a few years they created a specifically tailored course for students in Political Science. This paper examines the experience of the last year of the program, which, after several tweaks and adjustments, is now fully operational. The results and students’ appreciation are quite remarkable, so the authors considered the experience worth sharing, in the hope that colleagues in social and political science departments may feel encouraged to replicate it….(More)”

Conceptualizing the Impact of Digital Interference in Elections: A Framework and Agenda for Future Research


Paper by Nahema Marchal: “Concerns over digital interference in elections are widespread. Yet evidence of its impact is still thin and fragmented. How do malicious uses of social media shape, transform, and distort democratic processes? And how should we characterize this impact? Existing research into the effects of social media manipulation has largely focused on measuring its purported impact on opinion swings and voting behavior. Though laudable, this focus might be too reductive. Drawing on normative theories of liberal democracy, in this paper I argue that the threat of digital interference does not lie in its capacity to change people’s views but rather in its power to undermine popular perceptions of electoral integrity, with potentially far-reaching consequences for public trust. Following this assessment, I formulate a preliminary research agenda and highlight previously overlooked relationships that could be explored to better understand how malicious uses of social media might shape such attitudes and to what effect….(More)”.

New privacy-protected Facebook data for independent research on social media’s impact on democracy


Chaya Nayak at Facebook: “In 2018, Facebook began an initiative to support independent academic research on social media’s role in elections and democracy. This first-of-its-kind project seeks to provide researchers access to privacy-preserving data sets in order to support research on these important topics.

Today, we are announcing that we have substantially increased the amount of data we’re providing to 60 academic researchers across 17 labs and 30 universities around the world. This release delivers on the commitment we made in July 2018 to share a data set that enables researchers to study information and misinformation on Facebook, while also ensuring that we protect the privacy of our users.

This new data release supplants data we released in the fall of 2019. That 2019 data set consisted of links that had been shared publicly on Facebook by at least 100 unique Facebook users. It included information about share counts, ratings by Facebook’s third-party fact-checkers, and user reporting on spam, hate speech, and false news associated with those links. We have expanded the data set to now include more than 38 million unique links with new aggregated information to help academic researchers analyze how many people saw these links on Facebook and how they interacted with that content – including views, clicks, shares, likes, and other reactions. We’ve also aggregated these shares by age, gender, country, and month. And, we have expanded the time frame covered by the data from January 2017 – February 2019 to January 2017 – August 2019.
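The kind of aggregation described, interaction counts rolled up by URL and demographic cell so that no individual-level rows survive, can be sketched as follows. The field names and records are hypothetical illustrations, not Facebook's actual schema:

```python
from collections import defaultdict

# Hypothetical share/interaction records; field names are illustrative,
# not Facebook's released schema
shares = [
    {"url": "https://example.com/a", "age_bucket": "25-34", "gender": "F",
     "country": "US", "month": "2019-03", "action": "share"},
    {"url": "https://example.com/a", "age_bucket": "25-34", "gender": "F",
     "country": "US", "month": "2019-03", "action": "click"},
    {"url": "https://example.com/b", "age_bucket": "65+", "gender": "M",
     "country": "DE", "month": "2019-04", "action": "share"},
]

# Roll individual events up into counts per (url, demographic cell, action);
# only these aggregates, not the underlying rows, would be released
counts = defaultdict(int)
for s in shares:
    key = (s["url"], s["age_bucket"], s["gender"],
           s["country"], s["month"], s["action"])
    counts[key] += 1
```

In the released data set these aggregate counts are additionally protected with noise, as the next paragraphs explain.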

With this data, researchers will be able to understand important aspects of how social media shapes our world. They’ll be able to make progress on the research questions they proposed, such as “how to characterize mainstream and non-mainstream online news sources in social media” and “studying polarization, misinformation, and manipulation across multiple platforms and the larger information ecosystem.”

In addition to the data set of URLs, researchers will continue to have access to CrowdTangle and Facebook’s Ad Library API to augment their analyses. Per the original plan for this project, outside of a limited review to ensure that no confidential or user data is inadvertently released, these researchers will be able to publish their findings without approval from Facebook.

We are sharing this data with researchers while continuing to prioritize the privacy of people who use our services. This new data set, like the data we released before it, is protected by a method known as differential privacy. Researchers have access to data tables from which they can learn about aggregated groups, but where they cannot identify any individual user. As Harvard University’s Privacy Tools project puts it:

“The guarantee of a differentially private algorithm is that its behavior hardly changes when a single individual joins or leaves the dataset — anything the algorithm might output on a database containing some individual’s information is almost as likely to have come from a database without that individual’s information. … This gives a formal guarantee that individual-level information about participants in the database is not leaked.” …(More)”
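The quoted guarantee can be made concrete with the Laplace mechanism, the textbook way to make a count query differentially private. The post does not specify Facebook's exact mechanism; this sketch, with made-up records, is illustrative only:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variate
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon=0.5):
    """Differentially private count: the true count plus Laplace noise.

    A count query has sensitivity 1 (one person joining or leaving the
    dataset changes the answer by at most 1), so noise drawn with
    scale 1/epsilon suffices for epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: a noisy count of users over 30
records = [{"age": a} for a in (25, 31, 42, 19, 56, 33)]
noisy = dp_count(records, lambda r: r["age"] > 30)
```

Because the noise dominates any single person's contribution, the published count is almost equally likely whether or not any one individual is in the data, which is exactly the property the Harvard quote describes.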

Data-driven elections


Introduction to Special Issue of Internet Policy Review by Colin J. Bennett and David Lyon: “There is a pervasive assumption that elections can be won and lost on the basis of which candidate or party has the better data on the preferences and behaviour of the electorate. But there are myths and realities about data-driven elections. It is time to assess the actual implications of data-driven elections in the light of the Facebook/Cambridge Analytica scandal, and to reconsider the broader terms of the international debate. Political micro-targeting, and the voter analytics upon which it is based, are essentially forms of surveillance. We know a lot about how surveillance harms democratic values. We know a lot less, however, about how surveillance spreads as a result of democratic practices – by the agents and organisations that encourage us to vote (or not vote).

The articles in this collection, developed out of a workshop hosted by the Office of the Information and Privacy Commissioner for British Columbia in April 2019, address the most central issues about data-driven elections, and particularly the impact of US social media platforms on local political institutions and cultures. The balance between rights to privacy, and the rights of political actors to communicate with the electorate, is struck in different ways in different jurisdictions depending on a complex interplay of various legal, political, and cultural factors. Collectively, the articles in this collection signal the necessary questions for academics and regulators in the years ahead….(More)”.

Lies, Deception and Democracy


Essay by Richard Bellamy: “This essay explores how far democracy is compatible with lies and deception, and whether it encourages or discourages their use by politicians. Neo-Kantian arguments, such as Newey’s, that lies and deception undermine individual autonomy and the possibility for consent go too far, given that no democratic process can be regarded as a plausible mechanism for achieving collective consent to state policies. However, they can be regarded as incompatible with a more modest account of democracy as a system of public equality among political equals.

On this view, the problem with lies and deception derives from their being instruments of manipulation and domination. Both can be distinguished from ‘spin’, with a working democracy being capable of uncovering them and so incentivising politicians to be truthful. Nevertheless, while lies and deception will find you out, bullshit and post truth disregard and subvert truth respectively, and as such prove more pernicious as they admit of no standard whereby they might be challenged….(More)”.

On Digital Disinformation and Democratic Myths


David Karpf at MediaWell: “…How many votes did Cambridge Analytica affect in the 2016 presidential election? How much of a difference did the company actually make?

Cambridge Analytica has become something of a Rorschach test among those who pay attention to digital disinformation and microtargeted propaganda. Some hail the company as a digital Svengali, harnessing the power of big data to reshape the behavior of the American electorate. Others suggest the company was peddling digital snake oil, with outlandish marketing claims that bore little resemblance to their mundane product.

One thing is certain: the company has become a household name, practically synonymous with disinformation and digital propaganda in the aftermath of the 2016 election. It has claimed credit for the surprising success of the Brexit referendum and for the Trump digital strategy. Journalists such as Carole Cadwalladr and Hannes Grassegger and Mikael Krogerus have published longform articles that dive into the “psychographic” breakthroughs that the company claims to have made. Cadwalladr also exposed the links between the company and a network of influential conservative donors and political operatives. Whistleblower Chris Wylie, who worked for a time as the company’s head of research, further detailed how it obtained a massive trove of Facebook data on tens of millions of American citizens, in violation of Facebook’s terms of service. The Cambridge Analytica scandal has been a driving force in the current “techlash,” and has been the topic of congressional hearings, documentaries, mass-market books, and scholarly articles.

The reasons for concern are numerous. The company’s own marketing materials boasted about radical breakthroughs in psychographic targeting—developing psychological profiles of every US voter so that political campaigns could tailor messages to exploit psychological vulnerabilities. Those marketing claims were paired with disturbing revelations about the company violating Facebook’s terms of service to scrape tens of millions of user profiles, which were then compiled into a broader database of US voters. Cambridge Analytica behaved unethically. It either broke a lot of laws or demonstrated that old laws needed updating. When the company shut down, no one seemed to shed a tear.

But what is less clear is just how different Cambridge Analytica’s product actually was from the type of microtargeted digital advertisements that every other US electoral campaign uses. Many of the most prominent researchers warning the public about how Cambridge Analytica uses our digital exhaust to “hack our brains” are marketing professors, more accustomed to studying the impact of advertising in commerce than in elections. The political science research community has been far more skeptical. An investigation from Nature magazine documented that the evidence of Cambridge Analytica’s independent impact on voter behavior is basically nonexistent (Gibney 2018). There is no evidence that psychographic targeting actually works at the scale of the American electorate, and there is also no evidence that Cambridge Analytica in fact deployed psychographic models while working for the Trump campaign. The company clearly broke Facebook’s terms of service in acquiring its massive Facebook dataset. But it is not clear that the massive dataset made much of a difference.

At issue in the Cambridge Analytica case are two baseline assumptions about political persuasion in elections. First, what should be our point of comparison for digital propaganda in elections? Second, how does political persuasion in elections compare to persuasion in commercial arenas and marketing in general?…(More)”.

Peopling Europe through Data Practices


Introduction to Special Issue of Science, Technology & Human Values by Baki Cakici, Evelyn Ruppert and Stephan Scheel: “Politically, Europe has been unable to address itself to a constituted polity and people as more than an agglomeration of nation-states. From the resurgence of nationalisms to the crisis of the single currency and the unprecedented decision of a member state to leave the European Union (EU), core questions about the future of Europe have been rearticulated: Who are the people of Europe? Is there a European identity? What does it mean to say, “I am European”? Where does Europe begin and end? And who can legitimately claim to be a part of a “European” people?

The special issue (SI) seeks to contest dominant framings of the question “Who are the people of Europe?” as only a matter of government policies, electoral campaigns, or parliamentary debates. Instead, the contributions start from the assumption that answers to this question exist in data practices where people are addressed, framed, known, and governed as European. The central argument of this SI is that it is through data practices that the EU seeks to simultaneously constitute its population as a knowable, governable entity, and as a distinct form of peoplehood where common personhood is more important than differences….(More)”.