Political Turbulence: How Social Media Shape Collective Action


Book by Helen Margetts, Peter John, Scott Hale, & Taha Yasseri: “As people spend increasing proportions of their daily lives using social media, such as Twitter and Facebook, they are being invited to support myriad political causes by sharing, liking, endorsing, or downloading. Chain reactions caused by these tiny acts of participation form a growing part of collective action today, from neighborhood campaigns to global political movements. Political Turbulence reveals that, in fact, most attempts at collective action online do not succeed, but some give rise to huge mobilizations—even revolutions.

Drawing on large-scale data generated from the Internet and real-world events, this book shows how mobilizations that succeed are unpredictable, unstable, and often unsustainable. To better understand this unruly new force in the political world, the authors use experiments that test how social media influence citizens deciding whether or not to participate. They show how different personality types react to social influences and identify which types of people are willing to participate at an early stage in a mobilization when there are few supporters or signals of viability. The authors argue that pluralism is the model of democracy that is emerging in the social media age—not the ordered, organized vision of early pluralists, but a chaotic, turbulent form of politics.

This book demonstrates how data science and experimentation with social data can provide a methodological toolkit for understanding, shaping, and perhaps even predicting the outcomes of this democratic turbulence….(More)”

Open Government: Missing Questions


Vadym Pyrozhenko at Administration & Society: “This article places the Obama administration’s open government initiative within the context of evolution of the U.S. information society. It examines the concept of openness along the three dimensions of Daniel Bell’s social analysis of the postindustrial society: structure, polity, and culture. Four “missing questions” raise the challenge of the compatibility of public service values with the culture of openness, address the right balance between postindustrial information management practices and the capacity of public organizations to accomplish their missions, and ask to reconsider the idea that greater structural openness of public organizations will necessarily increase their democratic legitimacy….(More)”

Uninformed: Why People Seem to Know So Little about Politics and What We Can Do about It


Book by Arthur Lupia: “Research polls, media interviews, and everyday conversations reveal an unsettling truth: citizens, while well-meaning and even passionate about current affairs, appear to know very little about politics. Hundreds of surveys document vast numbers of citizens answering even basic questions about government incorrectly. Given this unfortunate state of affairs, it is not surprising that more knowledgeable people often deride the public for its ignorance. Some experts even think that less informed citizens should stay out of politics altogether.

As Arthur Lupia shows in Uninformed, this is not constructive. At root, critics of public ignorance fundamentally misunderstand the problem. Many experts believe that simply providing people with more facts will make them more competent voters. However, these experts fail to understand how most people learn, and hence don’t really know what types of information are even relevant to voters. Feeding them information they don’t find relevant does not address the problem. In other words, before educating the public, we need to educate the educators.

Lupia offers not just a critique, though; he also has solutions. Drawing from a variety of areas of research on topics like attention span and political psychology, he shows how we can actually increase issue competence among voters in areas ranging from gun regulation to climate change. To attack the problem, he develops an arsenal of techniques to effectively convey to people information they actually care about.

Citizens sometimes lack the knowledge that they need to make competent political choices, and it is undeniable that greater knowledge can improve decision making. But we need to understand that voters either don’t care about or pay attention to much of the information that experts think is important. Uninformed provides the keys to improving political knowledge and civic competence: understanding what information is important to others and knowing how to best convey it to them….(More)”

Beyond Distrust: How Americans View Their Government


Pew Research Center: “A year ahead of the presidential election, the American public is deeply cynical about government, politics and the nation’s elected leaders in a way that has become quite familiar.

Currently, just 19% say they can trust the government always or most of the time, among the lowest levels in the past half-century. Only 20% would describe government programs as being well-run. And elected officials are held in such low regard that 55% of the public says “ordinary Americans” would do a better job of solving national problems.

Yet at the same time, most Americans have a lengthy to-do list for this object of their frustration: Majorities want the federal government to have a major role in addressing issues ranging from terrorism and disaster response to education and the environment.

And most Americans like the way the federal government handles many of these same issues, though they are broadly critical of its handling of others – especially poverty and immigration.

A new national survey by Pew Research Center, based on more than 6,000 interviews conducted between August 27 and October 4, 2015, finds that public attitudes about government and politics defy easy categorization. The study builds upon previous reports about the government’s role and performance in 2010 and 1998. This report was made possible by The Pew Charitable Trusts, which received support for the survey from The William and Flora Hewlett Foundation.

The partisan divide over the size and scope of government remains as wide as ever: Support for smaller government endures as a Republican touchstone. Fully 80% of Republicans and Republican-leaning independents say they prefer a smaller government with fewer services, compared with just 31% of Democrats and Democratic leaners.

Yet both Republicans and Democrats favor significant government involvement on an array of specific issues. Among the public overall, majorities say the federal government should have a major role in dealing with 12 of 13 issues included in the survey, all except advancing space exploration.

There is bipartisan agreement that the federal government should play a major role in dealing with terrorism, natural disasters, food and medicine safety, and roads and infrastructure. And while the presidential campaign has exposed sharp partisan divisions over immigration policy, large majorities of both Republicans (85%) and Democrats (80%) say the government should have a major role in managing the immigration system.

But the partisan differences over government’s appropriate role are revealing – with the widest gaps on several issues relating to the social safety net….(More)

Questioning Smart Urbanism: Is Data-Driven Governance a Panacea?


At the Chicago Policy Review: “In the era of data explosion, urban planners are increasingly relying on real-time, streaming data generated by “smart” devices to assist with city management. “Smart cities,” referring to cities that implement pervasive and ubiquitous computing in urban planning, are widely discussed in academia, business, and government. These cities are characterized not only by their use of technology but also by their innovation-driven economies and collaborative, data-driven city governance. Smart urbanism can seem like an effective strategy to create more efficient, sustainable, productive, and open cities. However, there are emerging concerns about the potential risks in the long-term development of smart cities, including political neutrality of big data, technocratic governance, technological lock-ins, data and network security, and privacy risks.

In a study entitled, “The Real-Time City? Big Data and Smart Urbanism,” Rob Kitchin provides a critical reflection on the potential negative effects of data-driven city governance on social development—a topic he claims deserves greater governmental, academic, and social attention.

In contrast to traditional datasets that rely on samples or are aggregated to a coarse scale, “big data” is huge in volume, high in velocity, and diverse in variety. Since the early 2000s, there has been explosive growth in data volume due to the rapid development and implementation of technology infrastructure, including networks, information management, and data storage. Big data can be generated from directed, automated, and volunteered sources. Automated data generation is of particular interest to urban planners. One example Kitchin cites is urban sensor networks, which allow city governments to monitor the movements and statuses of individuals, materials, and structures throughout the urban environment by analyzing real-time data.

With the huge amount of streaming data collected by smart infrastructure, many city governments use real-time analysis to manage different aspects of city operations. There has been a recent trend in centralizing data streams into a single hub, integrating all kinds of surveillance and analytics. These one-stop data centers make it easier for analysts to cross-reference data, spot patterns, identify problems, and allocate resources. The data are also often accessible by field workers via operations platforms. In London and some other cities, real-time data are visualized on “city dashboards” and communicated to citizens, providing convenient access to city information.

However, the real-time city is not a flawless solution to all the problems faced by city managers. The primary concern is the politics of big, urban data. Although raw data are often perceived as neutral and objective, no data are free of bias; the collection of data is a subjective process that can be shaped by various confounding factors. The presentation of data can also be manipulated to answer a specific question or enact a particular political vision….(More)”

Build digital democracy


Dirk Helbing & Evangelos Pournaras in Nature: “Fridges, coffee machines, toothbrushes, phones and smart devices are all now equipped with communicating sensors. In ten years, 150 billion ‘things’ will connect with each other and with billions of people. The ‘Internet of Things’ will generate data volumes that double every 12 hours rather than every 12 months, as is the case now.

Blinded by information, we need ‘digital sunglasses’. Whoever builds the filters to monetize this information determines what we see — Google and Facebook, for example. Many choices that people consider their own are already determined by algorithms. Such remote control weakens responsible, self-determined decision-making and thus society too.

The European Court of Justice’s ruling on 6 October that countries and companies must comply with European data-protection laws when transferring data outside the European Union demonstrates that a new digital paradigm is overdue. To ensure that no government, company or person with sole control of digital filters can manipulate our decisions, we need information systems that are transparent, trustworthy and user-controlled. Each of us must be able to choose, modify and build our own tools for winnowing information.

With this in mind, our research team at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), alongside international partners, has started to create a distributed, privacy-preserving ‘digital nervous system’ called Nervousnet. Nervousnet uses the sensor networks that make up the Internet of Things, including those in smartphones, to measure the world around us and to build a collective ‘data commons’. The many challenges ahead will be best solved using an open, participatory platform, an approach that has proved successful for projects such as Wikipedia and the open-source operating system Linux.

A wise king?

The science of human decision-making is far from understood. Yet our habits, routines and social interactions are surprisingly predictable. Our behaviour is increasingly steered by personalized advertisements and search results, recommendation systems and emotion-tracking technologies. Thousands of pieces of metadata have been collected about every one of us (see go.nature.com/stoqsu). Companies and governments can increasingly manipulate our decisions, behaviour and feelings [1].

Many policymakers believe that personal data may be used to ‘nudge’ people to make healthier and environmentally friendly decisions. Yet the same technology may also promote nationalism, fuel hate against minorities or skew election outcomes [2] if ethical scrutiny, transparency and democratic control are lacking — as they are in most private companies and institutions that use ‘big data’. The combination of nudging with big data about everyone’s behaviour, feelings and interests (‘big nudging’, if you will) could eventually create close to totalitarian power.

Countries have long experimented with using data to run their societies. In the 1970s, Chilean President Salvador Allende created computer networks to optimize industrial productivity [3]. Today, Singapore considers itself a data-driven ‘social laboratory’ [4] and other countries seem keen to copy this model.

The Chinese government has begun rating the behaviour of its citizens [5]. Loans, jobs and travel visas will depend on an individual’s ‘citizen score’, their web history and political opinion. Meanwhile, Baidu — the Chinese equivalent of Google — is joining forces with the military for the ‘China brain project’, using ‘deep learning’ artificial-intelligence algorithms to predict the behaviour of people on the basis of their Internet activity [6].

The intentions may be good: it is hoped that big data can improve governance by overcoming irrationality and partisan interests. But the situation also evokes the warning of the eighteenth-century philosopher Immanuel Kant, that the “sovereign acting … to make the people happy according to his notions … becomes a despot”. It is for this reason that the US Declaration of Independence emphasizes the pursuit of happiness of individuals.

Ruling like a ‘benevolent dictator’ or ‘wise king’ cannot work because there is no way to determine a single metric or goal that a leader should maximize. Should it be gross domestic product per capita or sustainability, power or peace, average life span or happiness, or something else?

Better is pluralism. It hedges risks, promotes innovation, collective intelligence and well-being. Approaching complex problems from varied perspectives also helps people to cope with rare and extreme events that are costly for society — such as natural disasters, blackouts or financial meltdowns.

Centralized, top-down control of data has various flaws. First, it will inevitably become corrupted or hacked by extremists or criminals. Second, owing to limitations in data-transmission rates and processing power, top-down solutions often fail to address local needs. Third, manipulating the search for information and intervening in individual choices undermines ‘collective intelligence’ [7]. Fourth, personalized information creates ‘filter bubbles’ [8]. People are exposed less to other opinions, which can increase polarization and conflict [9].

Fifth, reducing pluralism is as bad as losing biodiversity, because our economies and societies are like ecosystems with millions of interdependencies. Historically, a reduction in diversity has often led to political instability, collapse or war. Finally, by altering the cultural cues that guide people’s decisions, everyday decision-making is disrupted, which undermines rather than bolsters social stability and order.

Big data should be used to solve the world’s problems, not for illegitimate manipulation. But the assumption that ‘more data equals more knowledge, power and success’ does not hold. Although we have never had so much information, we face ever more global threats, including climate change, unstable peace and socio-economic fragility, and political satisfaction is low worldwide. About 50% of today’s jobs will be lost in the next two decades as computers and robots take over tasks. But will we see the macroeconomic benefits that would justify such large-scale ‘creative destruction’? And how can we reinvent half of our economy?

The digital revolution will mainly benefit countries that achieve a ‘win–win–win’ situation for business, politics and citizens alike [10]. To mobilize the ideas, skills and resources of all, we must build information systems capable of bringing diverse knowledge and ideas together. Online deliberation platforms and reconfigurable networks of smart human minds and artificially intelligent systems can now be used to produce collective intelligence that can cope with the diverse and complex challenges surrounding us….(More)” See the Nervousnet project.

‘Democracy vouchers’


Gregory Krieg at CNN: “Democracy vouchers” could be coming to an election near you. Last week, more than 60% of Seattle voters approved the so-called “Honest Elections” measure, or Initiative 122, a campaign finance reform plan offering a novel way of steering public funds to candidates who are willing to swear off big money PACs.

For supporters, the victory — authorizing the use by voters of publicly funded “democracy vouchers” that they can dole out to favored candidates — marks what they hope will be the first step forward in a wide-ranging reform effort spreading to other cities and states in the coming year….

The voucher model also is “a one-two punch” for candidates, Silver said. “They become more dependent on their constituents because their constituents become their funders, and No. 2, they’re part of what I would call a ‘dilution strategy’ — you dilute the space with lots of small-dollar contributions to offset the undue influence of super PACs.”

How “democracy vouchers” work

Beginning next summer, Seattle voters are expected to begin receiving $100 from the city, parceled out in four $25 vouchers, to contribute to local candidates who accept the new law’s restrictions, including not taking funds from PACs, adhering to strict spending caps, and enacting greater transparency. Candidates can redeem the vouchers with the city for real campaign cash, which will likely flow from increased property taxes.
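
As a side note on the mechanics described above, here is a minimal sketch of how the voucher flow might be modelled; the class name, cap figure and checks are hypothetical illustrations, not the text of Initiative 122.

```python
# Hypothetical sketch of the voucher flow: each voter receives four $25
# vouchers; only candidates who opt into the program's restrictions may
# redeem them, and redemptions stop at a spending cap (figure invented).

VOUCHERS_PER_VOTER = 4
VOUCHER_VALUE = 25        # dollars
SPENDING_CAP = 150_000    # illustrative cap, not the real Seattle figure


class Candidate:
    def __init__(self, name, participates):
        self.name = name
        self.participates = participates  # accepts no-PAC, cap and disclosure rules
        self.redeemed = 0

    def redeem(self, n_vouchers):
        """Convert vouchers into campaign cash, enforcing the program's limits."""
        if not self.participates:
            raise ValueError(f"{self.name} has not opted into the program")
        amount = n_vouchers * VOUCHER_VALUE
        if self.redeemed + amount > SPENDING_CAP:
            raise ValueError("redemption would exceed the spending cap")
        self.redeemed += amount
        return amount


voter_budget = VOUCHERS_PER_VOTER * VOUCHER_VALUE   # $100 per voter
candidate = Candidate("Candidate Y", participates=True)
candidate.redeem(4)                                  # one voter assigns all four
print(voter_budget, candidate.redeemed)              # 100 100
```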

The reform effort began at the grassroots, but morphed into a slickly managed operation that spent nearly $1.4 million, with more than half of that flowing from groups outside the city.

Alan Durning, founder of the nonprofit sustainability think tank Sightline, is an architect of the Seattle initiative. He believes the campaign helped identify a key problem with other reform plans.

“We know that one of the strongest arguments against public funding for campaigns is the idea of giving tax dollars to candidates that you disagree with,” Durning told CNN. “There are a lot of people who hate the idea.”

Currently, most such programs offer to match small donations with public funds for candidates who meet a host of varying requirements. In these cases, taxpayer money goes directly from the government to the campaigns, limiting voters’ connection to the process.

“The benefit of vouchers … is you can think about it as giving the first $100 of your own taxes to the candidate that you prefer,” Durning explained. “Your money is going to the candidate you send it to — so it keeps the choice with the individual voter.”

He added that the use of vouchers can also help the approach appeal to conservative voters, who generally are supportive of voucher-type programs and choice.

But critics call that a misleading argument.

“You’re still taking money from people and giving it to politicians who they may not necessarily want to support,” said Patrick Basham, the founder and director of the Democracy Institute, a libertarian think tank.

“Now, if you, as Voter X, give your four $25 vouchers to Candidate Y, then that’s your choice, but only some of [the money] came from you. It also came from other people.”…(More)”

Politics and the New Machine


Jill Lepore in the New Yorker on “What the turn from polls to data science means for democracy”: “…The modern public-opinion poll has been around since the Great Depression, when the response rate—the number of people who take a survey as a percentage of those who were asked—was more than ninety. The participation rate—the number of people who take a survey as a percentage of the population—is far lower. Election pollsters sample only a minuscule portion of the electorate, not uncommonly something on the order of a couple of thousand people out of the more than two hundred million Americans who are eligible to vote. The promise of this work is that the sample is exquisitely representative. But the lower the response rate the harder and more expensive it becomes to realize that promise, which requires both calling many more people and trying to correct for “non-response bias” by giving greater weight to the answers of people from demographic groups that are less likely to respond. Pollster.com’s Mark Blumenthal has recalled how, in the nineteen-eighties, when the response rate at the firm where he was working had fallen to about sixty per cent, people in his office said, “What will happen when it’s only twenty? We won’t be able to be in business!” A typical response rate is now in the single digits.
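
As an aside on the weighting Lepore describes, the sketch below (Python, with invented population shares, response counts and answers rather than any pollster's actual data) shows one simple way of "giving greater weight" to groups that are less likely to respond: each respondent is weighted by how under- or over-represented their demographic group is relative to the population.

```python
# Illustrative only: hypothetical population shares, response counts and answers.
# Weighting each respondent by (population share / sample share) of their group
# is a simple post-stratification correction for non-response bias.

population_share = {"age_18_34": 0.30, "age_35_64": 0.50, "age_65_plus": 0.20}

# Hypothetical respondents: (demographic group, supports candidate A?)
respondents = (
    [("age_18_34", True)] * 40 + [("age_18_34", False)] * 60          # 100 responses
    + [("age_35_64", True)] * 220 + [("age_35_64", False)] * 180      # 400 responses
    + [("age_65_plus", True)] * 350 + [("age_65_plus", False)] * 150  # 500 responses
)

n = len(respondents)
sample_share = {g: sum(1 for grp, _ in respondents if grp == g) / n
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw_support = sum(1 for _, yes in respondents if yes) / n
weighted_support = (sum(weights[g] for g, yes in respondents if yes)
                    / sum(weights[g] for g, _ in respondents))

print(f"Unweighted support: {raw_support:.1%}")       # pulled toward over-represented groups
print(f"Weighted support:   {weighted_support:.1%}")  # rebalanced to population shares
```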

Meanwhile, polls are wielding greater influence over American elections than ever….

Still, data science can’t solve the biggest problem with polling, because that problem is neither methodological nor technological. It’s political. Pollsters rose to prominence by claiming that measuring public opinion is good for democracy. But what if it’s bad?

A “poll” used to mean the top of your head. Ophelia says of Polonius, “His beard as white as snow: All flaxen was his poll.” When voting involved assembling (all in favor of Smith stand here, all in favor of Jones over there), counting votes required counting heads; that is, counting polls. Eventually, a “poll” came to mean the count itself. By the nineteenth century, to vote was to go “to the polls,” where, more and more, voting was done on paper. Ballots were often printed in newspapers: you’d cut one out and bring it with you. With the turn to the secret ballot, beginning in the eighteen-eighties, the government began supplying the ballots, but newspapers kept printing them; they’d use them to conduct their own polls, called “straw polls.” Before the election, you’d cut out your ballot and mail it to the newspaper, which would make a prediction. Political parties conducted straw polls, too. That’s one of the ways the political machine worked….

Ever since Gallup, two things have been called polls: surveys of opinions and forecasts of election results. (Plenty of other surveys, of course, don’t measure opinions but instead concern status and behavior: Do you own a house? Have you seen a doctor in the past month?) It’s not a bad idea to reserve the term “polls” for the kind meant to produce election forecasts. When Gallup started out, he was skeptical about using a survey to forecast an election: “Such a test is by no means perfect, because a preelection survey must not only measure public opinion in respect to candidates but must also predict just what groups of people will actually take the trouble to cast their ballots.” Also, he didn’t think that predicting elections constituted a public good: “While such forecasts provide an interesting and legitimate activity, they probably serve no great social purpose.” Then why do it? Gallup conducted polls only to prove the accuracy of his surveys, there being no other way to demonstrate it. The polls themselves, he thought, were pointless…

If public-opinion polling is the child of a strained marriage between the press and the academy, data science is the child of a rocky marriage between the academy and Silicon Valley. The term “data science” was coined in 1960, one year after the Democratic National Committee hired Simulmatics Corporation, a company founded by Ithiel de Sola Pool, a political scientist from M.I.T., to provide strategic analysis in advance of the upcoming Presidential election. Pool and his team collected punch cards from pollsters who had archived more than sixty polls from the elections of 1952, 1954, 1956, 1958, and 1960, representing more than a hundred thousand interviews, and fed them into a UNIVAC. They then sorted voters into four hundred and eighty possible types (for example, “Eastern, metropolitan, lower-income, white, Catholic, female Democrat”) and sorted issues into fifty-two clusters (for example, foreign aid). Simulmatics’ first task, completed just before the Democratic National Convention, was a study of “the Negro vote in the North.” Its report, which is thought to have influenced the civil-rights paragraphs added to the Party’s platform, concluded that between 1954 and 1956 “a small but significant shift to the Republicans occurred among Northern Negroes, which cost the Democrats about 1 per cent of the total votes in 8 key states.” After the nominating convention, the D.N.C. commissioned Simulmatics to prepare three more reports, including one that involved running simulations about different ways in which Kennedy might discuss his Catholicism….
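
The sorting Lepore describes, several hundred voter "types" crossed with issue clusters, amounts to grouping interview records by a compound demographic key. The sketch below uses invented fields and two toy records, not the actual Simulmatics codebook.

```python
from collections import Counter

# Invented interview records; the real study combined region, city size, income,
# race, religion, sex and party into 480 voter types and 52 issue clusters.
interviews = [
    {"region": "Eastern", "income": "lower", "religion": "Catholic",
     "sex": "female", "party": "Democrat",
     "issue": "foreign aid", "position": "favor"},
    {"region": "Southern", "income": "middle", "religion": "Protestant",
     "sex": "male", "party": "Republican",
     "issue": "foreign aid", "position": "oppose"},
    # ...in the real study, more than a hundred thousand archived interviews
]

def voter_type(record):
    """Compound key standing in for one of the hundreds of voter types."""
    return (record["region"], record["income"], record["religion"],
            record["sex"], record["party"])

# Cross-tabulate how each type splits on each issue cluster; a campaign
# "simulation" then reweights these splits by each type's share of a
# state's electorate to estimate the effect of emphasizing an issue.
cross_tab = Counter((voter_type(r), r["issue"], r["position"]) for r in interviews)

for (vtype, issue, position), count in cross_tab.items():
    print(vtype, issue, position, count)
```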

Data science may well turn out to be as flawed as public-opinion polling. But a stage in the development of any new tool is to imagine that you’ve perfected it, in order to ponder its consequences. I asked Hilton to suppose that there existed a flawless tool for measuring public opinion, accurately and instantly, a tool available to voters and politicians alike. Imagine that you’re a member of Congress, I said, and you’re about to head into the House to vote on an act—let’s call it the Smeadwell-Nutley Act. As you do, you use an app called iThePublic to learn the opinions of your constituents. You oppose Smeadwell-Nutley; your constituents are seventy-nine per cent in favor of it. Your constituents will instantly know how you’ve voted, and many have set up an account with Crowdpac to make automatic campaign donations. If you vote against the proposed legislation, your constituents will stop giving money to your reëlection campaign. If, contrary to your convictions but in line with your iThePublic, you vote for Smeadwell-Nutley, would that be democracy? …(More)”

Statistical objectivity is a cloak spun from political yarn


Angus Deaton at the Financial Times: “The word data means things that are “given”: baseline truths, not things that are manufactured, invented, tailored or spun. Especially not by politics or politicians. Yet this absolutist view can be a poor guide to using the numbers well. Statistics are far from politics-free; indeed, politics is encoded in their genes. This is ultimately a good thing.

We like to deal with facts, not factoids. We are scandalised when politicians try to censor numbers or twist them, and most statistical offices have protocols designed to prevent such abuse. Headline statistics often seem simple but typically have many moving parts. A clock has two hands and 12 numerals yet underneath there may be thousands of springs, cogs and wheels. Politics is not only about telling the time, or whether the clock is slow or fast, but also about how to design the cogs and wheels. Down in the works, even where the decisions are delegated to bureaucrats and statisticians, there is room for politics to masquerade as science. A veneer of apolitical objectivity can be an effective disguise for a political programme.

Just occasionally, however, the mask drops and the design of the cogs and wheels moves into the glare of frontline politics. Consumer price indexes are leading examples of this. Britain’s first consumer price index was based on spending patterns from 1904. Long before the second world war, these weights were grotesquely outdated. During the war, the cabinet was worried about a wage-price spiral and the Treasury committed to hold the now-irrelevant index below the astonishingly precise value of 201.5 (1914=100) through a policy of food subsidies. It would, for example, respond to an increase in the price of eggs by lowering the price of sugar. Reform of the index would have jeopardised the government’s ability to control it and was too politically risky. The index was not updated until 1947….
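
To make the wartime example concrete: a fixed-weight index is a weighted average of price relatives, so holding it at a target only requires offsetting one weighted price rise with a subsidised fall elsewhere. The sketch below uses invented weights and prices, not the 1904 basket.

```python
# Toy fixed-weight (Laspeyres-style) index: the weights come from a base-period
# spending pattern and never change, which is exactly why outdated weights
# distort the picture and why the index can be steered item by item.

weights = {"bread": 0.4, "eggs": 0.3, "sugar": 0.3}   # invented base-period shares
base_prices = {"bread": 1.0, "eggs": 1.0, "sugar": 1.0}

def index(prices, base=100.0):
    """Weighted average of price relatives, scaled so the base period = 100."""
    return base * sum(w * prices[item] / base_prices[item]
                      for item, w in weights.items())

# Eggs rise by 20%, so the index climbs...
prices = {"bread": 1.0, "eggs": 1.2, "sugar": 1.0}
print(round(index(prices), 1))   # 106.0

# ...and the Treasury-style response is to subsidise sugar until the target holds.
prices["sugar"] = 1.0 - (0.2 * weights["eggs"]) / weights["sugar"]   # exact offset
print(round(index(prices), 1))   # 100.0
```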

These examples show the role of politics needs to be understood, and built in to any careful interpretation of the data. We must always work from multiple sources, and look deep into the cogs and wheels. James Scott, the political scientist, noted that statistics are how the state sees. The state decides what it needs to see and how to see it. That politics infuses every part of this is a testament to the importance of the numbers; lives depend on what they show.

For global poverty or hunger statistics, there is no state and no one’s material wellbeing depends on them. Politicians are less likely to interfere with such data, but this also removes a vital source of monitoring and accountability. Politics is a danger to good data; but without politics data are unlikely to be good, or at least not for long….(More)”

Interdisciplinary Perspectives on Trust


Book edited by Shockley, E., Neal, T.M.S., PytlikZillig, L.M., and Bornstein, B.H.:  “This timely collection explores trust research from many angles while ably demonstrating the potential of cross-discipline collaboration to deepen our understanding of institutional trust. Citing, among other things, current breakdowns of trust in prominent institutions, the book presents a multilevel model identifying universal aspects of trust as well as domain- and context-specific variations deserving further study. Contributors analyze similarities and differences in trust across public domains from politics and policing to medicine and science, and across languages and nations. Innovative strategies for measuring and assessing trust also shed new light on this essentially human behavior.

Highlights of the coverage:

  • Consensus on conceptualizations and definitions of trust: are we there yet?
  • Differentiating between trust and legitimacy in public attitudes towards legal authority.
  • Examining the relationship between interpersonal and institutional trust in political and health care contexts.
  • Trust as a multilevel phenomenon across contexts.
  • Institutional trust across cultures.
  • The “dark side” of institutional trust….(more)”