Beyond Distrust: How Americans View Their Government


Overview - Pew Research Center: “A year ahead of the presidential election, the American public is deeply cynical about government, politics and the nation’s elected leaders in a way that has become quite familiar.

Currently, just 19% say they can trust the government always or most of the time, among the lowest levels in the past half-century. Only 20% would describe government programs as being well-run. And elected officials are held in such low regard that 55% of the public says “ordinary Americans” would do a better job of solving national problems.

Yet at the same time, most Americans have a lengthy to-do list for this object of their frustration: Majorities want the federal government to have a major role in addressing issues ranging from terrorism and disaster response to education and the environment.

And most Americans like the way the federal government handles many of these same issues, though they are broadly critical of its handling of others – especially poverty and immigration.

A new national survey by Pew Research Center, based on more than 6,000 interviews conducted between August 27 and October 4, 2015, finds that public attitudes about government and politics defy easy categorization. The study builds upon previous reports about the government’s role and performance in 2010 and 1998. This report was made possible by The Pew Charitable Trusts, which received support for the survey from The William and Flora Hewlett Foundation.

The partisan divide over the size and scope of government remains as wide as ever: Support for smaller government endures as a Republican touchstone. Fully 80% of Republicans and Republican-leaning independents say they prefer a smaller government with fewer services, compared with just 31% of Democrats and Democratic leaners.

Yet both Republicans and Democrats favor significant government involvement on an array of specific issues. Among the public overall, majorities say the federal government should have a major role in dealing with 12 of 13 issues included in the survey, all except advancing space exploration.

There is bipartisan agreement that the federal government should play a major role in dealing with terrorism, natural disasters, food and medicine safety, and roads and infrastructure. And while the presidential campaign has exposed sharp partisan divisions over immigration policy, large majorities of both Republicans (85%) and Democrats (80%) say the government should have a major role in managing the immigration system.

But the partisan differences over government’s appropriate role are revealing – with the widest gaps on several issues relating to the social safety net….(More)

Questioning Smart Urbanism: Is Data-Driven Governance a Panacea?


From the Chicago Policy Review: “In the era of data explosion, urban planners are increasingly relying on real-time, streaming data generated by “smart” devices to assist with city management. “Smart cities,” referring to cities that implement pervasive and ubiquitous computing in urban planning, are widely discussed in academia, business, and government. These cities are characterized not only by their use of technology but also by their innovation-driven economies and collaborative, data-driven city governance. Smart urbanism can seem like an effective strategy to create more efficient, sustainable, productive, and open cities. However, there are emerging concerns about the potential risks in the long-term development of smart cities, including political neutrality of big data, technocratic governance, technological lock-ins, data and network security, and privacy risks.

In a study entitled, “The Real-Time City? Big Data and Smart Urbanism,” Rob Kitchin provides a critical reflection on the potential negative effects of data-driven city governance on social development—a topic he claims deserves greater governmental, academic, and social attention.

In contrast to traditional datasets that rely on samples or are aggregated to a coarse scale, “big data” is huge in volume, high in velocity, and diverse in variety. Since the early 2000s, there has been explosive growth in data volume due to the rapid development and implementation of technology infrastructure, including networks, information management, and data storage. Big data can be generated from directed, automated, and volunteered sources. Automated data generation is of particular interest to urban planners. One example Kitchin cites is urban sensor networks, which allow city governments to monitor the movements and statuses of individuals, materials, and structures throughout the urban environment by analyzing real-time data.

With the huge amount of streaming data collected by smart infrastructure, many city governments use real-time analysis to manage different aspects of city operations. There has been a recent trend in centralizing data streams into a single hub, integrating all kinds of surveillance and analytics. These one-stop data centers make it easier for analysts to cross-reference data, spot patterns, identify problems, and allocate resources. The data are also often accessible by field workers via operations platforms. In London and some other cities, real-time data are visualized on “city dashboards” and communicated to citizens, providing convenient access to city information.
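
To make the idea of a one-stop data hub concrete, here is a minimal sketch in Python. The feeds, district names and thresholds are invented for illustration; it simply shows how a centralised hub might cross-reference two real-time streams and surface alerts for a dashboard.

```python
from collections import defaultdict

# Hypothetical real-time feeds; districts, readings and units are invented.
traffic_feed = [  # (district, average vehicle speed in km/h)
    ("centre", 11), ("docklands", 42), ("north", 18),
]
air_quality_feed = [  # (district, PM2.5 in micrograms per cubic metre)
    ("centre", 68), ("docklands", 22), ("north", 35),
]

# The "hub": merge both streams into one record per district.
hub = defaultdict(dict)
for district, speed in traffic_feed:
    hub[district]["speed_kmh"] = speed
for district, pm25 in air_quality_feed:
    hub[district]["pm25"] = pm25

# Cross-referencing: slow traffic combined with poor air quality suggests congestion
# worth dispatching resources to; either signal alone might just be noise.
for district, signals in hub.items():
    congested = signals.get("speed_kmh", 99) < 15 and signals.get("pm25", 0) > 50
    print(f"{district:10s} {signals} -> {'ALERT' if congested else 'ok'}")
```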

However, the real-time city is not a flawless solution to all the problems faced by city managers. The primary concern is the politics of big, urban data. Although raw data are often perceived as neutral and objective, no data are free of bias; the collection of data is a subjective process that can be shaped by various confounding factors. The presentation of data can also be manipulated to answer a specific question or enact a particular political vision….(More)”

Build digital democracy


Dirk Helbing & Evangelos Pournaras in Nature: “Fridges, coffee machines, toothbrushes, phones and smart devices are all now equipped with communicating sensors. In ten years, 150 billion ‘things’ will connect with each other and with billions of people. The ‘Internet of Things’ will generate data volumes that double every 12 hours rather than every 12 months, as is the case now.

Blinded by information, we need ‘digital sunglasses’. Whoever builds the filters to monetize this information determines what we see — Google and Facebook, for example. Many choices that people consider their own are already determined by algorithms. Such remote control weakens responsible, self-determined decision-making and thus society too.

The European Court of Justice’s ruling on 6 October that countries and companies must comply with European data-protection laws when transferring data outside the European Union demonstrates that a new digital paradigm is overdue. To ensure that no government, company or person with sole control of digital filters can manipulate our decisions, we need information systems that are transparent, trustworthy and user-controlled. Each of us must be able to choose, modify and build our own tools for winnowing information.

With this in mind, our research team at the Swiss Federal Institute of Technology in Zurich (ETH Zurich), alongside international partners, has started to create a distributed, privacy-preserving ‘digital nervous system’ called Nervousnet. Nervousnet uses the sensor networks that make up the Internet of Things, including those in smartphones, to measure the world around us and to build a collective ‘data commons’. The many challenges ahead will be best solved using an open, participatory platform, an approach that has proved successful for projects such as Wikipedia and the open-source operating system Linux.

A wise king?

The science of human decision-making is far from understood. Yet our habits, routines and social interactions are surprisingly predictable. Our behaviour is increasingly steered by personalized advertisements and search results, recommendation systems and emotion-tracking technologies. Thousands of pieces of metadata have been collected about every one of us (see go.nature.com/stoqsu). Companies and governments can increasingly manipulate our decisions, behaviour and feelings [1].

Many policymakers believe that personal data may be used to ‘nudge’ people to make healthier and environmentally friendly decisions. Yet the same technology may also promote nationalism, fuel hate against minorities or skew election outcomes [2] if ethical scrutiny, transparency and democratic control are lacking — as they are in most private companies and institutions that use ‘big data’. The combination of nudging with big data about everyone’s behaviour, feelings and interests (‘big nudging’, if you will) could eventually create close to totalitarian power.

Countries have long experimented with using data to run their societies. In the 1970s, Chilean President Salvador Allende created computer networks to optimize industrial productivity [3]. Today, Singapore considers itself a data-driven ‘social laboratory’ [4] and other countries seem keen to copy this model.

The Chinese government has begun rating the behaviour of its citizens [5]. Loans, jobs and travel visas will depend on an individual’s ‘citizen score’, their web history and political opinion. Meanwhile, Baidu — the Chinese equivalent of Google — is joining forces with the military for the ‘China brain project’, using ‘deep learning’ artificial-intelligence algorithms to predict the behaviour of people on the basis of their Internet activity [6].

The intentions may be good: it is hoped that big data can improve governance by overcoming irrationality and partisan interests. But the situation also evokes the warning of the eighteenth-century philosopher Immanuel Kant, that the “sovereign acting … to make the people happy according to his notions … becomes a despot”. It is for this reason that the US Declaration of Independence emphasizes the pursuit of happiness of individuals.

Ruling like a ‘benevolent dictator’ or ‘wise king’ cannot work because there is no way to determine a single metric or goal that a leader should maximize. Should it be gross domestic product per capita or sustainability, power or peace, average life span or happiness, or something else?

Better is pluralism. It hedges risks, promotes innovation, collective intelligence and well-being. Approaching complex problems from varied perspectives also helps people to cope with rare and extreme events that are costly for society — such as natural disasters, blackouts or financial meltdowns.

Centralized, top-down control of data has various flaws. First, it will inevitably become corrupted or hacked by extremists or criminals. Second, owing to limitations in data-transmission rates and processing power, top-down solutions often fail to address local needs. Third, manipulating the search for information and intervening in individual choices undermines ‘collective intelligence’ [7]. Fourth, personalized information creates ‘filter bubbles’ [8]. People are exposed less to other opinions, which can increase polarization and conflict [9].

Fifth, reducing pluralism is as bad as losing biodiversity, because our economies and societies are like ecosystems with millions of interdependencies. Historically, a reduction in diversity has often led to political instability, collapse or war. Finally, altering the cultural cues that guide people’s decisions disrupts everyday decision-making, which undermines rather than bolsters social stability and order.

Big data should be used to solve the world’s problems, not for illegitimate manipulation. But the assumption that ‘more data equals more knowledge, power and success’ does not hold. Although we have never had so much information, we face ever more global threats, including climate change, unstable peace and socio-economic fragility, and political satisfaction is low worldwide. About 50% of today’s jobs will be lost in the next two decades as computers and robots take over tasks. But will we see the macroeconomic benefits that would justify such large-scale ‘creative destruction’? And how can we reinvent half of our economy?

The digital revolution will mainly benefit countries that achieve a ‘win–win–win’ situation for business, politics and citizens alike [10]. To mobilize the ideas, skills and resources of all, we must build information systems capable of bringing diverse knowledge and ideas together. Online deliberation platforms and reconfigurable networks of smart human minds and artificially intelligent systems can now be used to produce collective intelligence that can cope with the diverse and complex challenges surrounding us….(More)” See the Nervousnet project.

‘Democracy vouchers’


Gregory Krieg at CNN: “Democracy vouchers” could be coming to an election near you. Last week, more than 60% of Seattle voters approved the so-called “Honest Elections” measure, or Initiative 122, a campaign finance reform plan offering a novel way of steering public funds to candidates who are willing to swear off big money PACs.

For supporters, the victory — authorizing the use by voters of publicly funded “democracy vouchers” that they can dole out to favored candidates — marks what they hope will be the first step forward in a wide-ranging reform effort spreading to other cities and states in the coming year….

The voucher model also is “a one-two punch” for candidates, Silver said. “They become more dependent on their constituents because their constituents become their funders, and No. 2, they’re part of what I would call a ‘dilution strategy’ — you dilute the space with lots of small-dollar contributions to offset the undue influence of super PACs.”

How “democracy vouchers” work

Beginning next summer, Seattle voters are expected to begin receiving $100 from the city, parceled out in four $25 vouchers, to contribute to local candidates who accept the new law’s restrictions, including not taking funds from PACs, adhering to strict spending caps, and enacting greater transparency. Candidates can redeem the vouchers with the city for real campaign cash, which will likely flow from increased property taxes.

The reform effort began at the grassroots, but morphed into a slickly managed operation that spent nearly $1.4 million, with more than half of that flowing from groups outside the city.

Alan Durning, founder of the nonprofit sustainability think tank Sightline, is an architect of the Seattle initiative. He believes the campaign helped identify a key problem with other reform plans.

“We know that one of the strongest arguments against public funding for campaigns is the idea of giving tax dollars to candidates that you disagree with,” Durning told CNN. “There are a lot of people who hate the idea.”

Currently, most such programs offer to match small donations with public funds for candidates who meet a host of varying requirements. In these cases, taxpayer money goes directly from the government to the campaigns, limiting voters’ connection to the process.

“The benefit of vouchers … is you can think about it as giving the first $100 of your own taxes to the candidate that you prefer,” Durning explained. “Your money is going to the candidate you send it to — so it keeps the choice with the individual voter.”

He added that the use of vouchers can also help the approach appeal to conservative voters, who generally are supportive of voucher-type programs and choice.

But critics call that a misleading argument.

“You’re still taking money from people and giving it to politicians who they may not necessarily want to support,” said Patrick Basham, the founder and director of the Democracy Institute, a libertarian think tank.

“Now, if you, as Voter X, give your four $25 vouchers to Candidate Y, then that’s your choice, but only some of [the money] came from you. It also came from other people.”…(More)”

Politics and the New Machine


Jill Lepore in the New Yorker on “What the turn from polls to data science means for democracy”: “…The modern public-opinion poll has been around since the Great Depression, when the response rate—the number of people who take a survey as a percentage of those who were asked—was more than ninety. The participation rate—the number of people who take a survey as a percentage of the population—is far lower. Election pollsters sample only a minuscule portion of the electorate, not uncommonly something on the order of a couple of thousand people out of the more than two hundred million Americans who are eligible to vote. The promise of this work is that the sample is exquisitely representative. But the lower the response rate the harder and more expensive it becomes to realize that promise, which requires both calling many more people and trying to correct for “non-response bias” by giving greater weight to the answers of people from demographic groups that are less likely to respond. Pollster.com’s Mark Blumenthal has recalled how, in the nineteen-eighties, when the response rate at the firm where he was working had fallen to about sixty per cent, people in his office said, “What will happen when it’s only twenty? We won’t be able to be in business!” A typical response rate is now in the single digits.
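
As a rough illustration of the non-response weighting described above, here is a minimal sketch in Python. The demographic groups, population shares and answers are invented; it simply shows how responses from under-responding groups can be weighted up so that the sample mirrors the population.

```python
# Hypothetical post-stratification weighting; groups, shares and answers are invented.
population_share = {"18-29": 0.21, "30-49": 0.33, "50-64": 0.26, "65+": 0.20}

# Raw respondents: (age group, 1 = holds the opinion being measured, 0 = does not).
# Older groups are over-represented here, mimicking differential non-response.
respondents = [
    ("18-29", 1), ("30-49", 0), ("30-49", 1), ("50-64", 0), ("50-64", 1),
    ("65+", 0), ("65+", 1), ("65+", 0), ("65+", 0), ("65+", 1),
]

sample_counts = {}
for group, _ in respondents:
    sample_counts[group] = sample_counts.get(group, 0) + 1
sample_share = {g: n / len(respondents) for g, n in sample_counts.items()}

# Weight each answer so each group counts in proportion to its population share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

unweighted = sum(answer for _, answer in respondents) / len(respondents)
weighted = (sum(weights[g] * answer for g, answer in respondents)
            / sum(weights[g] for g, _ in respondents))

print(f"unweighted: {unweighted:.2f}")  # dominated by the over-represented 65+ group
print(f"weighted:   {weighted:.2f}")    # re-weighted toward the true population mix
```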

Meanwhile, polls are wielding greater influence over American elections than ever….

Still, data science can’t solve the biggest problem with polling, because that problem is neither methodological nor technological. It’s political. Pollsters rose to prominence by claiming that measuring public opinion is good for democracy. But what if it’s bad?

A “poll” used to mean the top of your head. Ophelia says of Polonius, “His beard as white as snow: All flaxen was his poll.” When voting involved assembling (all in favor of Smith stand here, all in favor of Jones over there), counting votes required counting heads; that is, counting polls. Eventually, a “poll” came to mean the count itself. By the nineteenth century, to vote was to go “to the polls,” where, more and more, voting was done on paper. Ballots were often printed in newspapers: you’d cut one out and bring it with you. With the turn to the secret ballot, beginning in the eighteen-eighties, the government began supplying the ballots, but newspapers kept printing them; they’d use them to conduct their own polls, called “straw polls.” Before the election, you’d cut out your ballot and mail it to the newspaper, which would make a prediction. Political parties conducted straw polls, too. That’s one of the ways the political machine worked….

Ever since Gallup, two things have been called polls: surveys of opinions and forecasts of election results. (Plenty of other surveys, of course, don’t measure opinions but instead concern status and behavior: Do you own a house? Have you seen a doctor in the past month?) It’s not a bad idea to reserve the term “polls” for the kind meant to produce election forecasts. When Gallup started out, he was skeptical about using a survey to forecast an election: “Such a test is by no means perfect, because a preelection survey must not only measure public opinion in respect to candidates but must also predict just what groups of people will actually take the trouble to cast their ballots.” Also, he didn’t think that predicting elections constituted a public good: “While such forecasts provide an interesting and legitimate activity, they probably serve no great social purpose.” Then why do it? Gallup conducted polls only to prove the accuracy of his surveys, there being no other way to demonstrate it. The polls themselves, he thought, were pointless…

If public-opinion polling is the child of a strained marriage between the press and the academy, data science is the child of a rocky marriage between the academy and Silicon Valley. The term “data science” was coined in 1960, one year after the Democratic National Committee hired Simulmatics Corporation, a company founded by Ithiel de Sola Pool, a political scientist from M.I.T., to provide strategic analysis in advance of the upcoming Presidential election. Pool and his team collected punch cards from pollsters who had archived more than sixty polls from the elections of 1952, 1954, 1956, 1958, and 1960, representing more than a hundred thousand interviews, and fed them into a UNIVAC. They then sorted voters into four hundred and eighty possible types (for example, “Eastern, metropolitan, lower-income, white, Catholic, female Democrat”) and sorted issues into fifty-two clusters (for example, foreign aid). Simulmatics’ first task, completed just before the Democratic National Convention, was a study of “the Negro vote in the North.” Its report, which is thought to have influenced the civil-rights paragraphs added to the Party’s platform, concluded that between 1954 and 1956 “a small but significant shift to the Republicans occurred among Northern Negroes, which cost the Democrats about 1 per cent of the total votes in 8 key states.” After the nominating convention, the D.N.C. commissioned Simulmatics to prepare three more reports, including one that involved running simulations about different ways in which Kennedy might discuss his Catholicism….
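
As a hedged sketch of the Simulmatics-style approach, the Python fragment below shows how voter “types” can be formed as combinations of demographic attributes and how archived interviews might be tallied by type and issue cluster. The attribute lists and records are invented for illustration; the actual study used 480 types and 52 issue clusters built from archived polls.

```python
from itertools import product

# Invented attribute lists; the real scheme produced 480 types and 52 issue clusters.
regions   = ["Eastern", "Southern", "Midwestern", "Western"]
settings  = ["metropolitan", "town", "rural"]
incomes   = ["lower-income", "middle-income", "upper-income"]
religions = ["Catholic", "Protestant", "Jewish", "other"]
parties   = ["Democrat", "Republican", "Independent"]

voter_types = list(product(regions, settings, incomes, religions, parties))
print(len(voter_types))  # 4 * 3 * 3 * 4 * 3 = 432 combinations in this invented scheme

# Archived interview records: (voter type, issue cluster, position). Invented examples.
interviews = [
    (("Eastern", "metropolitan", "lower-income", "Catholic", "Democrat"), "foreign aid", "favor"),
    (("Eastern", "metropolitan", "lower-income", "Catholic", "Democrat"), "foreign aid", "oppose"),
    (("Western", "rural", "middle-income", "Protestant", "Republican"), "foreign aid", "oppose"),
]

# Tally positions by (type, issue): the cross-tabulation a simulation would draw on.
tallies = {}
for voter_type, issue, position in interviews:
    cell = tallies.setdefault((voter_type, issue), {"favor": 0, "oppose": 0})
    cell[position] += 1

for (voter_type, issue), cell in tallies.items():
    print(issue, voter_type, cell)
```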

Data science may well turn out to be as flawed as public-opinion polling. But a stage in the development of any new tool is to imagine that you’ve perfected it, in order to ponder its consequences. I asked Hilton to suppose that there existed a flawless tool for measuring public opinion, accurately and instantly, a tool available to voters and politicians alike. Imagine that you’re a member of Congress, I said, and you’re about to head into the House to vote on an act—let’s call it the Smeadwell-Nutley Act. As you do, you use an app called iThePublic to learn the opinions of your constituents. You oppose Smeadwell-Nutley; your constituents are seventy-nine per cent in favor of it. Your constituents will instantly know how you’ve voted, and many have set up an account with Crowdpac to make automatic campaign donations. If you vote against the proposed legislation, your constituents will stop giving money to your reëlection campaign. If, contrary to your convictions but in line with your iThePublic, you vote for Smeadwell-Nutley, would that be democracy? …(More)”

 

Statistical objectivity is a cloak spun from political yarn


Angus Deaton at the Financial Times: “The word data means things that are “given”: baseline truths, not things that are manufactured, invented, tailored or spun. Especially not by politics or politicians. Yet this absolutist view can be a poor guide to using the numbers well. Statistics are far from politics-free; indeed, politics is encoded in their genes. This is ultimately a good thing.

We like to deal with facts, not factoids. We are scandalised when politicians try to censor numbers or twist them, and most statistical offices have protocols designed to prevent such abuse. Headline statistics often seem simple but typically have many moving parts. A clock has two hands and 12 numerals yet underneath there may be thousands of springs, cogs and wheels. Politics is not only about telling the time, or whether the clock is slow or fast, but also about how to design the cogs and wheels. Down in the works, even where the decisions are delegated to bureaucrats and statisticians, there is room for politics to masquerade as science. A veneer of apolitical objectivity can be an effective disguise for a political programme.

Just occasionally, however, the mask drops and the design of the cogs and wheels moves into the glare of frontline politics. Consumer price indexes are leading examples of this. Britain’s first consumer price index was based on spending patterns from 1904. Long before the second world war, these weights were grotesquely outdated. During the war, the cabinet was worried about a wage-price spiral and the Treasury committed to hold the now-irrelevant index below the astonishingly precise value of 201.5 (1914=100) through a policy of food subsidies. It would, for example, respond to an increase in the price of eggs by lowering the price of sugar. Reform of the index would have jeopardised the government’s ability to control it and was too politically risky. The index was not updated until 1947….
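
A small sketch, with invented weights and prices rather than the actual 1904 basket, shows how a fixed-weight index gives this kind of leverage: subsidising one item in the frozen basket can offset a rise in another and hold the headline number at a target, whatever households actually buy.

```python
# Hypothetical fixed-weight (Laspeyres-style) index; weights and prices are invented,
# not the actual 1904 basket. The base period is scaled to 100.
weights = {"bread": 0.30, "eggs": 0.20, "sugar": 0.20, "rent": 0.30}  # frozen spending pattern
base_prices = {"bread": 1.00, "eggs": 1.00, "sugar": 1.00, "rent": 1.00}

def price_index(prices):
    """Weighted average of price relatives, scaled so the base period equals 100."""
    return 100.0 * sum(w * prices[g] / base_prices[g] for g, w in weights.items())

after_egg_rise = {"bread": 1.00, "eggs": 1.50, "sugar": 1.00, "rent": 1.00}
print(price_index(after_egg_rise))        # 110.0: the egg rise alone lifts the index

# Halving the sugar price (e.g. by subsidy) offsets the egg rise in the frozen basket
# and pulls the headline number back to the base level, whatever households buy today.
with_sugar_subsidy = dict(after_egg_rise, sugar=0.50)
print(price_index(with_sugar_subsidy))    # 100.0
```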

These examples show the role of politics needs to be understood, and built in to any careful interpretation of the data. We must always work from multiple sources, and look deep into the cogs and wheels. James Scott, the political scientist, noted that statistics are how the state sees. The state decides what it needs to see and how to see it. That politics infuses every part of this is a testament to the importance of the numbers; lives depend on what they show.

For global poverty or hunger statistics, there is no state and no one’s material wellbeing depends on them. Politicians are less likely to interfere with such data, but this also removes a vital source of monitoring and accountability. Politics is a danger to good data; but without politics data are unlikely to be good, or at least not for long….(More)”

 

Interdisciplinary Perspectives on Trust


Book edited by Shockley, E., Neal, T.M.S., PytlikZillig, L.M., and Bornstein, B.H.:  “This timely collection explores trust research from many angles while ably demonstrating the potential of cross-discipline collaboration to deepen our understanding of institutional trust. Citing, among other things, current breakdowns of trust in prominent institutions, the book presents a multilevel model identifying universal aspects of trust as well as domain- and context-specific variations deserving further study. Contributors analyze similarities and differences in trust across public domains from politics and policing to medicine and science, and across languages and nations. Innovative strategies for measuring and assessing trust also shed new light on this essentially human behavior.

Highlights of the coverage:

  • Consensus on conceptualizations and definitions of trust: are we there yet?
  • Differentiating between trust and legitimacy in public attitudes towards legal authority.
  • Examining the relationship between interpersonal and institutional trust in political and health care contexts.
  • Trust as a multilevel phenomenon across contexts.
  • Institutional trust across cultures.
  • The “dark side” of institutional trust….(more)”

Simpler, smarter and innovative public services


Northern Future Forum: “How can governments deliver services better and more efficiently? This is one of the key questions governments all over the world are constantly dealing with. In recent years countries have had to cut back government spending at the same time as demand from citizens for more high quality service is increasing. Public institutions, just as companies, must adapt and develop over time. Rapid technological advancements and societal changes have forced the public sector to reform the way it operates and delivers services. The public sector needs to innovate to adapt and advance in the 21st century.
There are a number of reasons why public sector innovation matters (Potts and Kastelle 2010):

  • The size of the public sector as a percentage of GDP makes it a large component of the macro economy in many countries. Public sector innovation can affect productivity growth by reducing input costs, improving organisation and increasing the value of outputs.
  • The need for evolving policy to match evolving economies.
  • The public sector sets the rules of the game for private sector innovation.

As pointed out there is clearly an imperative to innovate. However, public sector innovation can be difficult, as public services deal with complex problems that have contradictory and diverse demands, need to respond quickly, whilst being transparent and accountable. Public sector innovation has a part to play to grow future economies, but also to develop the solutions to the biggest challenges facing most western nations today. These problems won’t be solved without strong leadership from the public sector and governments of the future. These issues are (Pollitt 2013):

  • Demographic change. The effects that the ageing of the general population will have on public services.
  • Climate change.
  • Economic trajectories, especially the effects of the current period of austerity.
  • Technological developments.
  • Public trust in government.
  • The changing nature of politics, with declining party loyalty, personalisation of politics, new parties, more media coverage etc.

According to the publications of national governments, the OECD, World Bank and the big international management consultancies, these issues will have major long-term impacts and implications (Pollitt 2013).
This background paper looks at how governments can use innovation to help grow their economies and solve some of the biggest challenges of this generation, and at what is essential to make that happen. Firstly, a difficult economic environment in many countries tends to constrain the capacity of governments to deliver quality public services. Fiscal pressures, demographic changes, and diverse public and private demands all challenge traditional approaches and call for a rethinking of the way governments operate. There is a growing recognition that the challenges facing the public sector are too complex to be solved by public sector institutions working alone, and that innovative solutions to public challenges require improved internal collaboration, as well as the involvement of external stakeholders partnering with public sector organisations (OECD 2015 a).
Willingness to solve some of these problems is not enough. The system that most western countries have created is in many ways a barrier to innovation. For instance, the public sector can lack innovative leaders and champions (Bason 2010, European Commission 2013); the way money is allocated, and reward and incentive systems, can often hinder innovative performance (Kohli and Mulgan 2010); there may be limited knowledge of how to apply innovation processes and methods (European Commission 2013); and departmental silos can create significant challenges to ‘joined up’ problem solving (Carstensen and Bason 2012, Queensland Public Service Commission 2009).
There is not an established definition of innovation in the public sector. However some common elements have emerged from national and international research projects. The OECD has identified the following characteristics of public sector innovation:

  • Novelty: Innovations introduce new approaches, relative to the context where they are introduced.
  • Implementation: Innovations must be implemented, not just an idea.
  • Impact: Innovations aim to result in better public results including efficiency, effectiveness, and user or employee satisfaction.

Public sector innovation does not happen in a vacuum: problems need to be identified; ideas translated into projects which can be tested and then scaled up. For this to happen public sector organisations need to identify the processes and structures which can support and accelerate the innovation activity.
Figure 1. Key components for successful public sector innovation.
The barriers to public sector innovation are in many ways also the keys to its success. This background paper discusses four key components of public sector innovation success and how to turn them from barriers into supporters of innovation. The framework and the policy levers can play a key role in enabling and sustaining the innovation process:
These levers are:

  • Institutions. Innovation is likely to emerge from the interactions between different bodies.
  • Human Resources. Create ability, motivate and give the right opportunities.
  • Funding. Increase flexibility in allocating and managing financial resources.
  • Regulations. Processes need to be shortened and made more efficient.

Realising the potential of innovation means understanding which factors are most effective in creating the conditions for innovation to flourish, and assessing their relative impact on the capacity and performance of public sector organisations….(More). PDF: Simpler, smarter and innovative public services

Room for a View: Democracy as a Deliberative System


Involve: “Democratic reform comes in waves, propelled by technological, economic, political and social developments. There are periods of rapid change, followed by relative quiet.

We are currently in a period of significant political pressure for change to our institutions of democracy and government. With so many changes under discussion it is critically important that those proposing and carrying out reforms understand the impact that different reforms might have.

Most discussions of democratic reform focus on electoral democracy. However, for all their importance in the democratic system, elections rarely reveal what voters think clearly enough for elected representatives to act on them. Changing the electoral system will not alone significantly increase the level of democratic control held by citizens.

Room for a View, by Involve’s director Simon Burall, looks at democratic reform from a broader perspective than that of elections. Drawing on the work of democratic theorists, it uses a deliberative systems approach to examine the state of UK democracy. Rather than focusing exclusively on the extent to which individuals and communities are represented within institutions, it is equally concerned with the range of views present and how they interact.

Adapting the work of the democratic theorist John Dryzek, the report identifies seven components of the UK’s democratic system, describing and analysing the condition of each in turn. Assessing the UK’s democracy through this lens reveals it to be in fragile health. The representation of alternative views and narratives in all of the UK system’s seven components is poor, the components are weakly connected and, despite some positive signs, deliberative capacity is decreasing.

Room for a View suggests that a focus on the key institutions isn’t enough. If the health of UK democracy is to be improved, we need to move away from thinking about the representation of individual voters to thinking about the representation of views, perspectives and narratives. Doing this will fundamentally change the way we approach democratic reform.

Big data problems we face today can be traced to the social ordering practices of the 19th century.


Hamish Robertson and Joanne Travaglia in LSE’s The Impact Blog: “This is not the first ‘big data’ era but the second. The first was the explosion in data collection that occurred from the early 19th century – Hacking’s ‘avalanche of numbers’, precisely situated between 1820 and 1840. This was an analogue big data era, different to our current digital one but characterised by some very similar problems and concerns. Contemporary problems of data analysis and control involve a variety of accepted factors that make data ‘big’, generally including size, complexity and technology issues. We also suggest that digitisation is a central process in this second big data era, one that seems obvious but which also appears to have reached a new threshold. Until a decade or so ago ‘big data’ looked just like a digital version of conventional analogue records and systems, ones whose management had become normalised through statistical and mathematical analysis. Now, however, we see a level of concern and anxiety similar to that faced in the first big data era.

This situation brings with it a socio-political dimension of interest to us, one in which our understanding of people and our actions on individuals, groups and populations are deeply implicated. The collection of social data had a purpose – understanding and controlling the population in a time of significant social change. To achieve this, new kinds of information and new methods for generating knowledge were required. Many ideas, concepts and categories developed during that first data revolution remain intact today, some uncritically accepted more now than when they were first developed. In this piece we draw out some connections between these two data ‘revolutions’ and the implications for the politics of information in contemporary society. It is clear that many of the problems in this first big data age and, more specifically, their solutions persist down to the present big data era….Our question then is how do we go about re-writing the ideological inheritance of that first data revolution? Can we or will we unpack the ideological sequelae of that past revolution during this present one? The initial indicators are not good in that there is a pervasive assumption in this broad interdisciplinary field that reductive categories are both necessary and natural. Our social ordering practices have influenced our social epistemology. We run the risk in the social sciences of perpetuating the ideological victories of the first data revolution as we progress through the second. The need for critical analysis grows apace not just with the production of each new technique or technology but with the uncritical acceptance of the concepts, categories and assumptions that emerged from that first data revolution. That first data revolution proved to be a successful anti-revolutionary response to the numerous threats to social order posed by the incredible changes of the nineteenth century, rather than the Enlightenment emancipation that was promised. (More)”

This is part of a wider series on the Politics of Data. For more on this topic, also see Mark Carrigan’s Philosophy of Data Science interview series and the Discover Society special issue on the Politics of Data (Science).