Between Governance of the Past and Technology of the Future


Think Piece by Heather Grabbe for the ESPAS 2016 conference: "In many parts of everyday life, voters are used to a consumer experience where they get instant feedback and personal participation; but party membership, ballot boxes and stump speeches do not offer the same speed, control or personal engagement. The institutions of representative democracy at national and EU level — political parties, elected members, law-making — do not offer the same quality of experience for their ultimate consumers.

This matters because it is causing voters to switch off. Broad participation by most of the population in the practice of democracy is vital for societies to remain open because it ensures pluralism and prevents takeover of power by narrow interests. But in some countries and some elections, turnout is regularly below a third of registered voters, especially in European Parliament elections.

The internet is driving the major trends that create this disconnection and disruption. Here are four vital areas in which politics should adapt, including at EU level:

  • Expectation. Voters have a growing sense that political parties and law-making are out of touch, but not that politics is irrelevant. …
  • Affiliation. … people are interested in new forms of affiliation, especially through social media and alternative networks. …
  • Location. Digital technology allows people to find myriad new ways to express their political views publicly, outside of formal political spaces. …
  • Information. The internet has made vast amounts of data and a huge range of information sources across an enormous spectrum of issues available to every human with an internet connection. How is this information overload affecting engagement with politics? ….(More)”

What’s wrong with big data?


James Bridle in the New Humanist: “In a 2008 article in Wired magazine entitled “The End of Theory”, Chris Anderson argued that the vast amounts of data now available to researchers made the traditional scientific process obsolete. No longer would they need to build models of the world and test them against sampled data. Instead, the complexities of huge and totalising datasets would be processed by immense computing clusters to produce truth itself: “With enough data, the numbers speak for themselves.” As an example, Anderson cited Google’s translation algorithms which, with no knowledge of the underlying structures of languages, were capable of inferring the relationship between them using extensive corpora of translated texts. He extended this approach to genomics, neurology and physics, where scientists are increasingly turning to massive computation to make sense of the volumes of information they have gathered about complex systems. In the age of big data, he argued, “Correlation is enough. We can stop looking for models.”

This belief in the power of data, of technology untrammelled by petty human worldviews, is the practical cousin of more metaphysical assertions. A belief in the unquestionability of data leads directly to a belief in the truth of data-derived assertions. And if data contains truth, then it will, without moral intervention, produce better outcomes. Speaking at Google’s private London Zeitgeist conference in 2013, Eric Schmidt, Google Chairman, asserted that “if they had had cellphones in Rwanda in 1994, the genocide would not have happened.” Schmidt’s claim was that technological visibility – the rendering of events and actions legible to everyone – would change the character of those actions. Not only is this statement historically inaccurate (there was plenty of evidence available of what was occurring during the genocide from UN officials, US satellite photographs and other sources), it’s also demonstrably untrue. Analysis of unrest in Kenya in 2007, when over 1,000 people were killed in ethnic conflicts, showed that mobile phones not only spread but accelerated the violence. But you don’t need to look to such extreme examples to see how a belief in technological determinism underlies much of our thinking and reasoning about the world.

“Big data” is not merely a business buzzword, but a way of seeing the world. Driven by technology, markets and politics, it has come to determine much of our thinking, but it is flawed and dangerous. It runs counter to our actual findings when we employ such technologies honestly and with the full understanding of their workings and capabilities. This over-reliance on data, which I call “quantified thinking”, has come to undermine our ability to reason meaningfully about the world, and its effects can be seen across multiple domains.

The assertion is hardly new. Writing in the Dialectic of Enlightenment in 1947, Theodor Adorno and Max Horkheimer decried “the present triumph of the factual mentality” – the predecessor to quantified thinking – and succinctly analysed the big data fallacy, set out by Anderson above. “It does not work by images or concepts, by the fortunate insights, but refers to method, the exploitation of others’ work, and capital … What men want to learn from nature is how to use it in order wholly to dominate it and other men. That is the only aim.” What is different in our own time is that we have built a world-spanning network of communication and computation to test this assertion. While it occasionally engenders entirely new forms of behaviour and interaction, the network most often shows to us with startling clarity the relationships and tendencies which have been latent or occluded until now. In the face of the increased standardisation of knowledge, it becomes harder and harder to argue against quantified thinking, because the advances of technology have been conjoined with the scientific method and social progress. But as I hope to show, technology ultimately reveals its limitations….

“Eroom’s law” – Moore’s law backwards – was recently formulated to describe a problem in pharmacology. Drug discovery has been getting more expensive. Since the 1950s the number of drugs approved for use in human patients per billion US dollars spent on research and development has halved every nine years. This problem has long perplexed researchers. According to the principles of technological growth, the trend should be in the opposite direction. In a 2012 paper in Nature entitled “Diagnosing the decline in pharmaceutical R&D efficiency” the authors propose and investigate several possible causes for this. They begin with social and physical influences, such as increased regulation, increased expectations and the exhaustion of easy targets (the “low-hanging fruit” problem). Each of these is – with qualifications – disposed of, leaving open the question of the discovery process itself….(More)
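The quantitative claim above, approvals per billion dollars halving every nine years, is a simple exponential decay and can be sketched in a few lines. The 1950 baseline value below is an illustrative assumption, not a figure from the Nature paper:

```python
# Eroom's law: drugs approved per $1bn of R&D spend halves every nine years.
# The baseline of 100 approvals per $1bn in 1950 is an illustrative assumption.

def approvals_per_billion(year, baseline=100.0, base_year=1950, halving_years=9):
    """Approvals per $1bn R&D, assuming a halving every `halving_years` years."""
    return baseline * 0.5 ** ((year - base_year) / halving_years)

for year in (1950, 1977, 2004):
    print(year, round(approvals_per_billion(year), 2))
# 1977 is three halvings after 1950 (12.5% of baseline);
# 2004 is six halvings after 1950 (about 1.6%).
```

Whatever the baseline, the ratio between any two years nine apart is fixed at one half, which is why the trend is described as Moore's law run backwards.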

Make Democracy Great Again: Let’s Try Some ‘Design Thinking’


Ken Carbone in the Huffington Post: “Allow me to begin with the truth. I’ve never studied political science, run for public office nor held a position in government. For the last forty years I’ve led a design agency working with enduring brands across the globe. As with any experienced person in my profession, I have used research, deductive reasoning, logic and “design thinking“ to solve complex problems and create opportunities. Great brands that are showing their age turn to our agency to get back on course. In this light, I believe American democracy is a prime target for some retooling….

The present campaign cycle has left many voters wondering how such divisiveness and national embarrassment could be happening in the land of the free and home of the brave. This could be viewed as symptomatic of deeper structural problems in our tradition-bound, 240-year-old democracy. Great brands operate on an “innovate or die” model to ensure success. The continual improvement of how a business operates and adapts to market conditions is a sound and critical practice.

Although the current election frenzy will soon be over, I want to examine three challenges to our election process and propose possible solutions for consideration. I’ll use the same diagnostic thinking I use with major corporations:

Term Limits…

Voting and Voter registration…

Political Campaigns…

In June of this year I attended the annual leadership conference of AIGA, the professional association for design, in Raleigh, NC. A provocative question posed to a select group of designers was “What would you do if you were Secretary of Design?” The responses addressed issues concerning positive social change, education and Veterans Affairs. The audience was made up of several hundred trained professionals whose everyday problem-solving methods encourage divergent thinking to explore many solutions (possible or impossible) and then use convergent thinking to select and realize the best resolution. This is the very definition of “design thinking,” and it leads to progress….(More)”.

Is Social Media Killing Democracy?


Phil Howard at Culture Digitally: “This is the big year for computational propaganda—using immense data sets to manipulate public opinion over social media.  Both the Brexit referendum and US election have revealed the limits of modern democracy, and social media platforms are currently setting those limits. 

Platforms like Twitter and Facebook now provide a structure for our political lives.  We’ve always relied on many kinds of sources for our political news and information.  Family, friends, news organizations, charismatic politicians certainly predate the internet.  But whereas those are sources of information, social media now provides the structure for political conversation.  And the problem is that these technologies permit too much fake news, encourage our herding instincts, and aren’t expected to provide public goods.

First, social algorithms allow fake news stories from untrustworthy sources to spread like wildfire over networks of family and friends.  …

Second, social media algorithms provide very real structure to what political scientists often call “elective affinity” or “selective exposure”…

The third problem is that technology companies, including Facebook and Twitter, have been given a “moral pass” on the obligations we hold journalists and civil society groups to….

Facebook has run several experiments now, published in scholarly journals, demonstrating that they have the ability to accurately anticipate and measure social trends.  Whereas journalists and social scientists feel an obligation to openly analyze and discuss public preferences, we do not expect this of Facebook.  The network effects that clearly were unmeasured by pollsters were almost certainly observable to Facebook.  When it comes to news and information about politics, or public preferences on important social questions, Facebook has a moral obligation to share data and prevent computational propaganda.  The Brexit referendum and US election have taught us that Twitter and Facebook are now media companies.  Their engineering decisions are effectively editorial decisions, and we need to expect more openness about how their algorithms work.  And we should expect them to deliberate about their editorial decisions.

There are some ways to fix these problems.  Opaque software algorithms shape what people find in their news feeds.  We’ve all noticed fake news stories, often called clickbait, and while these can be an entertaining part of using the internet, it is bad when they are used to manipulate public opinion.  Such algorithms also power “bots” on social media platforms like Twitter, where they were used in both the Brexit and US presidential campaigns to aggressively advance the case for leaving Europe and the case for electing Trump.  Similar algorithms work behind the scenes on Facebook, where they govern what content from your social networks actually gets your attention. 

So the first way to strengthen democratic practices is for academics, journalists, policy makers and the interested public to audit social media algorithms….(More)”.

Transforming government through digitization


Bjarne Corydon, Vidhya Ganesan, and Martin Lundqvist at McKinsey: “By digitizing processes and making organizational changes, governments can enhance services, save money, and improve citizens’ quality of life.

As companies have transformed themselves with digital technologies, people are calling on governments to follow suit. By digitizing, governments can provide services that meet the evolving expectations of citizens and businesses, even in a period of tight budgets and increasingly complex challenges. Our estimates suggest that government digitization, using current technology, could generate over $1 trillion annually worldwide.

Digitizing a government requires attention to two major considerations: the core capabilities for engaging citizens and businesses, and the organizational enablers that support those capabilities (exhibit). These make up a framework for setting digital priorities. In this article, we look at the capabilities and enablers in this framework, along with guidelines and real-world examples to help governments seize the opportunities that digitization offers.

A digital government has core capabilities supported by organizational enablers.

Governments typically center their digitization efforts on four capabilities: services, processes, decisions, and data sharing. For each, we believe there is a natural progression from quick wins to transformative efforts….(More)”

See also: Digital by default: A guide to transforming government (PDF–474KB) and  “Never underestimate the importance of good government,”  a New at McKinsey blog post with coauthor Bjarne Corydon, director of the McKinsey Center for Government.

Portugal has announced the world’s first nationwide participatory budget


Graça Fonseca at Apolitical: “Portugal has announced the world’s first participatory budget on a national scale. The project will let people submit ideas for what the government should spend its money on, and then vote on which ideas are adopted.

Although participatory budgeting has become increasingly popular around the world in the past few years, it has so far been confined to cities and regions, and no country that we know of has attempted it nationwide. To reach as many people as possible, Portugal is also examining another innovation: letting people cast their votes via ATMs.

‘It’s about quality of life, it’s about the quality of public space, it’s about the quality of life for your children, it’s about your life, OK?’ Graça Fonseca, the minister responsible, told Apolitical. ‘And you have a huge deficit of trust between people and the institutions of democracy. That’s the point we’re starting from and, if you look around, Portugal is not an exception in that among Western societies. We need to build that trust and, in my opinion, it’s urgent. If you don’t do anything, in ten, twenty years you’ll have serious problems.’

Although the official window for proposals begins in January, some have already been submitted to the project’s website. One suggests equipping kindergartens with technology to teach children about robotics. Using the open-source platform Arduino, the plan is to let children play with the tech and so foster scientific understanding from the earliest age.

Proposals can be made in the areas of science, culture, agriculture and lifelong learning, and there will be more than forty events in the new year for people to present and discuss their ideas.

The organisers hope that it will go some way to restoring closer contact between government and its citizens. Previous projects have shown that people who don’t vote in general elections often do cast their ballot on the specific proposals that participatory budgeting entails. Moreover, those who make the proposals often become passionate about them, campaigning for votes, flyering, making YouTube videos, going door-to-door and so fuelling a public discussion that involves ever more people in the process.

On the other side, it can bring public servants nearer to their fellow citizens by sharpening their understanding of what people want and what their priorities are. It can also raise the quality of public services by directing them more precisely to where they’re needed as well as by tapping the collective intelligence and imagination of thousands of participants….

Although it will not be used this year, because the project is still very much in the trial phase, the use of ATMs is potentially revolutionary. As Fonseca puts it, ‘In every remote part of the country, you might have nothing else, but you have an ATM.’ Moreover, an ATM could display proposals and allow people to vote directly, not least because it already contains a secure way of verifying their identity. At the moment, for comparison, people can vote by text or online, sending in the number from their ID card, which is checked against a database….(More)”.
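The verification flow described above (a vote submitted with an ID number, checked against a register before it is counted) can be sketched as follows. The register contents, ID formats, and return messages are all hypothetical illustrations, not details of the Portuguese system:

```python
# Hypothetical sketch of the vote flow described: a ballot arrives with a
# citizen ID number, which is checked against a register before counting.
# IDs, proposals, and messages below are illustrative only.

REGISTER = {"12345678", "87654321"}   # illustrative registered ID numbers
votes = {}                            # citizen ID -> proposal voted for

def cast_vote(citizen_id, proposal):
    """Accept one vote per registered ID; reject unknown or repeat voters."""
    if citizen_id not in REGISTER:
        return "rejected: unknown ID"
    if citizen_id in votes:
        return "rejected: already voted"
    votes[citizen_id] = proposal
    return "accepted"

print(cast_vote("12345678", "robotics kits for kindergartens"))  # accepted
print(cast_vote("12345678", "another proposal"))                 # rejected: already voted
```

The appeal of the ATM channel is that the identity check in the first branch is already solved hardware: the machine authenticates the cardholder before any menu is shown.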

Wikipedia’s not as biased as you might think


Ananya Bhattacharya in Quartz: “The internet is as open as people make it. Often, people limit their Facebook and Twitter circles to like-minded people and only follow certain subreddits, blogs, and news sites, creating an echo chamber of sorts. In a sea of biased content, Wikipedia is one of the few online outlets that strives for neutrality. After 15 years in operation, it’s starting to see results.

Researchers at Harvard Business School evaluated almost 4,000 articles in Wikipedia’s online database against the same entries in Encyclopaedia Britannica to compare their biases. They focused on English-language articles about US politics, especially controversial topics, that appeared in both outlets in 2012.

“That is just not a recipe for coming to a conclusion,” Shane Greenstein, one of the study’s authors, said in an interview. “We were surprised that Wikipedia had not failed, had not fallen apart in the last several years.”

Greenstein and his co-author Feng Zhu categorized each article as “blue” or “red.” Drawing from research in political science, they identified terms that are idiosyncratic to each party. For instance, political scientists have identified that Democrats were more likely to use phrases such as “war in Iraq,” “civil rights,” and “trade deficit,” while Republicans used phrases such as “economic growth,” “illegal immigration,” and “border security.”…
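The categorization method described can be sketched as a simple phrase-count slant score. The phrase lists below are the examples quoted in the article; the scoring rule itself is a hypothetical simplification of the Greenstein–Zhu measure, not their actual procedure:

```python
# Hypothetical simplification of phrase-based slant scoring: count
# occurrences of party-idiosyncratic phrases and compare the totals.
# Phrase lists are the examples quoted in the article.

DEM_PHRASES = ["war in iraq", "civil rights", "trade deficit"]
REP_PHRASES = ["economic growth", "illegal immigration", "border security"]

def slant(text):
    """Label text 'blue', 'red', or 'neutral' from partisan phrase counts."""
    t = text.lower()
    dem = sum(t.count(p) for p in DEM_PHRASES)
    rep = sum(t.count(p) for p in REP_PHRASES)
    if dem > rep:
        return "blue"
    if rep > dem:
        return "red"
    return "neutral"

print(slant("The war in Iraq widened the trade deficit."))  # blue
print(slant("Border security drives economic growth."))     # red
```

On this view an article drifts toward neutrality when successive revisions balance the two phrase counts, which is consistent with the convergence the authors report for heavily revised articles.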

“In comparison to expert-based knowledge, collective intelligence does not aggravate the bias of online content when articles are substantially revised,” the authors wrote in the paper. “This is consistent with a best-case scenario in which contributors with different ideologies appear to engage in fruitful online conversations with each other, in contrast to findings from offline settings.”

More surprisingly, the authors found that the 2.8 million registered volunteer editors who were reviewing the articles also became less biased over time. “You can ask questions like ‘do editors with red tendencies tend to go to red articles or blue articles?’” Greenstein said. “You find a prevalence of opposites attract, and that was striking.” The researchers even identified the political stance for a number of anonymous editors based on their IP locations, and the trend held steadfast….(More)”

What We Should Mean When We Talk About Citizen Engagement


Eric Gordon in Governing: “…But here’s the problem: the institutional language of engagement has been defined by its measurement. Chief engagement officers in corporations are measuring milliseconds on web pages and clicks on ads, not relations among people. This is disproportionately influencing the values of democracy and the responsibility of public institutions to protect them.

Too often, when government talks about engagement, it is talking about those things that are measurable, but it is providing mandates to employees imbued with ambiguity. For example, the executive order issued by Mayor Murray in Seattle is a bold directive for the “timely implementation by all City departments of equitable outreach and engagement practices that reaffirm the City’s commitment to inclusive participation.”

This extraordinary mayoral mandate reflects clear democratic values, but it lacks clarity of methods. It reflects a need to use digital technology to enhance process, but it doesn’t explain why. This in no way is meant as a criticism of Seattle’s effort; rather, it is simply meant to illustrate the complexity of engagement in practice. Departments are rewarded for quantifiable efficiency, not relationships. Just because something is called engagement, this fundamental truth won’t change.

Government needs to be much more clear about what it really means when it talks about engagement. In 2015, Living Cities and the Citi Foundation launched the City Accelerator on Public Engagement, which was an effort to source and support effective practices of public engagement in city government. This 18-month project, based on a cohort of five cities throughout the United States, is just now coming to an end. Out of it came several lasting insights, one of which I will share here. City governments are institutions in transition that need to ask why people should care.

After the election, who is going to care about government? How do you get people to care about the services that government provides? How do you get people to care about the health outcomes in their neighborhoods? How do you get people to care about ensuring accessible, high-quality public education?

I want to propose that when government talks about civic engagement, it is really talking about caring. When you care about something, you make a decision to be attentive to that thing. But “caring about” is one end of what I’ll call a spectrum of caring. On the other end, there is “caring for,” when, as described by philosopher Nel Noddings, “what we do depends not upon rules, or at least not wholly on rules — not upon a prior determination of what is fair or equitable — but upon a constellation of conditions that is viewed through both the eyes of the one-caring and the eyes of the cared-for.”

In short, caring-for is relational. When one cares for another, the outcomes of an encounter are not predetermined, but arise through relation….(More)”.

The Participatory Condition in the Digital Age


Book edited by Darin Barney, Gabriella Coleman, Christine Ross, Jonathan Sterne, and Tamar Tembeck:

“Just what is the “participatory condition”? It is the situation in which taking part in something with others has become both environmental and normative. The fact that we have always participated does not mean we have always lived under the participatory condition. What is distinctive about the present is the extent to which the everyday social, economic, cultural, and political activities that comprise simply being in the world have been thematized and organized around the priority of participation.

Structured along four axes investigating the relations between participation and politics, surveillance, openness, and aesthetics, The Participatory Condition in the Digital Age comprises fifteen essays that explore the promises, possibilities, and failures of contemporary participatory media practices as related to power, Occupy Wall Street, the Arab Spring uprisings, worker-owned cooperatives for the post-Internet age; paradoxes of participation, media activism, open source projects; participatory civic life; commercial surveillance; contemporary art and design; and education.

This book represents the most comprehensive and transdisciplinary endeavor to date to examine the nature, place, and value of participation in the digital age. Just as in 1979, when Jean-François Lyotard proposed that “the postmodern condition” was characterized by the questioning of historical grand narratives, The Participatory Condition in the Digital Age investigates how participation has become a central preoccupation of our time….(More)”

Big Data Is Not a Monolith


Book edited by Cassidy R. Sugimoto, Hamid R. Ekbia and Michael Mattioli: “Big data is ubiquitous but heterogeneous. Big data can be used to tally clicks and traffic on web pages, find patterns in stock trades, track consumer preferences, and identify linguistic correlations in large corpora of texts. This book examines big data not as an undifferentiated whole but contextually, investigating the varied challenges posed by big data for health, science, law, commerce, and politics. Taken together, the chapters reveal a complex set of problems, practices, and policies.

The advent of big data methodologies has challenged the theory-driven approach to scientific knowledge in favor of a data-driven one. Social media platforms and self-tracking tools change the way we see ourselves and others. The collection of data by corporations and government threatens privacy while promoting transparency. Meanwhile, politicians, policy makers, and ethicists are ill-prepared to deal with big data’s ramifications. The contributors look at big data’s effect on individuals as it exerts social control through monitoring, mining, and manipulation; big data and society, examining both its empowering and its constraining effects; big data and science, considering issues of data governance, provenance, reuse, and trust; and big data and organizations, discussing data responsibility, “data harm,” and decision making….(More)”