Understanding democracy as a product of citizen performances reduces the need for a defined ‘people’


Liron Lavi at Democratic Audit: “Dēmokratía, literally ‘the rule of the people’, is the basis for democracy as a political regime. However, ‘the people’ is a heterogeneous, open, and dynamic entity. So, how can we think about democracy without the people as a coherent entity, yet as the source of democracy? I employ a performative theorisation of democracy in order to answer this question. Democracy, I suggest, is an effect produced by repetitive performative acts and ‘the people’ is produced as the source of democratic sovereignty.

A quick search on ‘democratic performance’ will usually yield results (and concerns) regarding voter competence, government accountability, liberal values, and legitimacy. However, from the perspective of performative theory, the term gains a rather different meaning (as has been discussed at length by Judith Butler). It suggests that democracy is not a pre-given structure but rather needs to be constructed repeatedly. Thus, for a democracy to be recognised and maintained as such it needs to be performed by citizens, institutions, office-holders, the media, etc. Acts made by these players – voting, demonstrating, decision- and law-making, etc. – give form to the abstract concept of democracy, thus producing it as their (imagined) source. There is, therefore, no finite set of actions that can determine once and for all that a social structure is indeed a democracy, for the regime is not a stable and pre-given structure, but rather produced and imagined through a multitude of acts and procedures.

Elections, for example, are a democratic performance insofar as they are perceived as an effective tool for expressing the public’s preferences and choosing its representatives and desired policies. Polling stations are therefore the site in which democracy is constituted: insofar as all eligible members (can) participate in the act of voting, they are constructed as the source of sovereignty. By this, elections produce democracy as their effect and their source, and hold together the political imagination of democracy. And they do this periodically, thus opening options for new variations (and failures) in the democratic effect they produce. Elections are therefore not only an opportunity to replace representatives and incumbents, but also an opportunity to perform democracy, shape it, alter it, and load it with various meanings….(More)”

Researchers wrestle with a privacy problem


Erika Check Hayden at Nature: “The data contained in tax returns, health and welfare records could be a gold mine for scientists — but only if they can protect people’s identities….In 2011, six US economists tackled a question at the heart of education policy: how much does great teaching help children in the long run?

They started with the records of more than 11,500 Tennessee schoolchildren who, as part of an experiment in the 1980s, had been randomly assigned to high- and average-quality teachers between the ages of five and eight. Then they gauged the children’s earnings as adults from federal tax returns filed in the 2000s. The analysis showed that the benefits of a good early education last for decades: each year of better teaching in childhood boosted an individual’s annual earnings by some 3.5% on average. Other data showed the same individuals besting their peers on measures such as university attendance, retirement savings, marriage rates and home ownership.

The economists’ work was widely hailed in education-policy circles, and US President Barack Obama cited it in his 2012 State of the Union address when he called for more investment in teacher training.

But for many social scientists, the most impressive thing was that the authors had been able to examine US federal tax returns: a closely guarded data set that was then available to researchers only with tight restrictions. This has made the study an emblem for both the challenges and the enormous potential power of ‘administrative data’ — information collected during routine provision of services, including tax returns, records of welfare benefits, data on visits to doctors and hospitals, and criminal records. Unlike Internet searches, social-media posts and the rest of the digital trails that people establish in their daily lives, administrative data cover entire populations with minimal self-selection effects: in the US census, for example, everyone sampled is required by law to respond and tell the truth.

This puts administrative data sets at the frontier of social science, says John Friedman, an economist at Brown University in Providence, Rhode Island, and one of the lead authors of the education study. “They allow researchers to not just get at old questions in a new way,” he says, “but to come at problems that were completely impossible before.”….

But there is also concern that the rush to use these data could pose new threats to citizens’ privacy. “The types of protections that we’re used to thinking about have been based on the twin pillars of anonymity and informed consent, and neither of those hold in this new world,” says Julia Lane, an economist at New York University. In 2013, for instance, researchers showed that they could uncover the identities of supposedly anonymous participants in a genetic study simply by cross-referencing their data with publicly available genealogical information.

Many people are looking for ways to address these concerns without inhibiting research. Suggested solutions include policy measures, such as an international code of conduct for data privacy, and technical methods that allow the use of the data while protecting privacy. Crucially, notes Lane, although preserving privacy sometimes complicates researchers’ lives, it is necessary to uphold the public trust that makes the work possible.

“Difficulty in access is a feature, not a bug,” she says. “It should be hard to get access to data, but it’s very important that such access be made possible.” Many nations collect administrative data on a massive scale, but only a few, notably in northern Europe, have so far made it easy for researchers to use those data.

In Denmark, for instance, every newborn child is assigned a unique identification number that tracks his or her lifelong interactions with the country’s free health-care system and almost every other government service. In 2002, researchers used data gathered through this identification system to retrospectively analyse the vaccination and health status of almost every child born in the country from 1991 to 1998 — 537,000 in all. At the time, it was the largest study ever to disprove the now-debunked link between measles vaccination and autism.

Other countries have begun to catch up. In 2012, for instance, Britain launched the unified UK Data Service to facilitate research access to data from the country’s census and other surveys. A year later, the service added a new Administrative Data Research Network, which has centres in England, Scotland, Northern Ireland and Wales to provide secure environments for researchers to access anonymized administrative data.

In the United States, the Census Bureau has been expanding its network of Research Data Centers, which currently includes 19 sites around the country at which researchers with the appropriate permissions can access confidential data from the bureau itself, as well as from other agencies. “We’re trying to explore all the available ways that we can expand access to these rich data sets,” says Ron Jarmin, the bureau’s assistant director for research and methodology.

In January, a group of federal agencies, foundations and universities created the Institute for Research on Innovation and Science at the University of Michigan in Ann Arbor to combine university and government data and measure the impact of research spending on economic outcomes. And in July, the US House of Representatives passed a bipartisan bill to study whether the federal government should provide a central clearing house of statistical administrative data.

Yet vast swathes of administrative data are still inaccessible, says George Alter, director of the Inter-university Consortium for Political and Social Research based at the University of Michigan, which serves as a data repository for approximately 760 institutions. “Health systems, social-welfare systems, financial transactions, business records — those things are just not available in most cases because of privacy concerns,” says Alter. “This is a big drag on research.”…

Many researchers argue, however, that there are legitimate scientific uses for such data. Jarmin says that the Census Bureau is exploring the use of data from credit-card companies to monitor economic activity. And researchers funded by the US National Science Foundation are studying how to use public Twitter posts to keep track of trends in phenomena such as unemployment.

….Computer scientists and cryptographers are experimenting with technological solutions. One, called differential privacy, adds a small amount of distortion to a data set, so that querying the data gives a roughly accurate result without revealing the identity of the individuals involved. The US Census Bureau uses this approach for its OnTheMap project, which tracks workers’ daily commutes.

….In any case, although synthetic data potentially solve the privacy problem, there are some research applications that cannot tolerate any noise in the data. A good example is the work showing the effect of neighbourhood on earning potential [3], which was carried out by Raj Chetty, an economist at Harvard University in Cambridge, Massachusetts. Chetty needed to track specific individuals to show that the areas in which children live their early lives correlate with their ability to earn more or less than their parents. In subsequent studies [5], Chetty and his colleagues showed that moving children from resource-poor to resource-rich neighbourhoods can boost their earnings in adulthood, proving a causal link.
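To make the differential-privacy idea above concrete, here is a minimal sketch of the Laplace mechanism, its textbook building block. Everything in it is illustrative: the `dp_count` helper, the sample data and the choice of epsilon are assumptions for exposition, not the Census Bureau’s actual OnTheMap implementation.

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.5, rng=None):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon satisfies epsilon-differential privacy for this query.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical commute distances in km; the published count is noisy,
# so no single individual's presence can be inferred from it.
commutes_km = [12.0, 45.5, 8.2, 31.0, 62.3, 5.1]
print(dp_count(commutes_km, lambda km: km > 30.0, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; that accuracy cost is exactly what noise-intolerant analyses such as Chetty’s cannot absorb.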

Secure multiparty computation is a technique that attempts to address this issue by allowing multiple data holders to analyse parts of the total data set, without revealing the underlying data to each other. Only the results of the analyses are shared….(More)”
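The core trick can be illustrated with additive secret sharing, one building block of such protocols. This is a hedged toy sketch: the values, the number of parties and the modulus are invented for exposition, and real systems add considerable machinery for multiplication, comparisons and dishonest participants.

```python
import secrets

P = 2**61 - 1  # a large prime modulus; all share arithmetic is mod P

def share(value, n_parties=3):
    """Split `value` into additive shares; each share alone looks random."""
    parts = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(parts):
    """Recombine shares; only the sum of all shares reveals anything."""
    return sum(parts) % P

# Two data holders secret-share a sensitive count. The parties holding
# the shares add their pieces locally, so only the combined total is
# ever reconstructed and neither input is revealed to anyone.
a = share(1200)  # e.g. one hospital's patient count (hypothetical)
b = share(950)   # e.g. another hospital's count (hypothetical)
total_shares = [(x + y) % P for x, y in zip(a, b)]
assert reconstruct(total_shares) == 1200 + 950
```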

Personalising data for development


Wolfgang Fengler and Homi Kharas in the Financial Times: “When world leaders meet this week for the UN General Assembly to adopt the Sustainable Development Goals (SDGs), they will also call for a “data revolution”. In a world where almost everyone will soon have access to a mobile phone, where satellites will take high-definition pictures of the whole planet every three days, and where inputs from sensors and social media make up two thirds of the world’s new data, the opportunities to leverage this power for poverty reduction and sustainable development are enormous. We are also on the verge of major improvements in government administrative data and data gleaned from the activities of private companies and citizens, in big and small data sets.

But these opportunities have yet to materialize at any scale. In fact, despite the exponential growth in connectivity and the emergence of big data, policy making is rarely based on good data. Almost every report from development institutions starts with a disclaimer highlighting “severe data limitations”. Like castaways on an island, surrounded by water they cannot drink unless the salt is removed, today’s policy makers are in a sea of data that need to be refined and treated (simplified and aggregated) to make them “consumable”.

To make sense of big data, we used to depend on data scientists, computer engineers and mathematicians who would process requests one by one. But today, new programs and analytical solutions are putting big data at anyone’s fingertips. Tomorrow, it won’t be technical experts driving the data revolution but anyone operating a smartphone. Big data will become personal. We will be able to monitor and model social and economic developments faster, more reliably, more cheaply and on a far more granular scale. The data revolution will affect both the harvesting of data through new collection methods, and the processing of data through new aggregation and communication tools.

In practice, this means that data will become more actionable by becoming more personal, more timely and more understandable. Today, producing a poverty assessment and poverty map takes at least a year: it involves hundreds of enumerators, lengthy interviews and laborious data entry. In the future, thanks to hand-held connected devices, data collection and aggregation will happen in just a few weeks. Many more instances come to mind where new and higher-frequency data could generate development breakthroughs: monitoring teacher attendance, stocks and quality of pharmaceuticals, or environmental damage, for example…..

Despite vast opportunities, there are very few examples that have generated sufficient traction and scale to change policy and behaviour and create the feedback loops to further improve data quality. Two tools have personalised the abstract subjects of environmental degradation and demography (see table):

  • Monitoring forest fires. The World Resources Institute has launched Global Forest Watch, which enables users to monitor forest fires in near real time and to overlay relevant spatial information, such as property boundaries and ownership data, which can be developed into a model to anticipate the impact on air quality in affected areas of Indonesia, Singapore and Malaysia.
  • Predicting your own life expectancy. The World Population Program developed a predictive tool – www.population.io – showing each person’s place in the distribution of world population and corresponding statistical life expectancy. In just a few months, this prototype attracted some 2m users who shared their results more than 25,000 times on social media. The traction of the tool resulted from making demography personal and converting an abstract subject matter into a question of individual ranking and life expectancy.

A new Global Partnership for Sustainable Development Data will be launched at the time of the UN General Assembly….(More)”

Addressing Inequality and the ‘Data Divide’


Daniel Castro at the US Chamber of Commerce Foundation: “In the coming years, communities across the nation will increasingly rely on data to improve quality of life for their residents, such as by improving educational outcomes, reducing healthcare costs, and increasing access to financial services. However, these opportunities require that individuals have access to high-quality data about themselves and their communities. Should certain individuals or communities not routinely have data about them collected, distributed, or used, they may suffer social and economic consequences. Just as the digital divide has held back many communities from reaping the benefits of the modern digital era, a looming “data divide” threatens to stall the benefits of data-driven innovation for a wide swathe of America. Given this risk, policymakers should make a concerted effort to combat data poverty.

Data already plays a crucial role in guiding decision making, and it will only become more important over time. In the private sector, businesses use data for everything from predicting inventory demand to responding to customer feedback to determining where to open new stores. For example, an emerging group of financial service providers use non-traditional data sources, such as an individual’s social network, to assess credit risk and make lending decisions. And health insurers and pharmacies are offering discounts to customers who use fitness trackers to monitor and share data about their health. In the public sector, data is at the heart of important efforts like improving patient safety, cutting government waste, and helping children succeed in school. For example, public health officials in states like Indiana and Maryland have turned to data science in an effort to reduce infant mortality rates.

Many of these exciting advancements are made possible by a new generation of technologies that make it easier to collect, share, and disseminate data. In particular, the Internet of Everything is creating a plethora of always-on devices that record and transmit a wealth of information about our world and the people and objects in it. Individuals are using social media to create a rich tapestry of interactions tied to particular times and places. In addition, government investments in critical data systems, such as statewide databases to track healthcare spending and student performance over time, are integral to efforts to harness data for social good….(More)”

Can Yelp Help Government Win Back the Public’s Trust?


Tod Newcombe at Governing: “Look out, DMV, IRS and TSA. Yelp, the popular review website that’s best known for its rants or cheers regarding restaurants and retailers, is about to make it easier to review and rank government services.

Last month, Yelp and the General Services Administration (GSA), which manages the basic functions of the federal government, announced that government workers will soon be able to read and respond to their agencies’ Yelp reviews — and, hopefully, incorporate the feedback into service improvements.

At first glance, the news might not seem so special. There already are Yelp pages for government agencies like Departments of Motor Vehicles, which have been particularly popular. San Francisco’s DMV office, for example, has received more than 450 reviews and has a three-star rating. But federal agencies and workers haven’t been allowed to respond to the reviewers, nor have they been able to collect data from the pages, because Yelp hadn’t been approved by the GSA. The agreement changes that situation, also making it possible for agencies to set up new Yelp pages….

Yelp has been posting online reviews about restaurants, bars, nail salons and other retailers since 2004. Despite its reputation as a place to vent about bad service, more than two-thirds of the 82 million reviews posted since Yelp started have been positive, with most rated at either four or five stars, according to the company’s website. And when businesses boost their Yelp rating by one star, revenues can increase by as much as 9 percent, according to a 2011 study by Harvard Business School Professor Michael Luca.

Now the public sector is about to start paying more attention to those rankings. More importantly, agencies will find out whether engaging the public in a timely fashion changes citizens’ perception of government.

While all levels of government have become active with social media, direct interaction between an agency and citizens is still the exception rather than the rule. Agencies typically use Facebook and Twitter to inform followers about services or to provide information updates, not as a feedback mechanism. That’s why having a more direct connection between the comments on a Yelp page and a government agency represents a shift in engagement….(More)”

The tools of social change: A critique of techno-centric development and activism


Paper by Jan Servaes and Rolien Hoyng in New Media and Society: “Generally, the literatures on Information and Communication Technologies for Development (ICT4D) and on networked resistance are evolving in isolation from one another. This article aims to integrate these literatures in order to critically review differences and similarities in the techno-centric conceptions of agency and social change by political adversaries that are rooted in their socio-technical practices. We repurpose the critique of technological determinism to develop a multi-layered conception of agency that contains three interrelated dimensions: (1) “access” versus “skill” and the normative concept of inclusion; (2) fixed “system” versus “open-ended network” and savoir vivre; and (3) “institution” versus “extra-institutional network” and political efficacy. Building on our critique, we end by exploring the political possibilities at the intersections of conventional institutions or communities and emerging, extra-institutional networked formations…(More)”

Civic Jazz in the New Maker Cities


Peter Hirshberg at Techonomy: “Our civic innovation movement is about 6 years old. It began when cities started opening up data to citizens, journalists, public-sector companies, non-profits, and government agencies. Open data is an invitation: it’s something to go to work on – both to innovate and to create a more transparent environment about what works and what doesn’t. I remember when we first opened data in SF and began holding conferences and hackathons. In short order we saw a community emerge with remarkable capacity to contribute to, tinker with, hack, explore and improve the city.

Early on this took the form of visualizing data, like crime patterns in Oakland. This was followed by engagement: “Look, the police are skating by and not enforcing prostitution laws. Let’s call them on it!” Civic hackathons brought together journalists, software developers, hardware people, and urbanists. I recall when artists teamed with the Arup engineering firm to build noise sensors and deployed them in the Tenderloin neighborhood (with absolutely no permission from anybody). Noise was an issue. How could you understand the problem unless you measured it?

Something as wonky as an API invited people in, at which point a sense of civic possibility and wonder set in. Suddenly whole swaths of the city were working on the city. During the SF elections four years ago, Gray Area Foundation for the Arts (which I chair) led a project with candidates, bureaucrats, and hundreds of volunteers for a summer-long set of hackathons and projects. We were stunned that so many people would come together and collaborate so broadly. It was a movement, fueled by a sense of agency and informed by social media. Today, cities are competing on innovation.

All this has been accelerated by startups, incubators, and the economy’s whole open innovation conversation. Remarkably, we now see capital flowing in to support urban and social ventures where we saw none just a few years ago. The accelerator Tumml in SF is a premier example, but there are similar efforts in many cities.

This initial civic innovation movement was focused on apps and data, a relatively easy place to start. With such an approach you’re not contending for real estate or creating something that might gentrify neighborhoods. Today this movement is at work on how we design the city itself.  As millennials pour in and cities are where most of us live, enormous experimentation is at play. Ours is a highly interdisciplinary age, mixing new forms of software code and various physical materials, using all sorts of new manufacturing techniques.

Brooklyn is a great example.  A few weeks ago I met with Bob Bland, CEO of Manufacture New York. This ambitious 160,000 square foot public/private partnership is reimagining the New York fashion business. In one place it co-locates contract manufacturers, emerging fashion brands and advanced fashion research. Think wearables, sensors, smart fabrics, and the application of advanced manufacturing to fashion. By bringing all these elements under one roof, the supply chain can be compressed, sped-up, and products made more innovative.

New York City’s Economic Development office envisions a local urban supply chain that can offer a scalable alternative to the giant extended global one. In fashion it makes more and more sense for brands to be located near their suppliers. Social media speeds up fashion cycles, so we’re moving beyond predictable seasons and looks specified ahead of time. Manufacturers want to place smaller orders more frequently, so they can take less inventory risk and keep current with trends.

When you put so much talent in one space, creativity flourishes. In fashion, unlike tech, there isn’t a lot of IP protection. So designers can riff off each other’s ideas and incorporate influences as artists do. What might be called stealing ideas in the software business is seen in fashion as jazz and a way to create a more interesting work environment.

A few blocks away is the Brooklyn Navy Yard, a mammoth facility at the center of New York’s emerging maker economy. …In San Francisco this urban innovation movement is working on the form of the city itself. Our main boulevard, Market Street, is to be reimagined, repaved, and made greener with far fewer private vehicles over the next two years. Our planning department, in concert with art organizations here, has made citizen-led urban prototyping the centerpiece of the planning process….(More)”

Public service coding: the BBC as an open software developer


Juan Mateos-Garcia at NESTA: “On Monday, the BBC published British, Bold, Creative, a paper where it put forward a vision for its future based on openness and collaboration with its audiences and the UK’s wider creative industries.

In this blog post, we focus on an area where the BBC is already using an open and collaborative model for innovation: software development.

The value of software

Although less visible to the public than its TV, radio and online content programming, the BBC’s software development activities may create value and drive innovation beyond the BBC, providing an example of how the corporation can put its “technology and digital capabilities at the service of the wider industry”.

Software is an important form of innovation investment that helps the BBC deliver new products and services, and become more efficient. One might expect that much of the software developed by the BBC would also be of value to other media and digital organisations. Such beneficial “spillovers” are encouraged by the BBC’s use of open source licensing, which enables other organisations to download its software for free, change it as they see fit, and share the results.

Current debates about the future of the BBC – including the questions about its role in influencing the future technology landscape in the Government’s Charter Review Consultation – need to be informed by robust evidence about how it develops software, and the impact that this has.

In this blog post, we use data from the world’s biggest collaborative software development platform, GitHub, to study the BBC as an open software developer.

GitHub gives organisations and individuals hosting space to store their projects (referred to as “repos”), and tools to coordinate development. This includes the option to “fork” (copy) other users’ software, change it and redistribute the improvements. Our key questions are as follows (a sketch of how one might gather such data appears after the list):

  • How active is the BBC on GitHub?
  • How has its presence on GitHub changed over time?
  • What is the level of adoption (forking) of BBC projects on GitHub?
  • What types of open source projects is the BBC developing?
  • Where in the UK and in the rest of the world are the people interested in BBC projects based?
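One way to begin answering questions like these is through GitHub’s public REST API. The sketch below is an illustration rather than the method behind the analysis discussed here: the organisation name `bbc`, the unauthenticated requests (which GitHub rate-limits heavily) and the use of fork counts as an adoption proxy are all assumptions for exposition.

```python
import requests  # third-party library: pip install requests

def org_repo_stats(org="bbc"):
    """List an organisation's public repos, ranked by forks as an adoption proxy."""
    repos, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/orgs/{org}/repos",
            params={"per_page": 100, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page means everything has been fetched
            break
        repos.extend(batch)
        page += 1
    # Print the ten most-forked repos with their creation dates.
    for repo in sorted(repos, key=lambda r: r["forks_count"], reverse=True)[:10]:
        print(f"{repo['name']}: {repo['forks_count']} forks, "
              f"created {repo['created_at'][:10]}")
    return repos

org_repo_stats()
```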

But before tackling these questions, it is important to address a question often raised in relation to open source software:

Why might an organisation like the BBC want to share its valuable code on a platform like GitHub?

There are several possible reasons:

  • Quality: Opening up a software project attracts help from other developers, making it better
  • Adoption: Releasing software openly can help turn it into a widely adopted standard
  • Signalling: It signals the organisation as an interesting place to work and partner with
  • Public value: Some organisations release their code openly with the explicit goal of creating public value

The webpage introducing TAL (Television Application Layer), a BBC project on GitHub, is a case in point: “Sharing TAL should make building applications on TV easier for others, helping to drive the uptake of this nascent technology. The BBC has a history of doing this and we are always looking at new ways to reach our audience.”…(More)

The impact of Open Data


GovLab/Omidyar Network: “…share insights gained from our current collaboration with Omidyar Network on a series of open data case studies. These case studies – 19 in total – are designed to provide a detailed examination of the various ways open data is being used around the world, across geographies and sectors, and to draw some over-arching lessons. The case studies are built from extensive research, including in-depth interviews with key participants in the various open data projects under study….

Ways in which open data impacts lives

Broadly, we have identified four main ways in which open data is transforming economic, social, cultural and political life, and hence improving people’s lives.

  • First, open data is improving government, primarily by helping tackle corruption, improving transparency, and enhancing public services and resource allocation.
  • Open data is also empowering citizens to take control of their lives and demand change; this dimension of impact is mediated by more informed decision making and new forms of social mobilization, both facilitated by new ways of communicating and accessing information.
  • Open data is also creating new opportunities for citizens and groups, by stimulating innovation and promoting economic growth and development.
  • Finally, open data is playing an increasingly important role in solving big public problems, primarily by allowing citizens and policymakers to engage in new forms of data-driven assessment and data-driven engagement.

Enabling Conditions

While these are the four main ways in which open data is driving change, we have seen wide variability in the amount and nature of impact across our case studies. Put simply, some projects are more successful than others, and some projects might be more successful in a particular dimension of impact and less successful in others.

As part of our research, we have therefore tried to identify some enabling conditions that maximize the positive impact of open data projects. These four stand out:

  • Open data projects are most successful when they are built not from the efforts of single organizations or government agencies, but when they emerge from partnerships across sectors (and even borders). The role of intermediaries (e.g., the media and civil society groups) and of “data collaboratives” is particularly important.
  • Several of the projects we have seen have emerged on the back of what we might think of as an open data public infrastructure – i.e., the technical backend and organizational processes necessary to enable the regular release of potentially impactful data to the public.
  • Clear open data policies, including well-defined performance metrics, are also essential; policymakers and political leaders have an important role in creating an enabling (yet flexible) legal environment that includes mechanisms for project assessments and accountability, as well as providing the type of high-level political buy-in that can empower practitioners to work with open data.
  • We have also seen that the most successful open data projects tend to be those that target a well-defined problem or issue. In other words, projects with maximum impact often meet a genuine citizen need.

Challenges

Impact is also determined by the obstacles and challenges that a project confronts. Some regions and some projects face a greater number of hurdles. These also vary, but we have found four challenges that appear most often in our case studies:

  • Projects in countries or regions with low capacity or “readiness” (indicated, for instance, by low Internet penetration rates or hostile political environments) typically fare less well.
  • Projects that are unresponsive to feedback and user needs are less likely to succeed than those that are flexible and able to adapt to what their users want.
  • Open data often exists in tension with privacy and security risks; often, the impact of a project is limited or harmed when it fails to take these risks into account and mitigate them.
  • Although open data projects are often “hackable” and cheap to get off the ground, the most successful do require investments – of time and money – after their launch; inadequate resource allocation is one of the most common reasons for a project to fail.

These lists of impacts, enabling factors and challenges are, of course, preliminary. We continue to refine our research and will include a final set of findings along with our final report….(More)

Open Budget Data: Mapping the Landscape


Jonathan Gray at Open Knowledge: “We’re pleased to announce a new report, “Open Budget Data: Mapping the Landscape” undertaken as a collaboration between Open Knowledge, the Global Initiative for Financial Transparency and the Digital Methods Initiative at the University of Amsterdam.

Download the PDF.

The report offers an unprecedented empirical mapping and analysis of the emerging issue of open budget data, which has appeared as ideals from the open data movement have begun to gain traction amongst advocates and practitioners of financial transparency.

In the report we chart the definitions, best practices, actors, issues and initiatives associated with the emerging issue of open budget data in different forms of digital media.

In doing so, our objective is to enable practitioners – in particular civil society organisations, intergovernmental organisations, governments, multilaterals and funders – to navigate this developing field and to identify trends, gaps and opportunities for supporting it.

How public money is collected and distributed is one of the most pressing political questions of our time, influencing the health, well-being and prospects of billions of people. Decisions about fiscal policy affect everyone, determining everything from the resourcing of essential public services to the capacity of public institutions to take action on global challenges such as poverty, inequality or climate change.

Digital technologies have the potential to transform the way that information about public money is organised, circulated and utilised in society, which in turn could shape the character of public debate, democratic engagement, governmental accountability and public participation in decision-making about public funds. Data could play a vital role in tackling the democratic deficit in fiscal policy and in supporting better outcomes for citizens….(More)”