New paper by Knud Böhle and Ulrich Riehm in First Monday: “The implementation of e–petition systems holds the promise to increase the participative and deliberative potential of petitions. The most ambitious e–petition systems allow for electronic submission, make publicly available the petition text, related documents and the final decision, allow supporting a petition by electronically co–signing it, and provide electronic discussion forums. Based on a comprehensive survey (2010/2011) of parliamentary petition bodies at the national level covering the 27 member states of the European Union (EU) plus Norway and Switzerland, the state of public e–petitioning in the EU is presented, and the relevance of e–petition systems as a means of political participation is discussed….
The most interesting finding is that some petition systems — by leveraging the potential of the Internet — considerably further the involvement of the public. This happens in two ways: first, by nudging e–petition systems in the direction of lightweight instruments of direct democracy, and second, by making the institution itself more open, transparent, accountable, effective, and responsive through the involvement of the public. Both development paths might also raise expectations that the petition body eventually cannot meet without more substantial transformations of the institution. Empirically, we have seen almost nothing yet.”
Targeting Transparency
New paper by David Weil, Mary Graham, and Archon Fung in Science Magazine: “When rules, taxes, or subsidies prove impractical as policy tools, governments increasingly employ “targeted transparency,” compelling disclosure of information as an alternative means of achieving specific objectives. For example, the U.S. Affordable Care Act of 2010 requires calories be posted on menus to enlist both restaurants and patrons in the effort to reduce obesity. It is crucial to understand when and how such targeted transparency works, as well as when it is inappropriate. Research about its use and effectiveness has begun to take shape, drawing on social and behavioral scientists, economists, and legal scholars. We explore questions central to the performance of targeted transparency policies.
Targeted transparency differs from broader “right-to-know” and “open-government” policies that span from the 1966 Freedom of Information Act to the Obama Administration’s “open-government” initiative encouraging officials to make existing data sets readily available and easy to parse as an end in itself (1, 2). Targeted transparency offers a more focused approach often used to introduce new scientific evidence of public risks into market choices. Government compels companies or agencies to disclose information in standardized formats to reduce specific risks, to ameliorate externalities arising from a failure of consumers or producers to fully consider social costs associated with a product, or to improve provision of public goods and services. Such policies are more light-handed than conventional regulation, relying on the power of information rather than on enforcement of rules and standards or financial inducements….”
See also the Transparency Policy Project at http://transparencypolicy.net/
Does transparency lead to trust? Some evidence on the subject.
Tiago Peixoto at DemocracySpot: “As open government gains traction in the international agenda, it is increasingly common to come across statements that assume a causal relationship in which transparency leads to trust in government. But to what extent are claims that transparency leads to trust backed up by evidence?
Judging from some recent publications on the subject, such a relationship is not as straightforward as advocates would like. In fact, in a number of cases, the evidence points in another direction: that is, transparency may ultimately decrease trust.
Below is a brief overview of research that has been carried out on the subject…
Surely, transparency remains an essential – although quite insufficient – ingredient of accountability. On the trust issue, one could easily think of a number of scenarios in which it is actually better that citizens do not trust their governments. In fact, systems of checks and balances and oversight institutions are not specifically conceived under the logic of trust. Quite the contrary: such institutional designs assume some level of suspicion vis-à-vis governments, as put in Federalist No. 51, “If angels were to govern men, neither external nor internal controls on government would be necessary.”
Granted, in some cases a perfect world in which citizens trust their governments may well be desirable. It may even be that transparency leads – in the long run – to increased trust: a great way to sell transparency to governments. But if we want to walk the talk of evidence-based policymaking, we may consider dropping the trust rhetoric. At least for now.”
Technocracy within Representative Democracy
Christina Ribbhagen’s new paper on “Technocracy within Representative Democracy: Technocratic Reasoning and Justification among Bureaucrats and Politicians”: “How can you possibly have ‘Technocracy within Representative Democracy’, as suggested in the title of this thesis? Shouldn’t the correct title be ‘Technocracy or Representative Democracy’, the sceptic might ask? Well, if technocracy is strictly defined as rule by an elite of (technical) experts, the sceptic obviously has a point. Democracy means rule by the people (demos) and not rule by (technical) experts. However, in tune with Laird (1990; see also Fischer 2000), I argue that merely establishing the absence of a simple technocratic ruling class is only half the story; instead a more subtle interpretation of technocracy is needed.
Laird (1990, p. 51) continues his story by stating that: ‘The problem of technocracy is the problem of power relations and how those relations are affected by the importance of esoteric knowledge in modern society. The idea that such knowledge is important is correct. The idea that it is important because it leads to the rise of a technically skilled ruling class is mistaken. The crucial issue is not who gains power but who loses it. Technocracy is not the rise of experts, it is the decline of citizens’. Or as formulated by Fischer (2000), ‘One of the most important contemporary functions of technocratic politics, it can be argued, rests not so much on its ascent to power (in the traditional sense of the term) as on the fact that its growing influence shields the elites from political pressure from below’. The crucial issue for the definition of technocracy then is not who governs; rather, it lies in the mode of politics. As argued by Fischer (2000), too often writers have dismissed the technocratic thesis on the grounds that experts remain subordinate to top-level economic and political elites. A consequence of this, he continues, is that this argument ‘overlooks the less visible discursive politics of technocratic expertise. Not only does the argument fail to appreciate the way this technical, instrumental mode of inquiry has come to shape our thinking about public problems, but it neglects the ways these modes of thought have become implicitly embedded in our institutional discourses and practices’ (p. 17). Thus, technocracy here should not be understood as ‘rule by experts’, but rather as ‘government by technique’, focusing on the procedures and content of politics, suggesting that technocratic reasoning and justification have gained ground and dominate the making of public policy (Boswell, 2009; Fischer, 1990; Meynaud, 1969; Radaelli, 1999b). To be sure, indirectly this will have consequences as to who will win or lose power.
A policy issue or process that is technocratically framed is likely to disempower those lacking information and expertise within the area (Fischer, 1990; Laird, 1990), while supplying those with information and expertise with a ‘technocratic key’ (Uhrwing, 2001) leading to the door of political power.”
The “audience” as participative, idea-generating, decision-making citizens: will they transform government?
Filling Power Vacuums in the New Global Legal Order
Paper by Anne-Marie Slaughter in the latest issue of Boston College Law Review: “In her Keynote Address at the October 12, 2012 Symposium, Filling Power Vacuums in the New Global Legal Order, Anne-Marie Slaughter describes the concepts of “power over” and “power with” in the global world of law. Power over is the ability to achieve the outcomes you want by commanding or manipulating others. Power with is the ability to mobilize people to do things. In the globalized world, power operates much more through power with than through power over. In contrast to the hierarchical power of national governments, globally it is more important to be central in the horizontal system of multiple sovereigns. This Address illustrates different examples of power over and power with. It concludes that in this globalized world, lawyers are ideally trained and positioned to exercise power.”
How legislatures work – and should work – as public space
Paper by John Parkinson in latest issue of Democratization: “In a democracy, legislatures are not only stages for performances by elected representatives; they are also stages for performances by other players in the public sphere. This article argues that while many legislatures are designed and built as spaces for the public to engage with politics, and while democratic norms require some degree of access, increasingly what are termed “purposive publics” are being superseded by groups who are only publics in an aggregative, accidental sense. The article begins with a conceptual analysis of the ways in which legislatures can be thought of as public spaces, and the in-principle access requirements that follow from them. It then draws on interviews and observational fieldwork in eleven capital cities to discover whether the theoretical requirements are met in practice, revealing further tensions. The conclusions are that accessibility is important; is being downgraded in important ways; but also that access norms stand in tension with the requirement that legislatures function as working buildings if they are to retain their symbolic value. The article ends with two “modest proposals”, one concerning the design of the plazas in front of legislatures, the other concerning a role for the wider public in legislative procedure.”
OECD: Open Government Data – Towards Empirical Analysis of Open Government Data Initiatives
OECDiLibrary: “Open Government Data (OGD) initiatives, and in particular the development of OGD portals, have proliferated since the mid-2000s at both central and local government levels in OECD and non-OECD countries. Understanding the preconditions that enable the efficient and effective implementation of these initiatives is essential for achieving their overall objectives. This is especially true in terms of the role played by OGD in relation to Open Government policies in general.
This paper highlights the main principles, concepts and criteria framing open government data initiatives and the issues challenging their implementation. It underlines the opportunities that OGD and data analytics may offer policy makers, while providing a note of caution on the challenges this agenda poses for the public sector.
Finally, the overall analysis of key concepts and issues aims to pave the way for an empirical analysis of OGD initiatives. So far, little has been done to analyse and prove the impact and accrued value of these initiatives. The paper suggests a methodology comprising an analytical framework for OGD initiatives (to be applied to ex post and ex ante analysis of initiatives) and a related set of data to be collected across OECD countries. The application of the analytical framework and the collection of data would enable the acquisition of a solid body of evidence that could ultimately lead to mapping initiatives across OECD countries (i.e. a typology of initiatives) and developing a common set of metrics to consistently assess impact and value creation within and across countries.”
Why Big Data Is Not Truth
Quentin Hardy in the New York Times: “Kate Crawford, a researcher at Microsoft Research, calls the problem “Big Data fundamentalism” — the idea that with larger data sets, we get closer to objective truth. Speaking at a conference in Berkeley, Calif., on Thursday, she identified what she calls “six myths of Big Data.”
Myth 1: Big Data is New
In 1997, there was a paper that discussed the difficulty of visualizing Big Data, and in 1999, a paper that discussed the problems of gaining insight from the numbers in Big Data. That indicates that two prominent issues in Big Data today, display and insight, had been around for a while…
Myth 2: Big Data Is Objective
Over 20 million Twitter messages about Hurricane Sandy were posted last year. … “These were very privileged urban stories.” And some people, privileged or otherwise, put information like their home addresses on Twitter in an effort to seek aid. That sensitive information is still out there, even though the threat is gone.
Myth 3: Big Data Doesn’t Discriminate
“Big Data is neither color blind nor gender blind,” Ms. Crawford said. “We can see how it is used in marketing to segment people.” …
Myth 4: Big Data Makes Cities Smart
… moving cities toward digital initiatives like predictive policing, or creating systems where people are seen, whether they like it or not, can promote lots of tension between individuals and their governments.
Myth 5: Big Data Is Anonymous
A study published in Nature last March looked at 1.5 million phone records that had personally identifying information removed. It found that just four data points of when and where a call was made could identify 95 percent of individuals. …
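Why so few points suffice can be made concrete with a toy simulation. The sketch below is a hypothetical illustration, not the study's dataset or method: it generates synthetic (hour, cell-tower) traces for 500 simulated users and measures how often a handful of randomly chosen points from a user's own trace match that user alone.

```python
import random

random.seed(0)

# Toy "call records": each simulated user visits a set of (hour, tower) points.
# All parameters here are made up for illustration.
N_USERS, N_POINTS, N_HOURS, N_TOWERS = 500, 20, 24, 50

traces = {
    u: {(random.randrange(N_HOURS), random.randrange(N_TOWERS))
        for _ in range(N_POINTS)}
    for u in range(N_USERS)
}

def unique_fraction(k):
    """Fraction of users pinned down by k randomly chosen points of their own trace."""
    hits = 0
    for u, trace in traces.items():
        sample = random.sample(sorted(trace), min(k, len(trace)))
        # Which users' traces contain every sampled point?
        matches = [v for v, t in traces.items() if all(p in t for p in sample)]
        if matches == [u]:
            hits += 1
    return hits / N_USERS

for k in (1, 2, 4):
    print(f"{k} points -> {unique_fraction(k):.0%} of users unique")
```

Even in this tiny synthetic world, uniqueness climbs steeply with the number of known points, which is the intuition behind the study's result.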
Myth 6: You Can Opt Out
… given the ways that information can be obtained in these big systems, “what are the chances that your personal information will never be used?”
Before Big Data disappears into the background as another fact of life, Ms. Crawford said, “We need to think about how we will navigate these systems. Not just individually, but as a society.”
Life and Death of Tweets Not so Random After All
MIT Technology Review: “MIT assistant professor Tauhid Zaman and two other researchers (Emily Fox at the University of Washington and Eric Bradlow at the University of Pennsylvania’s Wharton School) have come up with a model that can predict, minutes after a tweet is posted, how many times it will ultimately be retweeted. The model was created by collecting retweets on a slew of topics and looking at when each original tweet was posted and how fast it spread. That knowledge is used to predict how popular a new tweet will be from how many times it is retweeted shortly after it is first posted.
The researchers’ findings were explained in a paper submitted to the Annals of Applied Statistics. In the paper, the authors note that “understanding retweet behavior could lead to a better understanding of how broader ideas spread in Twitter and in other social networks,” and such data may be helpful in a number of areas, like marketing and political campaigning.
You can check out the model here.”
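The core idea, extrapolating a tweet's final retweet count from its early retweets, can be sketched with a deliberately simple stand-in. The researchers' actual model is a Bayesian time-series model; the ratio estimator and the sample numbers below are hypothetical, shown only to make the extrapolation idea concrete.

```python
import statistics

# Hypothetical training data: (retweets in the first 10 minutes, final
# retweet count) for past tweets. Real systems would collect this at scale.
history = [(5, 60), (12, 150), (3, 30), (20, 260), (8, 90)]

# Learn a typical early-to-final multiplier from past tweets.
multiplier = statistics.median(final / early for early, final in history)

def predict_final(early_count):
    """Predict a tweet's final retweets from its early retweet count."""
    return early_count * multiplier

print(predict_final(10))  # prints 120.0 (median multiplier is 12.0)
```

A median rather than a mean keeps the multiplier robust to the occasional viral outlier, which is one reason early-count extrapolation is more stable than it might first appear.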