The Three Worlds of Governance: Arguments for a Parsimonious Theory of Quality of Government


New Working Paper by Bo Rothstein for the Quality of Government Institute: “It is necessary to conceptualize and provide better measures of good governance because, in contrast to democratization, empirical studies show that it has strong positive effects on measures of human well-being, social trust, life satisfaction, peace and political legitimacy. A central problem is that the term “governance” is conceptualized differently in three main approaches to governance, which has led to much confusion. To avoid this, the term quality of government (QoG) is preferred.
This paper argues for a parsimonious conceptualization of QoG built on the “Rawls-Machiavelli programme”. This is a combination of the Rawlsian understanding of what should be seen as a just political order and the empirical strategy used by Machiavelli stating what is possible to implement. It is argued that complex definitions are impossible to operationalize and that such a strategy would leave political science without a proper conceptualization, as well as measures, of the part of the state that is most important for humans’ well-being and political legitimacy. The theory proposed is that impartiality in the exercise of public power should be the basic norm for how QoG should be defined. The advantage of this strategy is that it does not include in the definition of QoG what we want to explain (efficiency, prosperity, administrative capacity and other “good outcomes”) and that recent empirical research shows that this theory can be operationalized and used to measure QoG in ways that have the predicted outcomes.”

Employing digital crowdsourced information resources: Managing the emerging information commons


New Paper by Robin Mansell in the International Journal of the Commons: “This paper examines the ways loosely connected online groups and formal science professionals are responding to the potential for collaboration using digital technology platforms and crowdsourcing as a means of generating data in the digital information commons. The preferred approaches of each of these groups to managing information production, circulation and application are examined in the light of the increasingly vast amounts of data that are being generated by participants in the commons. Crowdsourcing projects initiated by both groups in the fields of astronomy, environmental science and crisis and emergency response are used to illustrate some of the barriers and opportunities for greater collaboration in the management of data sets initially generated for quite different purposes. The paper responds to claims in the literature about the incommensurability of emerging approaches to open information management as practiced by formal science and many loosely connected online groups, especially with respect to authority and the curation of data. Yet, in the wake of technological innovation and diverse applications of crowdsourced data, there are numerous opportunities for collaboration. This paper draws on examples employing different social technologies of authority to generate and manage data in the commons. It suggests several measures that could provide incentives for greater collaboration in the future. It also emphasises the need for a research agenda to examine whether and how changes in social technologies might foster collaboration in the interests of reaping the benefits of increasingly large data resources for both shorter term analysis and longer term accumulation of useful knowledge.”

Mapping the Twitterverse


Phys.org: “What does your Twitter profile reveal about you? More than you know, according to Chris Weidemann. The GIST master’s student has developed an application that follows geospatial footprints.
You start your day at your favorite breakfast spot. When your order of strawberry waffles with extra whipped cream arrives, it’s too delectable not to share with your Twitter followers. You snap a photo with your smartphone and hit send. Then, it’s time to hit the books.
You tweet your friends that you’ll be at the library on campus. Later that day, palm trees are silhouetted against a neon-pink sunset. You can’t resist. You tweet a picture with the hashtag #ILoveLA.
You may not realize that when you tweet those breezy updates and photos of food, you are sharing information about your location.
Chris Weidemann, a graduate student in the Geographic Information Science and Technology (GIST) online master’s program at USC Dornsife, investigated just how much public geospatial data was generated by Twitter users and how their information—available through Twitter’s application programming interface (API)—could potentially be used by third parties. His study was published in June 2013 in the International Journal of Geoinformatics.
Twitter has approximately 500 million active users, and reports show that 6 percent of users opt in to allow the platform to broadcast their location using global positioning technology with each tweet they post. That’s about 30 million people sending geo-tagged data out into the Twitterverse. In their tweets, people can choose whether their location is displayed as a city and state, an address, or their precise latitude and longitude.
That’s only part of their geospatial footprint. Information contained in a post may reveal a user’s location. Depending upon how the account is set up, profiles may include details about their hometown, time zone and language.”
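As a rough illustration of how easily such a footprint can be assembled, here is a minimal sketch (not Weidemann’s actual application) that pulls location traces from a public timeline via Twitter’s v1.1 REST API. The bearer token and account name are placeholders; the field names (coordinates, place, created_at) come from Twitter’s documented tweet object.

```python
# Minimal sketch: assemble a geospatial footprint from a public Twitter
# timeline. Credentials and account name are hypothetical placeholders.
import requests

BEARER_TOKEN = "YOUR-APP-TOKEN"  # placeholder, not a real credential
URL = "https://api.twitter.com/1.1/statuses/user_timeline.json"

def geospatial_footprint(screen_name, count=200):
    resp = requests.get(
        URL,
        params={"screen_name": screen_name, "count": count},
        headers={"Authorization": "Bearer " + BEARER_TOKEN},
    )
    resp.raise_for_status()
    footprint = []
    for tweet in resp.json():
        point = tweet.get("coordinates")  # exact GeoJSON point, if the user opted in
        place = tweet.get("place")        # coarser city/neighborhood information
        if point or place:
            footprint.append({
                "time": tweet["created_at"],
                # GeoJSON stores [longitude, latitude]; flip to (lat, lon)
                "lat_lon": tuple(point["coordinates"][::-1]) if point else None,
                "place": place["full_name"] if place else None,
            })
    return footprint
```

Even when the coordinates field is empty, the place field combined with profile details such as hometown, time zone and language can narrow a user’s location considerably.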

Public Policies, Made to Fit People


Richard Thaler in the New York Times: “I HAVE written here before about the potential gains to government from involving social and behavioral scientists in designing public policies. My enthusiasm comes in part from my experiences as an academic adviser to the Behavioral Insights Team created in Britain by Prime Minister David Cameron.

Thus I was pleased to hear reports that the White House is building a similar initiative here in the United States. Maya Shankar, a cognitive scientist and senior policy adviser at the White House Office of Science and Technology Policy, is coordinating this cross-agency group, called the Social and Behavioral Science Team; it is part of a larger effort to use evidence and innovation to promote government performance and efficiency. I am among a number of academics who have shared ideas with the administration about how research findings in social and behavioral science can improve policy.

It makes sense for social scientists to become more involved in policy, because many of society’s most challenging problems are, in essence, behavioral. Using social scientists’ findings to create plausible interventions, then testing their efficacy with randomized controlled trials, can improve — and sometimes save — people’s lives, all while reducing the need for more government spending to fix problems later.

Here are three examples of social science issues that have attracted the team’s attention…
THE 30-MILLION-WORD GAP One of society’s thorniest problems is that children from poor families start school lagging badly behind their more affluent classmates in readiness. By the age of 3, children from affluent families have vocabularies that are roughly double those of children from poor families, according to research published in 1995….
DOMESTIC VIOLENCE The team will primarily lend support and expertise to federal agency initiatives. One example concerns the effort to reduce domestic violence, a problem for which there is no quick fix….
HEALTH COMPLIANCE One reason for high health care costs is that patients fail to follow their treatment regimen….”

Inside Noisebridge: San Francisco’s eclectic anarchist hackerspace


From Gigaom: “Since its formation in 2007, Noisebridge has grown from a few people meeting in coffee shops to an overflowing space on Mission Street where members can pursue projects that even the maddest scientist would approve of…. When Noisebridge opened the doors of its first hackerspace location in San Francisco’s Mission district in 2008, it had nothing but a large table and a few chairs found on the street.
Today, it looks like a mad scientist has been methodically hoarding tools, inventions, art, supplies and a little bit of everything else for five years. The 350 people who come through Noisebridge each week have a habit of leaving a mark, whether by donating a tool or building something that other visitors add to bit by bit. Anyone can be a paid member or a free user of the space, and over the years they have built it into a place where you can code, sew, hack hardware, cook, build robots, woodwork, learn, teach and more.
The members really are mad scientists. Anything left out in the communal spaces is fair game to “hack into a giant robot,” according to co-founder Mitch Altman. Members once took a broken-down wheelchair and turned it into a brainwave-controlled robot named M.C. Hawking. Another person made pants with a built-in keyboard. The Spacebridge group has sent high-altitude balloons to near space, where they captured gorgeous videos of the Earth. And once a month, the Vegan Hackers teach their pupils how to make classic fare like sushi and dumplings out of vegan ingredients….”

Index: The Data Universe


The Living Library Index – inspired by the Harper’s Index – provides important statistics and highlights global trends in governance innovation. This installment focuses on the data universe and was originally published in 2013.

  • How much data exists in the digital universe as of 2012: 2.7 zettabytes*
  • Increase in the quantity of Internet data from 2005 to 2012: +1,696%
  • Percent of the world’s data created in the last two years: 90
  • Number of exabytes (=1 billion gigabytes) created every day in 2012: 2.5; that number doubles every month
  • Percent of the digital universe in 2005 created by the U.S. and western Europe vs. emerging markets: 48 vs. 20
  • Percent of the digital universe in 2012 created by emerging markets: 36
  • Percent of the digital universe in 2020 predicted to be created by China alone: 21
  • Percent of information in the digital universe created and consumed by consumers (video, social media, photos, etc.) in 2012: 68
  • Percent of that consumer-generated information for which enterprises have some liability or responsibility (copyright, privacy, compliance with regulations, etc.): 80
  • Amount included in the Obama Administration’s 2012 Big Data initiative: over $200 million
  • Amount the Department of Defense is investing annually on Big Data projects as of 2012: over $250 million
  • Data created per day in 2012: 2.5 quintillion bytes
  • How many terabytes* of data collected by the U.S. Library of Congress as of April 2011: 235
  • How many terabytes of data collected by Walmart per hour as of 2012: 2,560, or 2.5 petabytes*
  • Projected growth in global data generated per year, as of 2011: 40%
  • Number of IT jobs created globally by 2015 to support big data: 4.4 million (1.9 million in the U.S.)
  • Potential shortage of data scientists in the U.S. alone predicted for 2018: 140,000-190,000, in addition to 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions
  • Time needed to sequence the complete human genome (analyzing 3 billion base pairs) in 2003: ten years
  • Time needed in 2013: one week
  • The world’s annual effective capacity to exchange information through telecommunication networks in 1986, 2007, and (predicted) 2013: 281 petabytes, 65 exabytes, 667 exabytes
  • Projected share of digital information created annually that will either live in or pass through the cloud: one-third
  • Increase in data collection volume year-over-year in 2012: 400%
  • Increase in number of individual data collectors from 2011 to 2012: nearly double (over 300 data collection parties in 2012)

*1 zettabyte = 1 billion terabytes | 1 petabyte = 1,000 terabytes | 1 terabyte = 1,000 gigabytes | 1 gigabyte = 1 billion bytes
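For readers who want to cross-check the arithmetic behind these figures, here is a quick sanity check of the conversions in the footnote (decimal SI units assumed throughout):

```python
# Sanity-check the unit conversions in the footnote above (decimal SI units).
GB = 10**9           # 1 gigabyte = 1 billion bytes
TB = 1_000 * GB      # 1 terabyte = 1,000 gigabytes
PB = 1_000 * TB      # 1 petabyte = 1,000 terabytes
EB = 1_000 * PB      # 1 exabyte = 1 billion gigabytes
ZB = 10**9 * TB      # 1 zettabyte = 1 billion terabytes

assert 2.5e18 == 2.5 * EB     # "2.5 quintillion bytes" per day = 2.5 exabytes
print(2_560 * TB / PB)        # Walmart's 2,560 TB/hour -> 2.56, i.e. roughly 2.5 petabytes
print(2.7 * ZB / (235 * TB))  # 2012 digital universe vs. Library of Congress: ~11.5 million times larger
```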

Is Online Transparency Just a Feel-Good Sham?


Billy House in the National Journal: “It drew more than a few laughs in Washington. Not long after the White House launched its We the People website in 2011, where citizens could write online petitions and get a response if they garnered enough signatures, someone called for construction of a Star Wars-style Death Star.
With laudable humor, the White House dispatched Paul Shawcross, chief of the Science and Space Branch of the Office of Management and Budget, to explain that the administration “does not support blowing up planets.”
The incident caused a few chuckles, but it also made a more serious point: Years after politicians and government officials began using Internet surveys and online outreach as tools to engage people, the results overall have been questionable….
But skepticism over the value of these programs—and their genuineness—remains strong. Peter Levine, a professor at Tufts University’s Jonathan M. Tisch College of Citizenship and Public Service, said programs like online petitioning and citizen cosponsoring do not necessarily produce a real, representative voice for the people.
It can be “pretty easy to overwhelm these efforts with deliberate strategic action,” he said, noting that similar petitioning efforts in the European Union often find marijuana legalization as the most popular measure.”

Civic Innovation Fellowships Go Global


Some thoughts from Panthea Lee from Reboot: “In recent years, civic innovation fellowships have shown great promise to improve the relationships between citizens and government. In the United States, Code for America and the Presidential Innovation Fellows have demonstrated the positive impact a small group of technologists can have working hand-in-hand with government. With the launch of Code for All, Code for Europe, Code4Kenya, and Code4Africa, among others, the model is going global.
But despite the increasing popularity of civic innovation fellowships, there are few templates for how a “Code for” program can be adapted to a different context. In the US, the success of Code for America has drawn from a wealth of tech talent eager to volunteer skills, public and private support, and the active participation of municipal governments. Elsewhere, new “Code for” programs are surely going to have to operate within a different set of capacities and constraints.”

White House Expands Guidance on Promoting Open Data


NextGov: “White House officials have announced expanded technical guidance to help agencies make more data accessible to the public in machine-readable formats.
Following up on President Obama’s May executive order linking the pursuit of open data to economic growth, innovation and government efficiency, two budget and science office spokesmen on Friday published a blog post highlighting new instructions and answers to frequently asked questions.
Nick Sinai, deputy chief technology officer at the Office of Science and Technology Policy, and Dominic Sale, supervisory policy analyst at the Office of Management and Budget, noted that the policy now in place means that all “newly generated government data will be required to be made available in open, machine-readable formats, greatly enhancing their accessibility and usefulness, while ensuring privacy and security.”
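As a concrete illustration of what “open, machine-readable formats” means in practice, the sketch below writes the kind of /data.json catalog entry agencies publish under the Project Open Data metadata schema that accompanies the executive order. The agency, dataset and URL are invented for illustration; only the field names follow the published schema.

```python
# Hypothetical sketch: a minimal machine-readable catalog entry of the kind
# agencies publish as /data.json under the Project Open Data schema.
# The agency, dataset, and URL below are invented for illustration.
import json

catalog = [{
    "title": "Example Agency Permit Records",
    "description": "Illustrative dataset entry; not a real agency release.",
    "keyword": ["permits", "open data"],
    "modified": "2013-08-01",
    "publisher": "Example Agency",
    "accessLevel": "public",
    "identifier": "example-agency-permits-2013",
    "distribution": [{
        "accessURL": "https://data.example-agency.gov/permits.csv",
        "format": "text/csv",
    }],
}]

with open("data.json", "w") as f:
    json.dump(catalog, f, indent=2)
```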

A collaborative way to get to the heart of 3D printing problems


PSFK: “Because most of us see only the finished product of 3D printing projects, it’s easy to forget that things can, and do, go wrong with this miracle technology.
3D printing is constantly evolving, reaching exciting new heights and touching every industry you can think of, but all this progress has left a trail of mangled plastic and devastated machines in its wake.
The Art of 3D Print Failure is a Flickr group that aims to document this failure because, after all, mistakes are how we learn and how we make sure the same thing doesn’t happen the next time around. It can also help those who are new to 3D printing avoid the same mistakes before they even make them!”