Nonsectarian Welfare Statements

New Paper by Cass Sunstein: “How can we measure whether national institutions in general, and regulatory institutions in particular, are dysfunctional? A central question is whether they are helping a nation’s citizens to live good lives. A full answer to that question would require a great deal of philosophical work, but it should be possible to achieve an incompletely theorized agreement on a kind of nonsectarian welfarism, emphasizing the importance of five variables: subjective well-being, longevity, health, educational attainment, and per capita income. In principle, it would be valuable to identify the effects of new initiatives (including regulations) on all of these variables. In practice, it is not feasible to do so; assessments of subjective well-being present particular challenges. In their ideal form, Regulatory Impact Statements should be seen as Nonsectarian Welfare Statements, seeking to identify the consequences of regulatory initiatives for various components of welfare. So understood, they provide reasonable measures of regulatory success or failure, and hence a plausible test of dysfunction. There is a pressing need for improved evaluations, including both randomized controlled trials and ex post assessments.”

The Other Side of Open is Not Closed

Dazza Greenwood: “Impliedly, the opposite of “open” is “closed,” but the other side of open data, open APIs and open access is usually still about enabling access, only when allowed or required. Open government also needs to include adequate methods to access and work with data and other resources that are not fully open. In fact, much (most?) high-value, mission-critical and societally important data is access-restricted in some way. If a data set is not fully public record, then a good practice is to think of it as “protected” and to ensure access according to proper controls.
As a metaphorical illustration, you could look at an open data system like a village square or agora that is architected and intended to be broadly accessible. On the other side of the spectrum, you could see a protected data system more like a castle or garrison, that is architected to be secure from intruders but features guarded gates and controlled access points in order to function.
In fact, this same conceptual approach applies well beyond data and includes everything you could consider a resource on the Internet. In other words, any asset, service, process or other item that can exist at a URL (or URI) is a resource and can be positioned somewhere on a spectrum from openly accessible to access protected. It is easy to forget that the “R” in URL stands for “Resource” and the whole wonderful web connects to resources of every nature and description. Data – structured, raw or otherwise – is just the tip of the iceberg.
Resources on the web could be apps and other software, or large-scale enterprise network services, or just a single text file with a few lines of HTML. The concept of enabling permissioned access to “protected resources” on the web is the cornerstone of OAuth2 and is now being extended by the OpenID Connect standard, the User-Managed Access protocol and other specifications to enable a powerful array of REST-based authorization possibilities…”
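The open-to-protected spectrum Greenwood describes can be sketched in a few lines. This is a minimal illustration of the pattern, not any real API: the resource paths and the token are invented, and the token check stands in for a full OAuth2 bearer-token flow (RFC 6750), in which a protected resource returns 401 unless a valid token accompanies the request.

```python
# Sketch: every resource lives at a URI; some are fully open, others
# are "protected" and gated by an OAuth2-style bearer token.
# Paths and tokens below are hypothetical placeholders.

VALID_TOKENS = {"secret-token-abc"}  # stand-in for a real token store

RESOURCES = {
    "/data/public-budget.csv": {"open": True},
    "/data/health-records.csv": {"open": False},  # protected resource
}

def fetch(uri, bearer_token=None):
    """Return an HTTP-style status: open resources are always served;
    protected resources require a valid bearer token."""
    resource = RESOURCES.get(uri)
    if resource is None:
        return "404 Not Found"
    if resource["open"]:
        return "200 OK"
    if bearer_token in VALID_TOKENS:
        return "200 OK"
    return "401 Unauthorized"
```

The same URI scheme serves both ends of the spectrum; only the gate differs, which is the point of treating “protected” as the other side of “open.”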

Citizen science versus NIMBY?

Ethan Zuckerman’s latest blog: “Safecast is a remarkable project born out of a desire to understand the health and safety implications of the release of radiation from the Fukushima Daiichi nuclear power plant in the wake of the March 11, 2011 earthquake and tsunami. Unsatisfied with limited and questionable information about radiation released by the Japanese government, Joi Ito, Peter, Sean and others worked to design, build and deploy GPS-enabled geiger counters which could be used by concerned citizens throughout Japan to monitor alpha, beta and gamma radiation and understand what parts of Japan have been most affected by the Fukushima disaster.

The Safecast project has produced an elegant map that shows how complicated the Fukushima disaster will be for the Japanese government to recover from. While there are predictably elevated levels of radiation immediately around the Fukushima plant and in the 18 mile exclusion zones, there is a “plume” of increased radiation south and west of the reactors. The map is produced from millions of radiation readings collected by volunteers, who generally take readings while driving – Safecast’s bGeigie meter automatically takes readings every few seconds and stores them along with associated GPS coordinates for later upload to the server.
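The map-making step the post describes can be sketched as binning each GPS-stamped reading into a grid cell and averaging. This is a simplified illustration only: the record format below is a stand-in for Safecast's actual log format, and the counts-per-minute conversion factor is illustrative (the real factor depends on the detector tube).

```python
# Sketch: bGeigie-style readings (a counts-per-minute value plus GPS
# coordinates, logged every few seconds) binned into map cells.
# Record layout and conversion factor are illustrative assumptions.
from collections import defaultdict

CPM_PER_USVH = 334  # illustrative, tube-dependent conversion factor

def bin_readings(readings, cell_deg=0.1):
    """readings: iterable of (lat, lon, cpm) tuples.
    Returns {(cell_lat, cell_lon): average microsieverts/hour}."""
    cells = defaultdict(list)
    for lat, lon, cpm in readings:
        key = (round(lat / cell_deg) * cell_deg,
               round(lon / cell_deg) * cell_deg)
        cells[key].append(cpm / CPM_PER_USVH)
    return {k: sum(v) / len(v) for k, v in cells.items()}
```

Millions of drive-by readings reduce to one averaged value per cell, which is what makes a plume south and west of the reactors visible on the finished map.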
This long and thoughtful blog post about the progress of government decontamination efforts, the cost-benefit of those efforts, and the government’s transparency or opacity around cleanup gives a sense for what Safecast is trying to do: provide ways for citizens to check and verify government efforts and understand the complexity of decisions about radiation exposure. This is especially important in Japan, as there’s been widespread frustration over the failures of TEPCO to make progress on cleaning up the reactor site, leading to anger and suspicion about the larger cleanup process.
For me, Safecast raises two interesting questions:
– If you’re not getting trustworthy or sufficient information from your government, can you use crowdsourcing, citizen science or other techniques to generate that data?
– How does collecting data relate to civic engagement? Is it a path towards increased participation as an engaged and effective citizen?
To have some time to reflect on these questions, I decided I wanted to try some of my own radiation monitoring. I borrowed Joi Ito’s bGeigie and set off for my local Spent Nuclear Fuel and Greater-Than-Class C Low Level Radioactive Waste dry cask storage facility…

Projects like Safecast – and the projects I’m exploring this coming year under the heading of citizen infrastructure monitoring – have a challenge. Most participants aren’t going to uncover Ed Snowden-calibre information by driving around with a geiger counter or mapping wells in their communities. Lots of the data collected is going to reveal that governments and corporations are doing their jobs, as my data suggests. It’s easy to trace a path between collecting groundbreaking data and getting involved with deeper civic and political issues – but will collecting data showing that the local nuclear plant is apparently safe get me more involved with issues of nuclear waste disposal?
It just might. One of the great potentials of citizen science and citizen infrastructure monitoring is the possibility of reducing the exotic to the routine….”

Creating Networked Cities

New Report by Alissa Black and Rachel Burstein, New America Foundation: “In April 2013 the California Civic Innovation Project released a report, The Case for Strengthening Personal Networks in California Local Governments, highlighting the important role of knowledge sharing in the diffusion of innovations from one city or county to another, and identifying personal connections as a significant source of information when it comes to learning about and implementing innovations.
Based on findings from CCIP’s previous study, Creating Networked Cities makes recommendations on how local government leaders, professional associations, and foundation professionals might promote and improve knowledge sharing through developing, strengthening and leveraging their networks. Strong local government networks support the continual sharing and advancement of projects, emerging practices, and civic innovation… Download CCIP’s recommendations for strengthening local government networks and diffusing innovation here.”

Assessing Zuckerberg’s Idea That Facebook Could Help Citizens Re-Make Their Government

Gregory Ferenstein in TechCrunch: “Mark Zuckerberg has a grand vision that Facebook will help citizens in developing countries decide their own governments. It’s a lofty and partially attainable goal. While Egypt probably won’t let citizens vote for their next president with a Like, it is theoretically possible to use Facebook to crowdsource expertise. Governments around the world are experimenting with radical online direct democracy, but it doesn’t always work out.

Very briefly, Zuckerberg laid out his broad vision for e-government to Wired’s Steven Levy, while defending a new consortium to bring broadband to the developing world.

“People often talk about how big a change social media has been for our culture here in the U.S. But imagine how much bigger a change it will be when a developing country comes online for the first time ever. We use things like Facebook to share news and keep in touch with our friends, but in those countries, they’ll use this for deciding what kind of government they want to have. Getting access to health care information for the first time ever.”

When he references “deciding … government,” Zuckerberg could be talking about voting, sharing ideas, or crafting a constitution. We decided to assess the possibilities of them all….
For citizens in the exciting/terrifying position to construct a brand-new government, American-style democracy is one of many options. Britain, for instance, has a parliamentary system and no codified constitution. In other cases, a government may want to heed political scientists’ advice and develop a “consensus democracy,” where more than two political parties are incentivized to work collaboratively with citizens, business, and different branches of government to craft laws.
At least once, choosing a new style of democracy has been attempted through the Internet. After the global financial meltdown wrecked Iceland’s economy, the happy citizens of the grass-covered country decided to redo their government and solicit suggestions from the public (950 Icelanders chosen by lottery and general calls for ideas through social networks). After much press about Iceland’s “crowdsourced” constitution, it crashed miserably after most of the elected leaders rejected it.
Crafting law, especially a constitution, is legally complex; unless there is a systematic way to translate haphazard citizen suggestions into legalese, the results are disastrous.
“Collaborative drafting, at large scale, at low costs, and that is inclusive, is something that we still don’t know how to do,” says Tiago Peixoto, a World Bank Consultant on participatory democracy (and one of our Most Innovative People In Democracy).
Peixoto, who helps the Brazilian government conduct some of the world’s only online policymaking, says he’s optimistic that Facebook could be helpful, but he wouldn’t use it to draft laws just yet.
While technically it is possible for social networks to craft a new government, we just don’t know how to do it very well, and, therefore, leaders are likely to reject the idea. In other words, don’t expect Egypt to decide its future through Facebook likes.”

The Three Worlds of Governance: Arguments for a Parsimonious Theory of Quality of Government.

New Working Paper by Bo Rothstein for the Quality of Governance Institute: “It is necessary to conceptualize and provide better measures of good governance because, in contrast to democratization, empirical studies show that it has strong positive effects on measures of human well-being, social trust, life satisfaction, peace and political legitimacy. A central problem is that the term “governance” is conceptualized differently in three main approaches to governance, which has led to much confusion. To avoid this, the term quality of government (QoG) is preferred.
This paper argues for a parsimonious conceptualization of QoG built on the “Rawls-Machiavelli programme”. This is a combination of the Rawlsian understanding of what should be seen as a just political order and the empirical strategy used by Machiavelli stating what is possible to implement. It is argued that complex definitions are impossible to operationalize and that such a strategy would leave political science without a proper conceptualization as well as measures of the part of the state that is most important for humans’ well-being and political legitimacy. The theory proposed is that impartiality in the exercise of public power should be the basic norm for how QoG should be defined. The advantage of this strategy is that it does not include in the definition of QoG what we want to explain (efficiency, prosperity, administrative capacity and other “good outcomes”) and that recent empirical research shows that this theory can be operationalized and used to measure QoG in ways that have the predicted outcomes.”

Employing digital crowdsourced information resources: Managing the emerging information commons

New Paper by Robin Mansell in the International Journal of the Commons: “This paper examines the ways loosely connected online groups and formal science professionals are responding to the potential for collaboration using digital technology platforms and crowdsourcing as a means of generating data in the digital information commons. The preferred approaches of each of these groups to managing information production, circulation and application are examined in the light of the increasingly vast amounts of data that are being generated by participants in the commons. Crowdsourcing projects initiated by both groups in the fields of astronomy, environmental science and crisis and emergency response are used to illustrate some of the barriers and opportunities for greater collaboration in the management of data sets initially generated for quite different purposes. The paper responds to claims in the literature about the incommensurability of emerging approaches to open information management as practiced by formal science and many loosely connected online groups, especially with respect to authority and the curation of data. Yet, in the wake of technological innovation and diverse applications of crowdsourced data, there are numerous opportunities for collaboration. This paper draws on examples employing different social technologies of authority to generate and manage data in the commons. It suggests several measures that could provide incentives for greater collaboration in the future. It also emphasises the need for a research agenda to examine whether and how changes in social technologies might foster collaboration in the interests of reaping the benefits of increasingly large data resources for both shorter term analysis and longer term accumulation of useful knowledge.”

Mapping the Twitterverse

“What does your Twitter profile reveal about you? More than you know, according to Chris Weidemann. The GIST master’s student has developed an application that follows geospatial footprints.
You start your day at your favorite breakfast spot. When your order of strawberry waffles with extra whipped cream arrives, it’s too delectable not to share with your Twitter followers. You snap a photo with your smartphone and hit send. Then, it’s time to hit the books.
You tweet your friends that you’ll be at the library on campus. Later that day, palm trees silhouette a neon-pink sunset. You can’t resist. You tweet a picture with the hashtag #ILoveLA.
You may not realize that when you tweet those breezy updates and photos of food, you are sharing information about your location.
Chris Weidemann, a graduate student in the Geographic Information Science and Technology (GIST) online master’s program at USC Dornsife, investigated just how much public geospatial data was generated by Twitter users and how their information—available through Twitter’s application programming interface (API)—could potentially be used by third parties. His study was published in June 2013 in the International Journal of Geoinformatics.
Twitter has approximately 500 million active users, and reports show that 6 percent of users opt in to allow the platform to broadcast their location using global positioning technology with each tweet they post. That’s about 30 million people sending geo-tagged data out into the Twitterverse. In their tweets, people can choose whether their location is displayed as a city and state, as an address, or as their precise latitude and longitude.
That’s only part of their geospatial footprint. Information contained in a post may reveal a user’s location. Depending upon how the account is set up, profiles may include details about their hometown, time zone and language.”
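The article's arithmetic (6 percent of roughly 500 million users is about 30 million) and the opt-in location field can be sketched together. The tweet dictionaries below loosely mirror the GeoJSON-style "coordinates" field of Twitter's REST API (longitude listed first); the sample tweets themselves are invented.

```python
# Back-of-envelope check of the article's figures, plus a sketch of
# filtering geo-tagged tweets. Sample data is invented; the structure
# is loosely modeled on the Twitter REST API's "coordinates" field.

active_users = 500_000_000
opt_in_rate = 0.06
geo_users = round(active_users * opt_in_rate)  # about 30 million

def geotagged(tweets):
    """Yield (longitude, latitude) for tweets carrying coordinates."""
    for t in tweets:
        coords = t.get("coordinates")
        if coords:  # None unless the user opted in to location sharing
            yield tuple(coords["coordinates"])

sample = [
    {"text": "#ILoveLA",
     "coordinates": {"type": "Point",
                     "coordinates": [-118.28, 34.02]}},
    {"text": "at the library", "coordinates": None},
]
```

Even this toy filter shows why the footprint matters: a third party scanning a public timeline needs only a few lines of code to separate located users from everyone else.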

Linux Foundation Collaboration Gets Biological

eWeek: “The Linux Foundation is growing its roster of collaboration projects by expanding from the physical into the biological realm with the OpenBEL (Biological Expression Language). The Linux Foundation, best known as the organization that helps bring Linux vendors and developers together, is also growing its expertise as a facilitator for collaborative development projects…
OpenBEL got its start in June 2012 after being open-sourced by biotech firm Selventa. The effort now includes the participation of Foundation Medicine, AstraZeneca, the Fraunhofer Institute, Harvard Medical School, Novartis, Pfizer and the University of California at San Diego.
BEL offers researchers a language to clearly express scientific findings from the life sciences in a format that can be understood by computing infrastructure…
The Linux Foundation currently hosts a number of different collaboration projects, including the Xen virtualization project, the OpenDaylight software-defined networking effort, Tizen for mobile phone development, and OpenMAMA for financial services information, among others.
The OpenBEL project will be similar to existing collaboration projects in that the contributors to the project want to accelerate their work through collaborative development, McPherson explained.”

Government Is a Good Venture Capitalist

Wall Street Journal: “In a knowledge-intensive economy, innovation drives growth. But what drives innovation? In the U.S., most conservatives believe that economically significant new ideas originate in the private sector, through either the research-and-development investments of large firms with deep pockets or the inspiration of obsessive inventors haunting shabby garages. In this view, the role of government is to secure the basic conditions for honest and efficient commerce—and then get out of the way. Anything more is bound to be “wasteful” and “burdensome.”
The real story is more complex and surprising. For more than four decades, R&D magazine has recognized the top innovations—100 each year—that have moved past the conceptual stage into commercial production and sales. Economic sociologists Fred Block and Matthew Keller decided to ask a simple question: Where did these award-winning innovations come from?
The data indicated seven kinds of originating entities: Fortune 500 companies; small and medium enterprises (including startups); collaborations among private entities; government laboratories; universities; spinoffs started by researchers at government labs or universities; and a grab bag of other public and nonprofit agencies.
Messrs. Block and Keller randomly selected three years in each of the past four decades and analyzed the resulting 1,200 innovations. About 10% originated in foreign entities; the sociologists focused on the domestic innovations, more than 1,050.
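The bookkeeping behind these findings is a simple grouped count: sample the award winners, set aside the roughly 10 percent of foreign origin, and tally the domestic remainder across the seven origin categories. A sketch with invented placeholder records (not the actual R&D 100 data) illustrates the shape of the analysis.

```python
# Sketch of the Block-Keller tally: group sampled innovations by
# originating entity, excluding foreign entries. Records are
# invented placeholders, not the study's data.
from collections import Counter

innovations = [
    {"origin": "federal lab", "domestic": True},
    {"origin": "Fortune 500", "domestic": True},
    {"origin": "university", "domestic": False},  # foreign; excluded
    {"origin": "federal lab", "domestic": True},
]

domestic = [i for i in innovations if i["domestic"]]
by_origin = Counter(i["origin"] for i in domestic)
```

Comparing such tallies decade by decade is what surfaces the two trends the article reports: fewer Fortune 500 winners, and many more from federal labs and universities.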
Two of their findings stand out. First, the number of award winners originating in Fortune 500 companies—either working alone or in collaboration with others—has declined steadily and sharply, from an annual average of 44 in the 1970s to only nine in the first decade of this century.
Second, the number of top innovations originating in federal laboratories, universities or firms formed by former researchers in those entities rose dramatically, from 18 in the 1970s to 37 in the 1980s and 55 in the 1990s before falling slightly to 49 in the 2000s. Without the research conducted in federal labs and universities (much of it federally funded), commercial innovation would have been far less robust…”