NSF-funded Research on Elections and Cooperation


John Sides @ the Monkey Cage: “Much of politics is about collective action, whereby groups of people need to cooperate in order to produce an outcome.  One of the biggest challenges is getting people to cooperate in providing a public good, which by its nature can be shared by everyone regardless of whether they’ve cooperated in the first place.
One way to enforce cooperation is via some central authority that’s external to the group (like a government).  Another way, prominent in Elinor Ostrom’s work, is via internal policing by peers within the group.
In this NSF-funded study, Guy Grossman and Delia Baldassarri show that a third way can work as well: developing a leadership or authority structure within the group itself.  More importantly, they show that the success of such an authority depends on politics itself.  Leaders need to be elected to induce cooperation. The study was conducted among Ugandans who are members of farmer organizations and who directly experience the challenges of cooperating to produce public goods.  Grossman and Baldassarri examined not only how these people behaved when asked to play a simple “public goods game” in a quasi-laboratory setting, but also how they actually behaved within their farmer organization in real life.  In both contexts, members cooperated significantly more when leaders were democratically elected—as was true in one experimental condition of the public goods game—or when they perceived the leadership of their farmer organization as more legitimate.
For more in this week’s presentation of NSF-funded research recently published in the American Journal of Political Science, see here, here, and here.”
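As background, the “public goods game” used in the study has a simple payoff structure. Here is a minimal sketch in Python; the endowment, multiplier, and group size are illustrative values, not parameters from the Grossman–Baldassarri experiments.

```python
# A minimal sketch of a standard public goods game payoff (illustrative
# parameters, not those of the study above). Each player keeps whatever
# they don't contribute; the pot is multiplied and split equally, so a
# free rider earns more than a cooperator in any single round.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Return each player's payoff given everyone's contributions."""
    n = len(contributions)
    pot = multiplier * sum(contributions)  # pooled contributions grow
    share = pot / n                        # split equally, cooperator or not
    return [endowment - c + share for c in contributions]

# Three cooperators and one free rider: the free rider comes out ahead.
print(public_goods_payoffs([10, 10, 10, 0]))  # [15.0, 15.0, 15.0, 25.0]
```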

Just the Facts (on the value of IT to society)


New publication from the Information Technology and Innovation Foundation (ITIF):

“A prominent economist once stated, “computer chips, potato chips, what’s the difference.” The short answer is “a lot.” Fifty-five years after the invention of the integrated circuit and 28 years after the first dot-com website was registered, information and communications technology (IT) remains a central driver of innovation and prosperity.

This fact sheet lists 53 documented economic benefits of IT, from jobs and output to competitiveness and innovation. Read the Fact Sheet.”

What the Obama Campaign's Chief Data Scientist Is Up to Now


Alexis Madrigal in The Atlantic: “By all accounts, Rayid Ghani’s data work for President Obama’s reelection campaign was brilliant and unprecedented. Ghani probably could have written his own ticket to work at any company in the world, or simply collected speaking fees for a few years telling companies how to harness the power of data like the campaign did.
But instead, Ghani headed to the University of Chicago to bring sophisticated data analysis to difficult social problems. Working with the Computation Institute and the Harris School of Public Policy, Ghani will serve as the chief data scientist for the Urban Center for Computation and Data.”

Feel the force


The Economist: “Three new books look at power in the digital age…
To Save Everything, Click Here: The Folly of Technological Solutionism. By Evgeny Morozov. PublicAffairs; 415 pages; $28.99. Allen Lane; £20.
Who Owns the Future? By Jaron Lanier. Simon and Schuster; 397 pages; $28. Allen Lane; £20.
The New Digital Age: Reshaping the Future of People, Nations and Business. By Eric Schmidt and Jared Cohen. Knopf; 319 pages; $26.95. John Murray; £25.”

What the Internet is Doing to Our Brains


Epipheo.TV: “Most of us are on the Internet on a daily basis and, whether we like it or not, the Internet is affecting us. It changes how we think, how we work, and it even changes our brains.  We interviewed Nicholas Carr, the author of “The Shallows: What the Internet is Doing to Our Brains,” about how the Internet is influencing us, our creativity, our thought processes, our ideas, and how we think.”

UK: The nudge unit – has it worked so far?


The Guardian: “Since 2010 David Cameron’s pet project has been tasked with finding ways to improve society’s behaviour – and now the ‘nudge unit’ is going into business by itself. But have its initiatives really worked?….
The idea behind the unit is simpler than you might believe. People don’t always act in their own interests – by filing their taxes late, for instance, overeating, or not paying fines until the bailiffs call. As a result, they don’t just harm themselves, they cost the state a lot of money. By looking closely at how they make their choices and then testing small changes in the way the choices are presented, the unit tries to nudge people into leading better lives, and save the rest of us a fortune. It is politics done like science, effectively – with Ben Goldacre’s approval – and, in many cases, it appears to work….”
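The unit’s basic method (randomize a small change in how a choice is presented, then test whether behaviour shifts) reduces to comparing response rates between groups. A minimal sketch with hypothetical numbers, not the unit’s actual trial data:

```python
# Two-proportion z-test for a hypothetical nudge trial: a standard letter
# vs. a reworded one. The counts below are made up for illustration.
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Did the reworded letter change the payment rate?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)        # pooled rate
    se = (p * (1 - p) * (1 / n_a + 1 / n_b)) ** 0.5  # standard error
    z = (p_b - p_a) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value

# e.g. 680/1000 paid after the standard letter vs 720/1000 after the nudge
z, p = two_proportion_z(680, 1000, 720, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 1.95, p = 0.051
```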

See also: Jobseekers’ psychometric test ‘is a failure’ (US institute that devised questionnaire tells ‘nudge’ unit to stop using it as it failed to be scientifically validated)

Innovation in Gov Service Delivery


basicDeveloperForce: “Can Government embody innovation and deliver ongoing increased levels of service? Salesforce.com’s Vivek Kundra and companies like BasicGov, Cloud Safety Net & LaunchPad believe so.
Entrepreneurs work tirelessly to help private sector companies streamline all aspects of their business from operations to customer engagement. Their goal and motto is to challenge the status quo and maximize customer satisfaction. Until recently, that mantra wasn’t exactly echoing through the hallways of most government agencies….
Public Sector transformation is being driven by increased data transparency and the formation of government-segmented ecosystems. In a January WSJ CIO Journal article titled “Vivek Kundra: Release Data, Even If It’s Imperfect,” Vivek explains this concept and its role in creating efficiencies within government. Vivek says, “the release of government data is helping the private sector create a new wave of innovative apps, like applications that will help patients choose better hospitals. Those apps are built atop anonymized Medicare information.”
Some areas of government are even going so far as to create shared services. When you look at how governments are structured, many processes are repeated, and in the past solutions were created or purchased for each unique instance. Various agencies have even gone so far as to create apps themselves and share these solutions without the benefit of leveraging best practices or creating scalable frameworks. Without subject-matter expertise, government is falling behind in the business of building and maintaining world-class applications….
ISVs can leverage their private sector expertise, apply it to any number of functions, and achieve dramatic results. Many of those partners are focused specifically on leveraging the Salesforce.com Platform.
One great example of an ISV leading that charge is BasicGov. BasicGov’s mission is to help state and local governments provide better services to their citizens. They accomplish this by offering a suite of modules that streamlines and automates processes in community development to achieve smart growth and sustainability goals. My personal favorite is the Citizen Portal, where one can “view status of applications, complaints, communications online”….
AppExchange for Government is an online storefront offering apps specifically geared for federal, state & local governments.”

Linking open data to augmented intelligence and the economy


Open Data Institute and Professor Nigel Shadbolt (@Nigel_Shadbolt) interviewed by @digiphile:  “…there are some clear learnings. One that I’ve been banging on about recently has been that yes, it really does matter to turn the dial so that governments have a presumption to publish non-personal public data. If you would publish it anyway, under a Freedom of Information request or whatever your local legislative equivalent is, why aren’t you publishing it anyway as open data? That, as a behavioral change, is a big one for many administrations where either the existing workflow or culture is, “Okay, we collect it. We sit on it. We do some analysis on it, and we might give it away piecemeal if people ask for it.” We should construct the publication process from the outset to presume to publish openly. That’s still something that we are two or three years away from, working hard with the public sector to work out how to do it and how to do it properly.
We’ve also learned that in many jurisdictions, the amount of [open data] expertise within administrations and within departments is slight. There just isn’t really the skillset, in many cases, for people to know what it is to publish using technology platforms. So there’s a capability-building piece, too.
One of the most important things is it’s not enough to just put lots and lots of datasets out there. It would be great if the “presumption to publish” meant they were all out there anyway — but when you haven’t got any datasets out there and you’re thinking about where to start, the tough question is to say, “How can I publish data that matters to people?”
The data that matters is revealed if we look at the download stats on these various UK, US and other [open data] sites. There’s a very, very distinctive power law curve. Some datasets are very, very heavily utilized. You suspect they have high utility to many, many people. Many of the others, if they can be found at all, aren’t being used particularly much. That’s not to say that, under that long tail, there aren’t large amounts of use. A particularly arcane open dataset may have exquisite use to a small number of people.
The real truth is that it’s easy to republish your national statistics. It’s much harder to do a serious job on publishing your spending data in detail, publishing police and crime data, publishing educational data, publishing actual overall health performance indicators. These are tough datasets to release. As people are fond of saying, it holds politicians’ feet to the fire. It’s easy to build a site that’s full of stuff — but does the stuff actually matter? And does it have any economic utility?”
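To make the download-curve point concrete, here is a minimal sketch with made-up counts; real figures would come from a portal’s usage logs.

```python
# Hypothetical download counts for ten datasets on an open data portal,
# sorted from most to least used. The head of the curve dominates.
downloads = [120_000, 45_000, 9_000, 800, 300, 120, 60, 25, 10, 5]

total = sum(downloads)
head_share = sum(downloads[:2]) / total
print(f"top 2 of {len(downloads)} datasets: {head_share:.0%} of all downloads")
# -> top 2 of 10 datasets: 94% of all downloads, with a long tail of
#    rarely used (but occasionally exquisitely useful) datasets underneath.
```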

The Next Great Internet Disruption: Authority and Governance


An essay by David Bollier and John Clippinger as part of their ongoing work at ID3, the Institute for Data-Driven Design: As the Internet and digital technologies have proliferated over the past twenty years, incumbent enterprises have nearly always resisted open network dynamics with fierce determination and a narrow ingenuity…. But the inevitable rearguard actions to defend old forms are invariably overwhelmed by the new, network-based ones.  The old business models, organizational structures, professional sinecures, cultural norms, etc., ultimately yield to open platforms.
When we look back on the past twenty years of Internet history, we can more fully appreciate the prescience of David P. Reed’s seminal 1999 paper on “Group Forming Networks” (GFNs). “Reed’s Law” posits that value in networks increases exponentially as interactions move from a broadcasting model that offers “best content” (in which value is described by n, the number of consumers) to a network of peer-to-peer transactions (where the network’s value is based on “most members” and mathematically described by n²).  But by far the most valuable networks are those that facilitate group affiliations, Reed concluded.  When users have tools for “free and responsible association for common purposes,” he found, the value of the network soars exponentially to 2ⁿ – a fantastically large number.   This is the Group Forming Network.  Reed predicted that “the dominant value in a typical network tends to shift from one category to another as the scale of the network increases.…”
What is really interesting about Reed’s analysis is that today’s world of GFNs, as embodied by Facebook, Twitter, Wikipedia and other Web 2.0 technologies, remains highly rudimentary.  It is based on proprietary platforms (as opposed to open source, user-controlled platforms), and therefore provides only limited tools for members of groups to develop trust and confidence in each other.  This suggests a huge, unmet opportunity to actualize greater value from open networks.  Citing Francis Fukuyama’s book Trust, Reed points out that “there is a strong correlation between the prosperity of national economies and social capital, which [Fukuyama] defines culturally as the ease with which people in a particular culture can form new associations.”
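To spell out the comparison Reed draws above: broadcast value grows linearly, peer-to-peer value grows roughly quadratically, and the 2ⁿ figure counts the possible subgroups that n members can form. A short restatement (standard combinatorics, not from the essay itself):

```latex
% Value scaling for a network of n members under the three regimes:
\[
V_{\text{broadcast}} \propto n, \qquad
V_{\text{peer-to-peer}} \propto \binom{n}{2} = \frac{n(n-1)}{2} \sim n^{2}, \qquad
V_{\text{GFN}} \propto 2^{n} - n - 1,
\]
% where 2^n - n - 1 counts the subgroups of size >= 2 among n members.
% Even at n = 30: n = 30, n^2 = 900, but 2^30 - 31 already exceeds a billion.
```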

The Value of Open Data – Don’t Measure Growth, Measure Destruction


David Eaves: “…And that is my main point. The real impact of open data will likely not be in the economic wealth it generates, but rather in its destructive power. I think the real impact of open data is going to be in the value it destroys and so in the capital it frees up to do other things. Much like Red Hat is a fraction of the size of Microsoft, Open Data is going to enable new players to disrupt established data players.

What do I mean by this?
Take SeeClickFix. Here is a company that, leveraging the Open311 standard, is able to provide many cities with a 311 solution that works pretty much out of the box. 20 years ago, this was a $10 million+ problem for a major city to solve, and wasn’t even something a small city could consider adopting – it was just prohibitively expensive. Today, SeeClickFix takes what was a 7 or 8 digit problem, and makes it a 5 or 6 digit problem. Indeed, I suspect SeeClickFix almost works better in a small to mid-sized government that doesn’t have complex work order software and so can just use SeeClickFix as a general solution. For this part of the market, it has crushed the cost out of implementing a solution.
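For the technically curious, the Open311 GeoReport v2 flow that lets SeeClickFix work “out of the box” looks roughly like the sketch below. The endpoint URL and API key are placeholders; the resource names (services.json, requests.json) and fields (service_code, lat, long, description) come from the public Open311 spec.

```python
# Minimal sketch of an Open311 GeoReport v2 client. The endpoint and
# api_key are hypothetical; a real city or vendor supplies both.
import requests

ENDPOINT = "https://example-city.gov/open311/v2"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                          # issued by the city/vendor

# 1. Discover which service types the city accepts (potholes, graffiti, ...).
services = requests.get(f"{ENDPOINT}/services.json").json()

# 2. File a service request against one of the advertised service codes.
resp = requests.post(f"{ENDPOINT}/requests.json", data={
    "api_key": API_KEY,
    "service_code": services[0]["service_code"],
    "lat": 40.7128,
    "long": -74.0060,  # the spec uses "long", not "lng"
    "description": "Pothole at the corner of Main St and 1st Ave.",
})
print(resp.json())  # returns an id/token for tracking the new request
```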
Another example, and one I’m most excited about: look at CKAN and Socrata. Most people believe these are open data portal solutions. That is a mistake. These are data management companies that happen to have made sharing (or “openness”) a core design feature. You know who does data management? SAP. What Socrata and CKAN offer is a way to store, access, share and engage with data previously gathered and held by companies like SAP at a fraction of the cost. A SAP implementation is a 7 or 8 (or god forbid, 9) digit problem. And many city IT managers complain that doing anything with data stored in SAP takes time and it takes money. CKAN and Socrata may have only a fraction of the features, but they are dead simple to use, and make it dead simple to extract and share data. More importantly, they make these costly 7 and 8 digit problems potentially become cheap 5 or 6 digit problems.
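As a rough illustration of the “dead simple to extract and share” claim, CKAN exposes every dataset through a plain HTTP Action API. The portal URL below is CKAN’s public demo site and the query is arbitrary; package_search is a standard CKAN action.

```python
# Minimal sketch of pulling dataset metadata out of a CKAN portal.
import requests

PORTAL = "https://demo.ckan.org"  # any CKAN-backed open data portal

r = requests.get(f"{PORTAL}/api/3/action/package_search",
                 params={"q": "spending", "rows": 5})
body = r.json()
assert body["success"]

for pkg in body["result"]["results"]:
    # each package (dataset) lists downloadable resources with direct URLs
    for res in pkg.get("resources", []):
        print(pkg["name"], "->", res.get("format"), res.get("url"))
```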
On the analysis side, again, I do hope there will be big wins – but what I really think open data is going to do is lower the costs of creating lots of small wins – crazy numbers of tiny efficiencies….
Don’t look for the big bang, and don’t measure the growth in spending or new jobs. Rather let’s try to measure the destruction and cumulative impact of a thousand tiny wins. Cause that is where I think we’ll see it most.”