Peer Production: A Modality of Collective Intelligence


New paper by Yochai Benkler, Aaron Shaw and Benjamin Mako Hill: “Peer production is the most significant organizational innovation that has emerged from Internet-mediated social practice and among the most visible and important examples of collective intelligence. Following Benkler, we define peer production as a form of open creation and sharing performed by groups online that: (1) sets and executes goals in a decentralized manner; (2) harnesses a diverse range of participant motivations, particularly non-monetary motivations; and (3) separates governance and management relations from exclusive forms of property and relational contracts (i.e., projects are governed as open commons or common property regimes and organizational governance utilizes combinations of participatory, meritocratic and charismatic, rather than proprietary or contractual, models). For early scholars of peer production, the phenomenon was both important and confounding for its ability to generate high-quality work products in the absence of formal hierarchies and monetary incentives. However, as peer production has become increasingly established in society, the economy, and scholarship, merely describing the success of some peer production projects has become less useful. In recent years, a second wave of scholarship has emerged to challenge assumptions in earlier work; probe nuances glossed over by earlier framings of the phenomenon; and identify the necessary dynamics, structures, and conditions for peer production success.
Peer production includes many of the largest and most important collaborative communities on the Internet….
Much of this academic interest in peer production stemmed from the fact that the phenomenon resisted straightforward explanations in terms of extant theories of the organization and production of functional information goods like software or encyclopedias. Participants in peer production projects join and contribute valuable resources without the hierarchical bureaucracies or strong leadership structures common to state agencies or firms, and in the absence of clear financial incentives or rewards. As a result, foundational research on peer production focused on (1) documenting and explaining the organization and governance of peer production communities, (2) understanding the motivation of contributors to peer production, and (3) establishing and evaluating the quality of peer production’s outputs.
In the rest of this chapter, we describe the development of the academic literature on peer production in these three areas – organization, motivation, and quality.”

Implementing Open Innovation in the Public Sector: The Case of Challenge.gov


Article by Ines Mergel and Kevin C. Desouza in Public Administration Review: “As part of the Open Government Initiative, the Barack Obama administration has called for new forms of collaboration with stakeholders to increase the innovativeness of public service delivery. Federal managers are employing a new policy instrument called Challenge.gov to implement open innovation concepts invented in the private sector to crowdsource solutions from previously untapped problem solvers and to leverage collective intelligence to tackle complex social and technical public management problems. The authors highlight the work conducted by the Office of Citizen Services and Innovative Technologies at the General Services Administration, the administrator of the Challenge.gov platform. Specifically, this Administrative Profile features the work of Tammi Marcoullier, program manager for Challenge.gov, and Karen Trebon, deputy program manager, and their role as change agents who mediate collaborative practices between policy makers and public agencies as they navigate the political and legal environments of their local agencies. The profile provides insights into the implementation process of crowdsourcing solutions for public management problems, as well as lessons learned for designing open innovation processes in the public sector”.

What Government Can and Should Learn From Hacker Culture


In The Atlantic: “Can the open-source model work for federal government? Not in every way—for security purposes, the government’s inner workings will never be completely open to the public. Even in the inner workings of government, fears of triggering the next WikiLeaks or Snowden scandal may scare officials away from being more open with one another. While not every area of government can be more open, there are a few areas ripe for change.

Perhaps the most glaring need for an open-source approach is in information sharing. Today, among and within several federal agencies, a culture of reflexive and unnecessary information withholding prevails. This knee-jerk secrecy can backfire with fatal consequences, as seen in the 1998 embassy bombings in Africa, the 9/11 attacks, and the Boston Marathon bombings. What’s most troubling is that decades after the dangers of withholding information were identified, the problem persists.
What’s preventing reform? The answer starts with the government’s hierarchical structure—though an information-is-power mentality and “need to know” Cold War-era culture contribute too. To improve the practice of information sharing, government needs to change the structure of information sharing. Specifically, it needs to flatten the hierarchy.
Former Obama Administration regulation czar Cass Sunstein’s “nudge” approach shows how this could work. In his book Simpler: The Future of Government, he describes how making even small changes to an environment can effect significant changes in behavior. While Sunstein focuses on regulations, the broader lesson is clear: change the environment to encourage better behavior, and people tend to exhibit better behavior. With less strict adherence to the many tiers of the hierarchy, those working within it could be nudged toward sharing information rather than having to fight to do so.
One example of where this worked is the State Department’s annual Religious Engagement Report (RER). In 2011, the office in charge of the RER decided that instead of having every embassy submit its data via email, they would post it on a secure wiki. On the surface, this was a decision to change an information-sharing procedure. But it also changed the information-sharing culture. Instead of sharing information only along the supervisor-subordinate axis, it created a norm of sharing laterally, among colleagues.
Another advantage to flattening information-sharing hierarchies is that it reduces the risk of creating “single points of failure,” to quote technology scholar Beth Noveck. The massive amounts of data now available to us may need massive amounts of eyeballs in order to spot patterns of problems—small pools of supervisors atop the hierarchy cannot be expected to shoulder those burdens alone. And while having the right tech tools to share information is part of the solution—as the wiki was for the RER—it’s not enough. Leadership must also create a culture that nudges staff to use these tools, even if that means relinquishing a degree of their own power.
Finally, a more open work culture would help connect interested parties across government to let them share the hard work of bringing new ideas to fruition. Government is filled with examples of interesting new projects that stall in their infancy. Creating a large pool of collaborators dedicated to a project increases the likelihood that when one torchbearer burns out, others in the agency will pick up the torch.
When Linus Torvalds released Linux, it was considered, in Eric S. Raymond’s words, “subversive” and “a distinct shock.” Could the federal government withstand such a shock?
Evidence suggests it can—and the transformation is already happening in small ways. One of the winners of the Harvard Kennedy School’s Innovations in Government award is State’s Consular Team India (CTI), which won for joining their embassy and four consular posts, each of which used to have its own distinct set of procedures, into a single, more effective unit that could deliver standardized services. As CTI describes it, “this is no top-down bureaucracy,” but one that shares “a common base of information and shared responsibilities.” They flattened the hierarchy, and not only lived, but thrived.”

Open Data Index provides first major assessment of state of open government data


Press Release from the Open Knowledge Foundation: “In the week of a major international summit on government transparency in London, the Open Knowledge Foundation has published its 2013 Open Data Index, showing that governments are still not providing enough information in an accessible form to their citizens and businesses.
The UK and US top the 2013 Index, which is the result of community-based surveys in 70 countries. They are followed by Denmark, Norway and the Netherlands. Of the countries assessed, Cyprus, St Kitts & Nevis, the British Virgin Islands, Kenya and Burkina Faso ranked lowest. Many countries where governments are less open were not assessed at all, for lack of openness or of a sufficiently engaged civil society; these include 30 countries that are members of the Open Government Partnership.
The Index ranks countries based on the availability and accessibility of information in ten key areas, including government spending, election results, transport timetables, and pollution levels, and reveals that whilst some good progress is being made, much remains to be done.
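To make concrete how an index like this can be assembled, here is a minimal sketch that scores each surveyed dataset against a weighted checklist of openness criteria and totals the results per country. The criteria names and weights below are invented for illustration and are not the Index’s actual methodology; the real survey asks questions along these lines (is the data openly licensed? machine readable? available in bulk?).

```python
# Hypothetical openness criteria and weights (illustrative only;
# not the Open Data Index's actual scoring scheme).
CRITERIA_WEIGHTS = {
    "exists": 5,
    "available_online": 5,
    "free_of_charge": 15,
    "machine_readable": 15,
    "available_in_bulk": 10,
    "openly_licensed": 30,
    "up_to_date": 20,
}

def dataset_score(answers: dict) -> int:
    """Weighted sum of yes/no survey answers for one dataset."""
    return sum(w for criterion, w in CRITERIA_WEIGHTS.items()
               if answers.get(criterion, False))

def country_score(datasets: dict) -> int:
    """Total score across the key datasets surveyed for one country."""
    return sum(dataset_score(answers) for answers in datasets.values())

# Example: a country publishing election results fully openly, but
# transport timetables only as non-reusable files on a website.
example_country = {
    "election_results": {c: True for c in CRITERIA_WEIGHTS},
    "transport_timetables": {"exists": True, "available_online": True},
}
print(country_score(example_country))  # 100 + 10 = 110
```

Ranking the 70 surveyed countries is then simply a sort on these totals.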
Rufus Pollock, Founder and CEO of the Open Knowledge Foundation, said:

Opening up government data drives democracy, accountability and innovation. It enables citizens to know and exercise their rights, and it brings benefits across society: from transport, to education and health. There has been a welcome increase in support for open data from governments in the last few years, but this Index reveals that too much valuable information is still unavailable.

The UK and US are leaders on open government data but even they have room for improvement: the US for example does not provide a single consolidated and open register of corporations, while the UK Electoral Commission lets down the UK’s good overall performance by not allowing open reuse of UK election data.
There is a very disappointing degree of openness of company registers across the board: only 5 out of the 20 leading countries have even basic information available via a truly open licence, and only 10 allow any form of bulk download. This information is critical for a range of reasons – including tackling tax evasion and other forms of financial crime and corruption.
Less than half of the key datasets in the top 20 countries are available to reuse as open data, showing that even the leading countries do not fully understand the importance of citizens and businesses being able to legally and technically use, reuse and redistribute data. Such access enables them to build and share commercial and non-commercial services.
To see the full results: https://index.okfn.org. For graphs of the data: https://index.okfn.org/visualisations.”

Making government simpler is complicated


Mike Konczal in The Washington Post: “Here’s something a politician would never say: “I’m in favor of complex regulations.” But what would the opposite mean? What would it mean to have “simple” regulations?

There are two definitions of “simple” that have come to dominate liberal conversations about government. One is the idea that we should make use of “nudges” in regulation. The other is the idea that we should avoid “kludges.” As it turns out, however, these two definitions conflict with each other—and the battle between them will dominate conversations about the state in the years ahead.

The case for “nudges”

The first definition of a “simple” regulation is the one emphasized in Cass Sunstein’s recent book Simpler: The Future of Government. A simple policy is one that “nudges” people into one choice or another using a variety of default rules, disclosure requirements, and other market structures. Think, for instance, of rules that require fast-food restaurants to post calories on their menus, or a mortgage that has certain terms clearly marked in disclosures.

These sorts of regulations are deemed “choice preserving.” Consumers are still allowed to buy unhealthy fast-food meals or sign up for mortgages they can’t reasonably afford. The regulations are just there to inform people about their choices. These rules are designed to keep the market “free”: every option remains available, but the rules encourage certain outcomes.
In his book, however, Sunstein adds that there’s another very different way to understand the term “simple.” What most people mean when they think of simple regulations is a rule that is “simple to follow.” Usually a rule is simple to follow because it outright excludes certain possibilities and thus ensures others. Which means, by definition, it limits certain choices.

The case against “kludges”
This second definition of simple plays a key role in political scientist Steve Teles’ excellent recent essay, “Kludgeocracy in America.” For Teles, a “kludge” is a “clumsy but temporarily effective” fix for a policy problem. (The term comes from computer science.) These kludges tend to pile up over time, making government cumbersome and inefficient overall.
Teles focuses on several ways that kludges are introduced into policy, with a particularly sharp focus on overlapping jurisdictions and the related mess of federal and state overlap in programs. But, without specifically invoking it, he also suggests that a reliance on “nudge” regulations can lead to more kludges.
After all, a non-kludge policy proposal is one that will be simple to follow and will clearly cause a certain outcome, with an obvious causal chain. This is in contrast to a web of “nudges” and incentives designed to try to guide certain outcomes.

Why “nudges” aren’t always simpler
The distinction between the two is clear if we take a specific example core to both definitions: retirement security.
For Teles, “one of the often overlooked benefits of the Social Security program… is that recipients automatically have taxes taken out of their paychecks, and, then without much effort on their part, checks begin to appear upon retirement. It’s simple and direct. By contrast, 401(k) retirement accounts… require enormous investments of time, effort, and stress to manage responsibly.”

Yet 401(k)s are the ultimate fantasy laboratory for nudge enthusiasts. A whole cottage industry has grown up around figuring out ways to default people into certain contributions, designing the choice architecture of investments, and effortlessly and painlessly guiding people into certain savings.
Each approach emphasizes different things. If you want to focus your energy on making people better consumers and market participants, directing the government’s resources and energy into 401(k)s is a good choice. If you want to focus on providing retirement security directly, expanding Social Security is a better choice.
The first is “simple” in that it doesn’t exclude any possibility but encourages market choices. The second is “simple” in that it is easy to follow, and the result is simple as well: a certain amount of security in old age is provided directly. This second approach understands the government as playing a role in stopping certain outcomes, and providing for the opposite of those outcomes, directly….

Why it’s hard to create “simple” regulations
Like all supposed binaries, this is really a continuum. Taxes, for instance, sit somewhere in the middle of the two definitions of “simple.” They tend to preserve the market as it is but raise (or lower) the price of certain goods, influencing choices.
And reforms and regulations are often most effective when there’s a combination of these two types of “simple” rules.
Consider an important new paper, “Regulating Consumer Financial Products: Evidence from Credit Cards,” by Sumit Agarwal, Souphala Chomsisengphet, Neale Mahoney and Johannes Stroebel. The authors analyze the CARD Act of 2009, which regulated credit cards. They found that the nudge-type disclosure rules “increased the number of account holders making the 36-month payment value by 0.5 percentage points.” However, more direct regulations on fees had an even bigger effect, saving U.S. consumers $20.8 billion per year with no notable reduction in credit access…
The balance between these two approaches of making regulations simple will be front and center as liberals debate the future of government, whether they’re trying to pull back on the “submerged state” or consider the implications for privacy. The debate over the best way for government to be simple is still far from over.”

Google’s flu fail shows the problem with big data


Adam Kucharski in The Conversation: “When people talk about ‘big data’, there is an oft-quoted example: a proposed public health tool called Google Flu Trends. It has become something of a pin-up for the big data movement, but it might not be as effective as many claim.
The idea behind big data is that large amounts of information can help us do things which smaller volumes cannot. Google first outlined the Flu Trends approach in a 2008 paper in the journal Nature. Rather than relying on the disease surveillance used by the US Centers for Disease Control and Prevention (CDC) – such as visits to doctors and lab tests – the authors suggested it would be possible to predict epidemics through Google searches. When suffering from flu, many Americans will search for information related to their condition….
Between 2003 and 2008, flu epidemics in the US had been strongly seasonal, appearing each winter. However, in 2009, the first cases (as reported by the CDC) started around Easter. Flu Trends had already made its predictions when the CDC data was published, but it turned out that the Google model didn’t match reality. It had substantially underestimated the size of the initial outbreak.
The problem was that Flu Trends could only measure what people search for; it didn’t analyse why they were searching for those words. By removing human input, and letting the raw data do the work, the model had to make its predictions using only search queries from the previous handful of years. Although those 45 terms matched the regular seasonal outbreaks from 2003–8, they didn’t reflect the pandemic that appeared in 2009.
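The mechanics behind this are worth spelling out. As described in the original Nature paper, the model was a simple univariate linear regression: the log-odds of the proportion of doctor visits for influenza-like illness (ILI) were fitted against the log-odds of the proportion of searches matching the flu-related terms. A minimal sketch of that idea follows; the weekly figures are invented for illustration, and this is not Google’s actual code.

```python
import numpy as np

def logit(p):
    """Log-odds transform used in the original Flu Trends model."""
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical weekly training data:
#   q = fraction of all searches matching the flu-related query terms
#   p = CDC-reported fraction of doctor visits for influenza-like illness
q_train = np.array([0.0012, 0.0018, 0.0030, 0.0044, 0.0025, 0.0015])
p_train = np.array([0.010, 0.015, 0.028, 0.041, 0.022, 0.012])

# Fit logit(p) = b0 + b1 * logit(q) by ordinary least squares.
b1, b0 = np.polyfit(logit(q_train), logit(p_train), deg=1)

# Predict ILI activity for a new week from search volume alone: the
# step that broke down in 2009, when search behaviour stopped tracking
# the seasonal patterns in the training years.
q_new = 0.0050
print(f"Predicted ILI visit rate: {inv_logit(b0 + b1 * logit(q_new)):.3f}")
```

Nothing in such a model knows why people are searching, which is exactly the weakness the 2009 pandemic exposed.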
Six months after the pandemic started, Google – which now had the benefit of hindsight – updated its model so that it matched the 2009 CDC data. Despite these changes, the updated version of Flu Trends ran into difficulties again last winter, when it overestimated the size of the influenza epidemic in New York State. The incidents in 2009 and 2012 raised the question of how good Flu Trends is at predicting future epidemics, as opposed to merely finding patterns in past data.
In a new analysis, published in the journal PLOS Computational Biology, US researchers report that there are “substantial errors in Google Flu Trends estimates of influenza timing and intensity”. This is based on a comparison of Google Flu Trends predictions with the actual epidemic data at the national, regional and local level between 2003 and 2013.
Even when search behaviour was correlated with influenza cases, the model sometimes misestimated important public health metrics such as peak outbreak size and cumulative cases. The predictions were particularly wide of the mark in 2009 and 2012:

[Figure: Original and updated Google Flu Trends (GFT) model compared with CDC influenza-like illness (ILI) data. Source: PLOS Computational Biology 9(10).]

Although they criticised certain aspects of the Flu Trends model, the researchers think that monitoring internet search queries might yet prove valuable, especially if it were linked with other surveillance and prediction methods.
Other researchers have suggested that additional sources of digital data – from Twitter feeds to mobile phone GPS – have the potential to be useful tools for studying epidemics. As well as helping to analyse outbreaks, such methods could allow researchers to analyse human movement and the spread of public health information (or misinformation).
Although much attention has been given to web-based tools, there is another type of big data that is already having a huge impact on disease research. Genome sequencing is enabling researchers to piece together how diseases transmit and where they might come from. Sequence data can even reveal the existence of a new disease variant: earlier this week, researchers announced a new type of dengue fever virus….”

Making regulations easier to use


At the Consumer Financial Protection Bureau (CFPB): “We write rules to protect consumers, but what actually protects consumers is people: advocates knowing what rights people have; government agencies’ supervision and enforcement staff having a clear view of what potential violations to look out for; and responsible industry employees following the rules.
Today, we’re releasing a new open source tool we built, eRegulations, to help make regulations easier to understand. Check it out: consumerfinance.gov/eregulations
One thing that’s become clear during our two years as an agency is that federal regulations can be difficult to navigate. Finding answers to questions about a regulation is hard. Frequently, it means connecting information from different places, spread throughout a regulation, often separated by dozens or even hundreds of pages. As a result, we found people trying to understand regulations using paper editions, several different online tools to piece together the relevant information, or even paid subscription services that still don’t make things easy and are expensive.

Here’s hoping that even more people who work with regulations will have the same reaction as this member of our bank supervision team:
 “The eRegulations site has been very helpful to my work. It has become my go-to resource on Reg. E and the Official Interpretations. I use it several times a week in the course of completing regulatory compliance evaluations. My prior preference was to use the printed book or e-CFR, but I’ve found the eRegulations (tool) to be easier to read, search, and navigate than the printed book, and more efficient than the e-CFR because of the way eRegs incorporates the commentary.”
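The design idea praised here is straightforward: instead of leaving each official interpretation hundreds of pages from the rule it explains, the tool attaches commentary directly to the regulation paragraph it glosses. Below is a deliberately simplified sketch of that idea; the data structures and citation labels are hypothetical, not the actual eRegulations schema.

```python
from dataclasses import dataclass, field

@dataclass
class Section:
    """One paragraph of a regulation, with its commentary inlined."""
    label: str                    # hypothetical citation label
    text: str
    commentary: list = field(default_factory=list)

# Invented fragments of a regulation and its official interpretations.
sections = {
    "1005.33(a)": Section("1005.33(a)",
                          "A remittance transfer provider shall..."),
}
interpretations = [
    ("1005.33(a)", "Comment 33(a)-1: For purposes of this section..."),
]

# The key move: index each interpretation by the section it glosses,
# so rendering a section shows rule text and commentary side by side.
for label, comment in interpretations:
    sections[label].commentary.append(comment)

for section in sections.values():
    print(section.label, "-", section.text)
    for comment in section.commentary:
        print("   ", comment)
```

However the real schema is organized, the payoff is the one the examiner describes: no flipping between the rule and a separate book of commentary.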
New rules about international money transfers – also called “remittances” – in Regulation E will take effect on October 28, 2013, and you can now use the eRegulations tool to check out the regulation.

We need your help

There are two ways we’d love your help with our work to make regulations easier to use. First, the tool is a work in progress. If you have comments or suggestions, please write to us at CFPB_eRegs_Team@cfpb.gov. We read every message and would love to hear what you think.
Second, the tool is open source, so we’d love for other agencies, developers, or groups to use it and adapt it. And remember, the first time a citizen developer suggested a change to our open source software, it was to fix a typo (thanks again, by the way!), so no contribution is too small.”

Global Collective Intelligence in Technological Societies


Paper by Juan Carlos Piedra Calderón and Javier Rainer in the International Journal of Artificial Intelligence and Interactive Multimedia: “The strong influence of Information and Communication Technologies (ICT), especially in the construction of Technological Societies, has generated major social changes, visible in the way people relate to one another in different environments. These changes open the possibility of expanding the frontiers of knowledge through sharing and cooperation, inherently creating a new form of Collaborative Knowledge. The potential of this Collaborative Knowledge is realized through ICT in combination with Artificial Intelligence processes, yielding a Collective Knowledge. When this kind of knowledge is shared, it gives rise to Global Collective Intelligence”.

Information Now: Open Access and the Public Good


Podcast from SMARTech (Georgia Tech): “Every year, the international academic and research community dedicates a week in October to discuss, debate, and learn more about Open Access. Open Access in the academic sense refers to the free, immediate, and online access to the results of scholarly research, primarily academic, peer-reviewed journal articles. In the United States, the movement in support of Open Access has, in the last decade, been growing dramatically. Because of this growing interest in Open Access, a group of academic librarians from the Georgia Tech library, Wendy Hagenmaier (Digital Collections Archivist), Fred Rascoe (Scholarly Communication Librarian), and Lizzy Rolando (Research Data Librarian), got together to talk to folks in the thick of it, to try to unravel some of the different concerns and benefits of Open Access. But we didn’t just want to talk about Open Access for journal articles – we wanted to examine more broadly what it means to be “open”, what open information is, and what relationship open information has to the public good. In this podcast, we talk with different people who have seen and experienced open information and open access in practice. In the first act, Dan Cohen from the DPLA speaks about efforts to expand public access to archival and library collections. In the second, we’ll hear an argument from Christine George about why things sometimes need to be closed, if we want them to be open in the future. Third, Kari Watkins speaks about a specific example of when a government agency decided, against legitimate concerns, to make transit data open, and why it worked for them. Fourth, Peter Suber from Harvard University will give us the background on the Open Access movement, some myths that have been dispelled, and why it is important for academic researchers to take the leap to make their research openly accessible. And finally, we’ll hear from Michael Chang, a researcher who did take that leap and helped start an Open Access journal, and why he sees openness in research as his obligation.”

See also Personal Guide to Open Access

Are We Puppets in a Wired World?


Sue Halpern in The New York Review of Books: “Also not obvious was how the Web would evolve, though its open architecture virtually assured that it would. The original Web, the Web of static homepages, documents laden with “hot links,” and electronic storefronts, segued into Web 2.0, which, by providing the means for people without technical knowledge to easily share information, recast the Internet as a global social forum with sites like Facebook, Twitter, Foursquare, and Instagram.
Once that happened, people began to make aspects of their private lives public, letting others know, for example, when they were shopping at H&M and dining at Olive Garden, letting others know what they thought of the selection at that particular branch of H&M and the waitstaff at that Olive Garden, then modeling their new jeans for all to see and sharing pictures of their antipasti and lobster ravioli—to say nothing of sharing pictures of their girlfriends, babies, and drunken classmates, or chronicling life as a high-paid escort, or worrying about skin lesions or seeking a cure for insomnia or rating professors, and on and on.
The social Web celebrated, rewarded, routinized, and normalized this kind of living out loud, all the while anesthetizing many of its participants. Although they likely knew that these disclosures were funding the new information economy, they didn’t especially care…
The assumption that decisions made by machines that have assessed reams of real-world information are more accurate than those made by people, with their foibles and prejudices, may be correct generally and wrong in the particular; and for those unfortunate souls who might never commit another crime even if the algorithm says they will, there is little recourse. In any case, computers are not “neutral”; algorithms reflect the biases of their creators, which is to say that prediction cedes an awful lot of power to the algorithm creators, who are human after all. Some of the time, too, proprietary algorithms, like the ones used by Google and Twitter and Facebook, are intentionally biased to produce results that benefit the company, not the user, and some of the time algorithms can be gamed. (There is an entire industry devoted to “optimizing” Google searches, for example.)
But the real bias inherent in algorithms is that they are, by nature, reductive. They are intended to sift through complicated, seemingly discrete information and make some sort of sense of it, which is the definition of reductive.”
Books reviewed: