What Jelly Means


Steven Johnson: “A few months ago, I found this strange white mold growing in my garden in California. I’m a novice gardener, and to make matters worse, a novice Californian, so I had no idea what these small white cells might portend for my flowers.
This is one of those odd blank spots — I used to call them Googleholes in the early days of the service — where the usual Delphic source of all knowledge comes up relatively useless. The Google algorithm doesn’t know what those white spots are, the way it knows more computational questions, like “what is the top-ranked page for ‘white mold’?” or “what is the capital of Illinois?” What I want, in this situation, lies on the far side of the distinction we usually draw between information and wisdom. I don’t just want to know what the white spots are; I want to know if I should be worried about them, or if they’re just a normal thing during late summer in Northern California gardens.
Now, I’m sure I know a dozen people who would be able to answer this question, but the problem is I don’t really know which people they are. But someone in my extended social network has likely experienced these white spots on their plants, or better yet, gotten rid of them. (Or, for all I know, eaten them — I’m trying not to be judgmental.) There are tools out there that would help me run the social search required to find that person. I could just bulk-email my entire address book with images of the mold and ask for help. I could go on Quora, or a gardening site.
But the thing is, it’s a type of question that I find myself wanting to ask a lot, and there’s something inefficient about trying to figure out the exact right tool to use to ask it each time, particularly when we have seen the value of consolidating so many of our queries into a single, predictable search field at Google.
This is why I am so excited about the new app, Jelly, which launched today. …
Jelly, if you haven’t heard, is the brainchild of Biz Stone, one of Twitter’s co-founders. The service launches today with apps on iOS and Android. (Biz himself has a blog post and video, which you should check out.) I’ve known Biz since the early days of Twitter, and I’m excited to be an adviser and small investor in a company that shares so many of the values around networks and collective intelligence that I’ve been writing about since Emergence.
The thing that’s most surprising about Jelly is how fun it is to answer questions. There’s something strangely satisfying in flipping through the cards, reading questions, scanning the pictures, and looking for a place to be helpful. It’s the same broad gesture of reading, say, a Twitter feed, and pleasantly addictive in the same way, but the intent is so different. Scanning a Twitter feed while waiting for the train has the feel of “Here we are now, entertain us.” Scanning Jelly is more like: “I’m here. How can I help?”

The Effective Use of Crowdsourcing in E-Governance


Paper by Jayakumar Sowmya and Hussain Shafiq Pyarali: “The rise of the Web 2.0 paradigm has empowered Internet users to share information and generate content on social networking and media sharing platforms such as wikis and blogs. The trend of harnessing the wisdom of the public using Web 2.0 distributed networks through open calls is termed ‘Crowdsourcing’. In addition to businesses, this powerful idea of using collective intelligence, or the ‘wisdom of the crowd’, applies to different situations, such as in governments and non-profit organizations, which have started utilizing crowdsourcing as an essential problem-solving tool. In addition, widespread and easy access to technologies such as the Internet, mobile phones and other communication devices has resulted in an exponential growth in the use of crowdsourcing for government policy advocacy, e-democracy and e-governance during the past decade. However, utilizing the collective intelligence and efforts of the public to find solutions to real-life problems using Web 2.0 tools does come with its share of associated challenges and limitations. This paper aims at identifying and examining the value-adding strategies which contribute to the success of crowdsourcing in e-governance. Qualitative case study analysis and empathic design methodology are employed to evaluate the effectiveness of the identified strategic and functional components by analyzing the characteristics of some notable cases of crowdsourcing in e-governance; the findings are tabulated and discussed. The paper concludes with the limitations and the implications for future research.”

Peer Production: A Modality of Collective Intelligence


New paper by Yochai Benkler, Aaron Shaw and Benjamin Mako Hill: “Peer production is the most significant organizational innovation that has emerged from Internet-mediated social practice and among the most visible and important examples of collective intelligence. Following Benkler, we define peer production as a form of open creation and sharing performed by groups online that: (1) sets and executes goals in a decentralized manner; (2) harnesses a diverse range of participant motivations, particularly non-monetary motivations; and (3) separates governance and management relations from exclusive forms of property and relational contracts (i.e., projects are governed as open commons or common property regimes and organizational governance utilizes combinations of participatory, meritocratic and charismatic, rather than proprietary or contractual, models). For early scholars of peer production, the phenomenon was both important and confounding for its ability to generate high quality work products in the absence of formal hierarchies and monetary incentives. However, as peer production has become increasingly established in society, the economy, and scholarship, merely describing the success of some peer production projects has become less useful. In recent years, a second wave of scholarship has emerged to challenge assumptions in earlier work; probe nuances glossed over by earlier framings of the phenomena; and identify the necessary dynamics, structures, and conditions for peer production success.
Peer production includes many of the largest and most important collaborative communities on the Internet….
Much of this academic interest in peer production stemmed from the fact that the phenomenon resisted straightforward explanations in terms of extant theories of the organization and production of functional information goods like software or encyclopedias. Participants in peer production projects join and contribute valuable resources without the hierarchical bureaucracies or strong leadership structures common to state agencies or firms, and in the absence of clear financial incentives or rewards. As a result, foundational research on peer production was focused on (1) documenting and explaining the organization and governance of peer production communities, (2) understanding the motivation of contributors to peer production, and (3) establishing and evaluating the quality of peer production’s outputs.
In the rest of this chapter, we describe the development of the academic literature on peer production in these three areas – organization, motivation, and quality.”

Implementing Open Innovation in the Public Sector: The Case of Challenge.gov


Article by Ines Mergel and Kevin C. Desouza in Public Administration Review: “As part of the Open Government Initiative, the Barack Obama administration has called for new forms of collaboration with stakeholders to increase the innovativeness of public service delivery. Federal managers are employing a new policy instrument called Challenge.gov to implement open innovation concepts invented in the private sector to crowdsource solutions from previously untapped problem solvers and to leverage collective intelligence to tackle complex social and technical public management problems. The authors highlight the work conducted by the Office of Citizen Services and Innovative Technologies at the General Services Administration, the administrator of the Challenge.gov platform. Specifically, this Administrative Profile features the work of Tammi Marcoullier, program manager for Challenge.gov, and Karen Trebon, deputy program manager, and their role as change agents who mediate collaborative practices between policy makers and public agencies as they navigate the political and legal environments of their local agencies. The profile provides insights into the implementation process of crowdsourcing solutions for public management problems, as well as lessons learned for designing open innovation processes in the public sector”.

Global Collective Intelligence in Technological Societies


Paper by Juan Carlos Piedra Calderón and Javier Rainer in the International Journal of Artificial Intelligence and Interactive Multimedia: “The strong influence of Information and Communication Technologies (ICT), especially in the construction of Technological Societies, has generated major social changes, visible in the way people relate to one another in different environments. These changes open the possibility of expanding the frontiers of knowledge through sharing and cooperation, which has meant the creation of a new form of Collaborative Knowledge. The potential of this Collaborative Knowledge is realized through ICT in combination with Artificial Intelligence processes, from which a Collective Knowledge is obtained. When this kind of knowledge is shared, it gives rise to Global Collective Intelligence.”

Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many


New book by Hélène Landemore: “Individual decision making can often be wrong due to misinformation, impulses, or biases. Collective decision making, on the other hand, can be surprisingly accurate. In Democratic Reason, Hélène Landemore demonstrates that the very factors behind the superiority of collective decision making add up to a strong case for democracy. She shows that the processes and procedures of democratic decision making form a cognitive system that ensures that decisions taken by the many are more likely to be right than decisions taken by the few. Democracy as a form of government is therefore valuable not only because it is legitimate and just, but also because it is smart.
Landemore considers how the argument plays out with respect to two main mechanisms of democratic politics: inclusive deliberation and majority rule. In deliberative settings, the truth-tracking properties of deliberation are enhanced more by inclusiveness than by individual competence. Landemore explores this idea in the contexts of representative democracy and the selection of representatives. She also discusses several models for the “wisdom of crowds” channeled by majority rule, examining the trade-offs between inclusiveness and individual competence in voting. When inclusive deliberation and majority rule are combined, they beat less inclusive methods, in which one person or a small group decide. Democratic Reason thus establishes the superiority of democracy as a way of making decisions for the common good.”
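The epistemic case for majority rule that Landemore builds on is usually grounded in Condorcet’s jury theorem: if each voter is independently more likely than not to be right, the majority’s accuracy grows with the size of the group. A minimal sketch of that standard result (an illustration, not code from the book):

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, picks the right answer.
    (n is assumed odd so there are no ties.)"""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.6, a single voter is right 60% of the time, while a
# 101-voter majority is right roughly 98% of the time.
```

The theorem also cuts the other way — when individual competence falls below one half, the majority amplifies error — which is why the trade-off between inclusiveness and individual competence that Landemore examines matters.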

From Collective Intelligence to Collective Intelligence Systems


New Paper by A. Kornrumpf and U. Baumol in the International Journal of Cooperative Information Systems: “Collective intelligence (CI) has become a popular research topic over the past few years. However, the CI debate suffers from several problems such as that there is no unanimously agreed-upon definition of CI that clearly differentiates between CI and related terms such as swarm intelligence (SI) and collective intelligence systems (CIS). Furthermore, a model of such CIS is lacking for purposes of research and the design of new CIS. This paper aims at untangling the definitions of CI and other related terms, especially CIS, and at providing a semi-structured model of CIS as a first step towards more structured research. The authors of this paper argue that CI can be defined as the ability of sufficiently large groups of individuals to create an emergent solution for a specific class of problems or tasks. The authors show that other alleged properties of CI which are not covered by this definition, are, in fact, properties of CIS and can be understood by regarding CIS as complex socio-technical systems (STS) that enable the realization of CI. The model defined in this article serves as a means to structure open questions in CIS research and helps to understand which research methodology is adequate for different aspects of CIS.”

The role of task difficulty in the effectiveness of collective intelligence


New article by Christian Wagner: “The article presents a framework and empirical investigation to demonstrate the role of task difficulty in the effectiveness of collective intelligence. The research contends that collective intelligence, a form of community engagement to address problem solving tasks, can be superior to individual judgment and choice, but only when the addressed tasks are in a range of appropriate difficulty, which we label the “collective range”. Outside of that difficulty range, collectives will perform about as poorly as individuals for high difficulty tasks, or only marginally better than individuals for low difficulty tasks. An empirical investigation with subjects randomly recruited online supports our conjecture. Our findings qualify prior research on the strength of collective intelligence in general and offer preliminary insights into the mechanisms that enable individuals and collectives to arrive at good solutions. Within the framework of digital ecosystems, the paper argues that collective intelligence has more survival strength than individual intelligence, with highest sustainability for tasks of medium difficulty.”
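Wagner’s “collective range” can be illustrated with a toy model (my assumption, not the article’s actual framework): let an individual’s chance of solving a task fall linearly with its difficulty, and compare a simple crowd majority against a lone judge.

```python
from math import comb

def individual_accuracy(difficulty):
    # Toy assumption: accuracy falls linearly from 1.0 (trivial task)
    # to 0.0 (impossible task) as difficulty rises from 0 to 1.
    return 1.0 - difficulty

def majority_accuracy(n, p):
    # Probability that a strict majority of n independent judges
    # (n odd, each correct with probability p) is right.
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

def collective_gain(difficulty, n=25):
    # How much a 25-person majority beats a single judge at this difficulty.
    p = individual_accuracy(difficulty)
    return majority_accuracy(n, p) - p
```

Under these assumptions the gain is small for easy tasks (individuals are already near the ceiling), largest in a middle band, and zero or negative once tasks are hard enough that individuals fall to or below chance — a shape consistent with the article’s claim, though its empirical design is of course richer than this sketch.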

Manipulation Among the Arbiters of Collective Intelligence: How Wikipedia Administrators Mold Public Opinion


New paper by Sanmay Das, Allen Lavoie, and Malik Magdon-Ismail: “Our reliance on networked, collectively built information is a vulnerability when the quality or reliability of this information is poor. Wikipedia, one such collectively built information source, is often our first stop for information on all kinds of topics; its quality has stood up to many tests, and it prides itself on having a “Neutral Point of View”. Enforcement of neutrality is in the hands of comparatively few, powerful administrators. We find a surprisingly large number of editors who change their behavior and begin focusing more on a particular controversial topic once they are promoted to administrator status. The conscious and unconscious biases of these few, but powerful, administrators may be shaping the information on many of the most sensitive topics on Wikipedia; some may even be explicitly infiltrating the ranks of administrators in order to promote their own points of view. Neither prior history nor vote counts during an administrator’s election can identify those editors most likely to change their behavior in this suspicious manner. We find that an alternative measure, which gives more weight to influential voters, can successfully reject these suspicious candidates. This has important implications for how we harness collective intelligence: even if wisdom exists in a collective opinion (like a vote), that signal can be lost unless we carefully distinguish the true expert voter from the noisy or manipulative voter.”
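The paper’s alternative measure — giving more weight to influential voters — can be sketched as an influence-weighted support score. The names and weighting scheme below are hypothetical; the authors’ actual measure is defined in the paper:

```python
def weighted_support(votes, influence):
    """Influence-weighted support for an administrator candidate.

    votes: dict mapping voter -> +1 (support) or -1 (oppose)
    influence: dict mapping voter -> nonnegative weight (e.g., how often
    that voter's past votes matched eventual election outcomes)

    A raw vote count treats all voters equally; weighting by influence
    lets a few well-calibrated voters outweigh many noisy or
    manipulative ones. Returns a score in [-1, 1].
    """
    total = sum(influence.get(v, 0.0) for v in votes)
    if total == 0:
        return 0.0
    return sum(vote * influence.get(v, 0.0)
               for v, vote in votes.items()) / total
```

For example, a candidate with two low-influence supporters and one high-influence opposer passes a raw count but scores negative here — the kind of case where, per the paper, the weighted signal rejects a suspicious candidate that vote counts would promote.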

Social Influence Bias: A Randomized Experiment


New paper in Science: “Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future.”
See also: ‘Like’ This Article Online? Your Friends Will Probably Approve, Too, Scientists Say
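The asymmetric herding result lends itself to a toy simulation (an illustrative model with made-up parameters, not the experiment’s actual design): up-votes on an already-positive score are inflated, while negative scores receive no such boost.

```python
import random

def final_score(n_raters, herd_boost, base_up=0.55, seed=0):
    """Simulate one item's running score under positive herding.

    Toy model (not the paper's): each rater up-votes with base
    probability base_up; if the current score is positive, that
    probability is inflated by herd_boost (e.g. 0.32, echoing the
    paper's 32% figure). Negative scores get no boost, reflecting
    the finding that users tend to correct, not amplify, them.
    """
    rng = random.Random(seed)
    score = 0
    for _ in range(n_raters):
        p_up = base_up * (1 + herd_boost) if score > 0 else base_up
        score += 1 if rng.random() < min(p_up, 1.0) else -1
    return score
```

Averaged over many simulated items, the herded condition ends with markedly higher final scores than the unboosted one — the accumulating positive herding the experiment measured, here reproduced only in caricature.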