Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many


New book by Hélène Landemore: “Individual decision making can often be wrong due to misinformation, impulses, or biases. Collective decision making, on the other hand, can be surprisingly accurate. In Democratic Reason, Hélène Landemore demonstrates that the very factors behind the superiority of collective decision making add up to a strong case for democracy. She shows that the processes and procedures of democratic decision making form a cognitive system that ensures that decisions taken by the many are more likely to be right than decisions taken by the few. Democracy as a form of government is therefore valuable not only because it is legitimate and just, but also because it is smart.
Landemore considers how the argument plays out with respect to two main mechanisms of democratic politics: inclusive deliberation and majority rule. In deliberative settings, the truth-tracking properties of deliberation are enhanced more by inclusiveness than by individual competence. Landemore explores this idea in the contexts of representative democracy and the selection of representatives. She also discusses several models for the “wisdom of crowds” channeled by majority rule, examining the trade-offs between inclusiveness and individual competence in voting. When inclusive deliberation and majority rule are combined, they beat less inclusive methods, in which one person or a small group decides. Democratic Reason thus establishes the superiority of democracy as a way of making decisions for the common good.”
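The classic formal model behind the “wisdom of crowds” channeled by majority rule is the Condorcet Jury Theorem, which Landemore's argument builds on. A minimal sketch (the voter competence p = 0.6 and the group sizes are illustrative assumptions, not figures from the book):

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a simple majority of n independent voters,
    each individually correct with probability p, reaches the right
    answer (n is assumed odd, so there are no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Even modestly competent voters (p = 0.6) yield a highly reliable
# majority as the group grows -- the Condorcet Jury Theorem.
for n in (1, 11, 101, 501):
    print(n, round(majority_accuracy(n, 0.6), 4))
```

The point of the sketch is only that collective accuracy rises with group size once individual accuracy clears one half, which is the mathematical core of the “smart democracy” claim.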

From Collective Intelligence to Collective Intelligence Systems


New paper by A. Kornrumpf and U. Baumol in the International Journal of Cooperative Information Systems: “Collective intelligence (CI) has become a popular research topic over the past few years. However, the CI debate suffers from several problems, such as the lack of a unanimously agreed-upon definition of CI that clearly differentiates between CI and related terms such as swarm intelligence (SI) and collective intelligence systems (CIS). Furthermore, a model of such CIS is lacking for purposes of research and the design of new CIS. This paper aims at untangling the definitions of CI and other related terms, especially CIS, and at providing a semi-structured model of CIS as a first step towards more structured research. The authors of this paper argue that CI can be defined as the ability of sufficiently large groups of individuals to create an emergent solution for a specific class of problems or tasks. The authors show that other alleged properties of CI that are not covered by this definition are, in fact, properties of CIS and can be understood by regarding CIS as complex socio-technical systems (STS) that enable the realization of CI. The model defined in this article serves as a means to structure open questions in CIS research and helps to understand which research methodology is adequate for different aspects of CIS.”

The role of task difficulty in the effectiveness of collective intelligence


New article by Christian Wagner: “The article presents a framework and empirical investigation to demonstrate the role of task difficulty in the effectiveness of collective intelligence. The research contends that collective intelligence, a form of community engagement to address problem solving tasks, can be superior to individual judgment and choice, but only when the addressed tasks are in a range of appropriate difficulty, which we label the “collective range”. Outside of that difficulty range, collectives will perform about as poorly as individuals for high difficulty tasks, or only marginally better than individuals for low difficulty tasks. An empirical investigation with subjects randomly recruited online supports our conjecture. Our findings qualify prior research on the strength of collective intelligence in general and offer preliminary insights into the mechanisms that enable individuals and collectives to arrive at good solutions. Within the framework of digital ecosystems, the paper argues that collective intelligence has more survival strength than individual intelligence, with the highest sustainability for tasks of medium difficulty.”
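Wagner's “collective range” can be illustrated with a simple majority-vote model: if task difficulty is (hypothetically) encoded as individual accuracy p, the collective's edge over a lone individual vanishes for very hard (p ≈ 0.5) and very easy (p ≈ 1.0) tasks and peaks in between. This toy model is an illustration of the idea, not the paper's experimental design:

```python
from math import comb

def majority_accuracy(n, p):
    """Accuracy of a simple majority vote among n independent voters
    (n odd), each correct with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Collective gain over a lone individual, by "difficulty" (p):
# hard tasks p ~ 0.5, easy tasks p ~ 1.0; the gain peaks in between.
n = 25
for p in (0.50, 0.55, 0.65, 0.80, 0.95, 1.00):
    print(f"p={p:.2f}  gain={majority_accuracy(n, p) - p:+.3f}")
```

At p = 0.5 the majority is no better than a coin flip, and at p = 1.0 an individual is already perfect, so in this sketch the collective's advantage is largest for medium-difficulty tasks, consistent with the article's conjecture.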

Smarter Than You Think: How Technology Is Changing Our Minds for the Better


New book by Clive Thompson: “It’s undeniable—technology is changing the way we think. But is it for the better? Amid a chorus of doomsayers, Clive Thompson delivers a resounding “yes.” The Internet age has produced a radical new style of human intelligence, worthy of both celebration and analysis. We learn more and retain it longer, write and think with global audiences, and even gain an ESP-like awareness of the world around us. Modern technology is making us smarter, better connected, and often deeper—both as individuals and as a society.
In Smarter Than You Think Thompson shows that every technological innovation—from the written word to the printing press to the telegraph—has provoked the very same anxieties that plague us today. We panic that life will never be the same, that our attentions are eroding, that culture is being trivialized. But as in the past, we adapt—learning to use the new and retaining what’s good of the old.”

Inside Noisebridge: San Francisco’s eclectic anarchist hackerspace


At Gigaom: “Since its formation in 2007, Noisebridge has grown from a few people meeting in coffee shops to an overflowing space on Mission Street where members can pursue projects that even the maddest scientist would approve of…. When Noisebridge opened the doors of its first hackerspace location in San Francisco’s Mission district in 2008, it had nothing but a large table and a few chairs found on the street.
Today, it looks like a mad scientist has been methodically hoarding tools, inventions, art, supplies and a little bit of everything else for five years. The 350 people who come through Noisebridge each week have a habit of leaving a mark, whether by donating a tool or building something that other visitors add to bit by bit. Anyone can be a paid member or a free user of the space, and over the years they have built it into a place where you can code, sew, hack hardware, cook, build robots, woodwork, learn, teach and more.
The members really are mad scientists. Anything left out in the communal spaces is fair game to “hack into a giant robot,” according to co-founder Mitch Altman. Members once took a broken down wheelchair and turned it into a brainwave-controlled robot named M.C. Hawking. Another person made pants with a built-in keyboard. The Spacebridge group has sent high altitude balloons to near space, where they captured gorgeous videos of the Earth. And once a month, the Vegan Hackers teach their pupils how to make classic fare like sushi and dumplings out of vegan ingredients….”

A collaborative way to get to the heart of 3D printing problems


PSFK: “Because most of us only see the finished product when it comes to 3D printing projects, it’s easy to forget that things can, and do, go wrong when it comes to this miracle technology.
3D printing is constantly evolving, reaching exciting new heights, and touching every industry you can think of – but all this progress has left a trail of mangled plastic and devastated machines in its wake.
The Art of 3D Print Failure is a Flickr group that aims to document this failure, because after all, mistakes are how we learn, and how we make sure the same thing doesn’t happen the next time around. It can also prevent mistakes from happening to those who are new to 3D printing, before they even make them!”

Manipulation Among the Arbiters of Collective Intelligence: How Wikipedia Administrators Mold Public Opinion


New paper by Sanmay Das, Allen Lavoie, and Malik Magdon-Ismail: “Our reliance on networked, collectively built information is a vulnerability when the quality or reliability of this information is poor. Wikipedia, one such collectively built information source, is often our first stop for information on all kinds of topics; its quality has stood up to many tests, and it prides itself on having a “Neutral Point of View”. Enforcement of neutrality is in the hands of comparatively few, powerful administrators. We find a surprisingly large number of editors who change their behavior and begin focusing more on a particular controversial topic once they are promoted to administrator status. The conscious and unconscious biases of these few, but powerful, administrators may be shaping the information on many of the most sensitive topics on Wikipedia; some may even be explicitly infiltrating the ranks of administrators in order to promote their own points of view. Neither prior history nor vote counts during an administrator’s election can identify those editors most likely to change their behavior in this suspicious manner. We find that an alternative measure, which gives more weight to influential voters, can successfully reject these suspicious candidates. This has important implications for how we harness collective intelligence: even if wisdom exists in a collective opinion (like a vote), that signal can be lost unless we carefully distinguish the true expert voter from the noisy or manipulative voter.”
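The abstract doesn't say how the influence-weighted measure is computed; purely as a sketch of the general idea (the voter names and influence scores below are invented), one might tally an administrator election like this:

```python
def weighted_tally(votes, influence):
    """Share of support in an election when each voter's ballot is
    weighted by an influence score (voters missing from `influence`
    get weight 1.0, i.e. an unweighted ballot)."""
    support = sum(influence.get(voter, 1.0)
                  for voter, choice in votes if choice == "support")
    total = sum(influence.get(voter, 1.0) for voter, _ in votes)
    return support / total if total else 0.0

# Hypothetical election: the raw count says 2/3 support, but weighting
# the ballots by influence drops the support share below one half.
votes = [("alice", "support"), ("bob", "support"), ("carol", "oppose")]
influence = {"alice": 0.9, "bob": 0.2, "carol": 1.5}
print(round(weighted_tally(votes, influence), 2))
```

The intuition matches the paper's claim: the same ballots can carry a different signal once the votes of historically reliable voters count for more than noisy or manipulative ones.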

Behold: A Digital Bill of Rights for the Internet, by the Internet


Mashable: “The digital rights conversation was thrust into the mainstream spotlight after news of ongoing, widespread mass surveillance programs leaked to the public. Digital rights have always been a hot topic, and these revelations sparked a strong online debate among the Internet community.
It also made us here at Mashable reflect on the digital freedoms and protections we feel each user should be guaranteed as a citizen of the Internet. To highlight some of the great conversations taking place about digital rights online, we asked the digital community to collaborate with us on the creation of a crowdsourced Digital Bill of Rights.
After six weeks of public discussions, document updates and changes, as well as incorporating input from digital rights experts, Mashable is pleased to unveil its first-ever Digital Bill of Rights, made for the Internet, by the Internet.”

Social Influence Bias: A Randomized Experiment


New paper in Science: “Our society is increasingly relying on the digitized, aggregated opinions of others to make decisions. We therefore designed and analyzed a large-scale randomized experiment on a social news aggregation Web site to investigate whether knowledge of such aggregates distorts decision-making. Prior ratings created significant bias in individual rating behavior, and positive and negative social influences created asymmetric herding effects. Whereas negative social influence inspired users to correct manipulated ratings, positive social influence increased the likelihood of positive ratings by 32% and created accumulating positive herding that increased final ratings by 25% on average. This positive herding was topic-dependent and affected by whether individuals were viewing the opinions of friends or enemies. A mixture of changing opinion and greater turnout under both manipulations together with a natural tendency to up-vote on the site combined to create the herding effects. Such findings will help interpret collective judgment accurately and avoid social influence bias in collective intelligence in the future.”
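The paper's experimental design isn't reproduced here, but the positive-herding dynamic it reports can be caricatured in a few lines: raters up-vote a little more often when the running score is already positive, so a single manipulated initial up-vote compounds. All parameter values below are invented for illustration:

```python
import random

def final_score(initial, n_raters=100, base_up=0.55, herd=0.10, seed=0):
    """Toy herding model (an illustration, not the paper's design):
    each successive rater up-votes with probability base_up, nudged
    upward by `herd` whenever the running score is positive, and
    down-votes otherwise."""
    rng = random.Random(seed)
    score = initial
    for _ in range(n_raters):
        p_up = base_up + (herd if score > 0 else 0.0)
        score += 1 if rng.random() < p_up else -1
    return score

# One manipulated initial up-vote vs. a neutral start, same rater stream:
print(final_score(1), final_score(0))
```

Because the up-vote probability depends on the current score, the manipulated item keeps its head start (and usually extends it) against the identical stream of raters, which is the asymmetric accumulation the experiment measured.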
See also: ‘Like’ This Article Online? Your Friends Will Probably Approve, Too, Scientists Say

A Videogame That Recruits Players to Map the Brain


Wired: “I’m no neuroscientist, and yet, here I am at my computer attempting to reconstruct a neural circuit of a mouse’s retina. It’s not quite as difficult and definitely not as boring as it sounds. In fact, it’s actually pretty fun, which is a good thing considering I’m playing a videogame.
Called EyeWire, the browser-based game asks players to map the connections between retinal neurons by coloring in 3-D slices of the brain. Much like any other game out there, being good at EyeWire earns you points, but the difference is that the data you produce during gameplay doesn’t just get you on a leader board—it’s actually used by scientists to build a better picture of the human brain.
Created by neuroscientist Sebastian Seung’s lab at MIT, EyeWire basically gamifies the professional research Seung and his collaborators do on a daily basis. Seung is studying the connectome, the hyper-complex tangle of connections among neurons in the brain.”