Abstract of new paper by Jeffrey V. Nickerson on Human-Based Evolutionary Computing (in Handbook of Human Computation, P. Michelucci, ed., Springer, Forthcoming): “Evolution explains the way the natural world changes over time. It can also explain changes in the artificial world, such as the way ideas replicate, alter, and merge. This analogy has led to a family of related computer procedures called evolutionary algorithms. These algorithms are being used to produce product designs, art, and solutions to mathematical problems. While for the most part these algorithms are run on computers, they also can be performed by people. Such human-based evolutionary algorithms are useful when many different ideas, designs, or solutions need to be generated, and human cognition is called for.”
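The replicate/alter/merge cycle the abstract describes maps directly onto the three classic operators of an evolutionary algorithm: selection, mutation, and crossover. As a minimal illustration (not taken from the paper, and using the textbook OneMax toy problem rather than any of its examples), one generation of such an algorithm can be sketched as:

```python
import random

def evolve(pop_size=30, genome_len=20, generations=100, mutation_rate=0.05, seed=42):
    """Maximize the number of 1-bits in a bit-string genome (the OneMax toy problem)."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)
    # Start from a random population of candidate "ideas"
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Replicate: the fitter half of the population survives into the next generation
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]  # Merge: one-point crossover of two parents
            # Alter: each bit flips with a small mutation probability
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), "of", len(best), "bits set")
```

In a human-based variant, the fitness function and the alter/merge steps would be performed by people (e.g. rating and recombining designs) rather than computed; the control loop stays the same.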
National Academies of Sciences: “Over the course of several decades, copyright protection has been expanded and extended through legislative changes occasioned by national and international developments. The content and technology industries affected by copyright and its exceptions, and in some cases balancing the two, have become increasingly important as sources of economic growth, relatively high-paying jobs, and exports. Since the expansion of digital technology in the mid-1990s, they have undergone a technological revolution that has disrupted long-established modes of creating, distributing, and using works ranging from literature and news to film and music to scientific publications and computer software.
In the United States and internationally, these disruptive changes have given rise to a strident debate over copyright’s proper scope and terms and means of its enforcement–a debate between those who believe the digital revolution is progressively undermining the copyright protection essential to encourage the funding, creation, and distribution of new works and those who believe that enhancements to copyright are inhibiting technological innovation and free expression.
Copyright in the Digital Era: Building Evidence for Policy examines a range of questions regarding copyright policy by using a variety of methods, such as case studies, international and sectoral comparisons, and experiments and surveys. This report is especially critical in light of digital age developments that may, for example, change the incentive calculus for various actors in the copyright system, impact the costs of voluntary copyright transactions, pose new enforcement challenges, and change the optimal balance between copyright protection and exceptions.”
The Guardian: “Since 2010 David Cameron’s pet project has been tasked with finding ways to improve society’s behaviour – and now the ‘nudge unit’ is going into business by itself. But have its initiatives really worked?…
The idea behind the unit is simpler than you might believe. People don’t always act in their own interests – by filing their taxes late, for instance, overeating, or not paying fines until the bailiffs call. As a result, they don’t just harm themselves, they cost the state a lot of money. By looking closely at how they make their choices and then testing small changes in the way the choices are presented, the unit tries to nudge people into leading better lives, and save the rest of us a fortune. It is politics done like science, effectively – with Ben Goldacre’s approval – and, in many cases, it appears to work….”
See also: Jobseekers’ psychometric test ‘is a failure’ (US institute that devised questionnaire tells ‘nudge’ unit to stop using it as it failed to be scientifically validated)
Over the last few years, we have seen a variety of experiments with new ways to engage citizens in the decision-making process, especially at the local or community level. Little is known, however, about what works and why. The National League of Cities, working with the John S. and James L. Knight Foundation, released a report today reviewing the impact of experimentation within 14 communities in the US, highlighting several “bright spots”. The so-called scans focus on four aspects of community engagement:
- The use of new tools and strategies
- The ability to reach a broad spectrum of people, including those not typically “engaged”
- Notable successes and outcomes
- Sustainable efforts to use a range of strategies
A slide-deck summarizing the findings of the report:
Paper by NetLab (University of Toronto) scholars in the latest issue of the Journal of Computer-Mediated Communication: “We review the evidence from a number of surveys in which our NetLab has been involved about the extent to which the Internet is transforming or enhancing community. The studies show that the Internet is used for connectivity locally as well as globally, although the nature of its use varies in different countries. Internet use is adding on to other forms of communication, rather than replacing them. Internet use is reinforcing the pre-existing turn to societies in the developed world that are organized around networked individualism rather than group or local solidarities. The result has important implications for civic involvement.”
In earlier posts we have reviewed Cass Sunstein’s latest book on the need for government to simplify processes so as to be more effective and participatory. David Lieb, co-founder and CEO of Bump, recently expanded upon this call for simplicity in a blog post at TechCrunch, arguing that anyone trying to engage with the public “should first and foremost minimize the Cognitive Overhead of their products, even though it often comes at the cost of simplicity in other areas.”
When explaining what Cognitive Overhead means, David Lieb uses the definition coined by Chicago-based web designer and engineer David Demaree:
Cognitive Overhead — “how many logical connections or jumps your brain has to make in order to understand or contextualize the thing you’re looking at.”
David Lieb: “Minimizing cognitive overhead is imperative when designing for the mass market. Why? Because most people haven’t developed the pattern matching machinery in their brains to quickly convert what they see in your product (app design, messaging, what they heard from friends, etc.) into meaning and purpose.”
In many ways, the concept resonates with the so-called “Cognitive Load Theory” (CLT) which taps into educational psychology and has been used widely for the design of multimedia and other learning materials (to prevent over-load). CLT focuses on the best conditions that are aligned with human cognitive architecture (where short term memory is limited in the number of elements it can contain simultaneously). John Sweller, the founder of CLT, and others have therefore focused on the role of acquiring schemata (mind maps) to learn.
So how can we provide for cognitive simplicity? According to Lieb:
- “Put the user in the middle of your flow. Make them press an extra button, make them provide some inputs, let them be part of the service-providing, rather than a bystander to it.”;
- Give the user real-time feedback;
- “Slow down provisioning. Studies have shown that intentionally slowing down results on travel search websites can actually increase perceived user value — people realize and appreciate that the service is doing a lot of work searching all the different travel options on their behalf.”
It seems imperative that anyone who wants to engage with the public (to tap into the “cognitive surplus” (Clay Shirky) of the crowd) must focus – when for instance defining the problem that needs to be solved – on the cognitive overhead of their engagement platform and message.
“The 4-24 Project is dedicated to rekindling the provocative power of asking the right questions in adults so they can pass this crucial creativity skill on to the next generation. By setting aside 4 minutes every 24 hours (or one full day each year) we, as adults, can become better at building the right questions that will unlock today’s vexing challenges. Our strengthened questioning capacity will hopefully help us cultivate and sharpen the curiosity of the world’s 1.85 billion children as they prepare for a lifetime of significant service.”