Data Fiduciary


ˈdeɪtə fəˈduʃiˌɛri

A person or business that manages individuals’ data in a trustworthy manner. Also ‘information fiduciary’, ‘data trust’, or ‘data steward’.

‘Fiduciary’ is an old concept in the legal world. Its Latin origin is fidere, which means to trust. In the legal context, a fiduciary is usually a person who is trusted to make decisions about how to manage an asset or information, within constraints set by the person who owns that asset or information. Examples of fiduciary relationships include homeowner and property manager, patient and doctor, and client and attorney; in each case, the latter can make decisions about the entrusted asset within the conditions agreed to by the former.

Jack M. Balkin and Jonathan Zittrain made the case for the “information fiduciary,” pointing out the urgency of adopting fiduciary practices in the data space. In The Atlantic, they wrote:

“The information age has created new kinds of entities that have many of the trappings of fiduciaries—huge online businesses, like Facebook, Google, and Uber, that collect, analyze, and use our personal information—sometimes in our interests and sometimes not. Like older fiduciaries, these businesses have become virtually indispensable. Like older fiduciaries, these companies collect a lot of personal information that could be used to our detriment. And like older fiduciaries, these businesses enjoy a much greater ability to monitor our activities than we have to monitor theirs. As a result, many people who need these services often shrug their shoulders and decide to trust them. But the important question is whether these businesses, like older fiduciaries, have legal obligations to be trustworthy. The answer is that they should.”

The recent controversy involving Facebook data and Cambridge Analytica provides another reason why companies collecting data from users need to act as fiduciaries. Within this framework, individuals would have a say over how and where their data can be used.

Another call for a form of data fiduciary comes from Google’s Sidewalk Labs project in Canada. After collecting data to inform urban planning in the Quayside area of Toronto, Sidewalk Labs announced that it would not claim ownership over the data it collected and that the data should be “under the control of an independent Civic Data Trust.”

In a blog post, Sidewalk Labs wrote:

“Sidewalk Labs believes an independent Civic Data Trust should become the steward of urban data collected in the physical environment. This Trust would approve and control the collection of, and manage access to, urban data originating in Quayside. The Civic Data Trust would be guided by a charter ensuring that urban data is collected and used in a way that is beneficial to the community, protects privacy, and spurs innovation and investment.”

Realizing the potential of creating new public value through an exchange of data, or data collaboratives, the GovLab “is advancing the concept and practice of Data Stewardship to promote responsible data leadership that can address the challenges of the 21st century.” A Data Steward mirrors some of the responsibilities of a data fiduciary, in that he or she is “responsible for determining what, when, how and with whom to share private data for public good.”

Balkin and Zittrain point to an asymmetry of power between companies that collect user-generated data and the users themselves: these companies are becoming indispensable while gaining ever more control over individuals’ data. Yet they are currently under no legal obligation to be trustworthy, meaning there are no legal consequences when they use this data in ways that breach privacy or run against the best interests of their customers.

Under a data fiduciary framework, those entrusted with data carry legal rights and responsibilities regarding the use of that data. Where a breach of trust occurs, the trustee faces legal consequences.


Commonism


ˈkɑmənɪz(ə)m

“a new radical, practice-based ideology […] based on the values of sharing, common (intellectual) ownership and new social co-operations.”

Distinct from “communism”, though perhaps with an interesting echo of it, the term “Commonism” was first coined by Tom DeWeese, president of the American Policy Center, and more recently redefined in the book “Commonism: A New Aesthetics of the Real”, edited by Nico Dockx and Pascal Gielen.

According to their introduction:

“After half a century of neoliberalism, a new radical, practice-based ideology is making its way from the margins: commonism, with an o in the middle. It is based on the values of sharing, common (intellectual) ownership and new social co-operations. Commoners assert that social relationships can replace money (contract) relationships. They advocate solidarity and they trust in peer-to-peer relationships to develop new ways of production.

“Commonism maps those new ideological thoughts. How do they work and, especially, what is their aesthetics? How do they shape the reality of our living together? Is there another, more just future imaginable through the commons? What strategies and what aesthetics do commoners adopt? This book explores this new political belief system, alternating between theoretical analysis, wild artistic speculation, inspiring art examples, almost empirical observations and critical reflection.”

In an interview excerpted from the book, co-editor Gielen, Vrije Universiteit Brussel professor Sonja Lavaert, and the philosopher Antonio Negri discuss how commonism can transcend the ideological spectrum. Commoners, regardless of political leanings, collaborate to “[re-appropriate] that of which they were robbed by capital.” Examples put forward in the interview: “liberal politicians write books about the importance of the basic income; neonationalism presents itself as a longing for social cohesion; religiously inspired political parties emphasize communion and the community, et cetera.”

In another piece, Louis Volont and Walter van Andel, both of the Culture Commons Quest Office, find an application of commonism in blockchain. They argue that blockchain’s attributes can address the three elements of the tragedy of the commons: “overuse, (absence of) communication, and scale”. Further, its decentralized nature enables a “common” creation of value.

The authors caution, however, against a potential tragedy of the blockchain itself:

“But what would happen when that one thing that makes the world go around – money (be it virtual, be it actual) – enters the picture? One does not need to look far: many cryptocurrencies, Bitcoin among them, are facilitated by blockchain technology. Even though it is ‘horizontally organized’, ‘decentralized’ or ‘functioning beyond the market and the state’, the blockchain-facilitated experiment of virtual money relates to nothing more than exchange value. Indeed, the core question one should ask when speculating on the potentialities of the blockchain experiment, is whether it is put to use for exchange value on the one hand, or for use value on the other. The latter, still, is where the commons begin. The former (that is, the imperatives of capital and its incessant drive for accumulation through trade), is where the blockchain mutates from a solution to a tragedy, to a comedy in itself.”

Mechanistic Evidence


There has been mounting pressure on policymakers to adopt and expand the concept of evidence-based policy making (EBP).

In 2017, the U.S. Commission on Evidence-Based Policymaking issued a report calling for a future in which “rigorous evidence is created efficiently, as a routine part of government operations, and used to construct effective public policy.” The report asserts that modern technology and statistical methods, “combined with transparency and a strong legal framework, create the opportunity to use data for evidence building in ways that were not possible in the past.”

Similarly, the European Commission’s 2015 report on Strengthening Evidence Based Policy Making through Scientific Advice states that policymaking “requires robust evidence, impact assessment and adequate monitoring and evaluation,” emphasizing the notion that “sound scientific evidence is a key element of the policy-making process, and therefore science advice should be embedded at all levels of the European policymaking process.” That same year, the Commission’s Data4Policy program launched a call for contributions to support its research:

“If policy-making is ‘whatever government chooses to do or not to do’ (Th. Dye), then how do governments actually decide? Evidence-based policy-making is not a new answer to this question, but it is constantly challenging both policy-makers and scientists to sharpen their thinking, their tools and their responsiveness.”

Yet, while the importance and value of EBP are well established, the question of how to establish evidence is often answered by referring to randomized controlled trials (RCTs), cohort studies, or case reports. According to Caterina Marchionni and Samuli Reijula, these answers overlook the important concept of mechanistic evidence.

Their paper takes a deeper dive into the differences between statistical and mechanistic evidence:

“It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so.”

The paper further argues that in order to make effective decisions, policymakers must take both statistical and mechanistic evidence into account:

“… whereas statistical studies provide evidence that the policy variable, X, makes a difference to the policy outcome, Y, mechanistic evidence gives information about either the existence or the nature of a causal mechanism connecting the two; in other words, about the entities and activities mediating the XY relationship. Both types of evidence, it is argued, are required to establish causal claims, to design and interpret statistical trials, and to extrapolate experimental findings.”
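To make the distinction concrete, the following toy simulation (invented effect sizes and variable names, not drawn from the paper) generates a policy X that affects an outcome Y only through a mediator M. The raw X–Y difference is the statistical evidence that the policy “works”; checking that the effect runs through M, for instance by holding M fixed in a regression, is a simple form of mechanistic inquiry:

```python
# Toy simulation (invented numbers, not from the paper): policy X
# affects outcome Y only through the mediating mechanism M.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

x = rng.binomial(1, 0.5, n)          # policy applied (1) or not (0)
m = 2.0 * x + rng.normal(0, 1, n)    # mechanism: X raises mediator M
y = 1.5 * m + rng.normal(0, 1, n)    # outcome driven by M, not X directly

# Statistical evidence: X makes a difference to Y.
print("X-Y effect:", y[x == 1].mean() - y[x == 0].mean())   # ~3.0

# Mechanistic question: does the effect run through M?
# Regressing Y on X and M together, the direct X coefficient vanishes.
design = np.column_stack([np.ones(n), x, m])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print("X given M:", round(beta[1], 2), "| M given X:", round(beta[2], 2))
```

Extrapolating such a policy to a new population then hinges on whether the X-to-M-to-Y mechanism, not merely the X–Y correlation, carries over.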

Ultimately, Marchionni and Reijula take a closer look at why introducing research methods that go beyond RCTs is crucial for evidence-based policymaking:

“The evidence-based policy (EBP) movement urges policymakers to select policies on the basis of the best available evidence that they work. EBP utilizes evidence-ranking schemes to evaluate the quality of evidence in support of a given policy, which typically prioritize meta-analyses and randomized controlled trials (henceforth RCTs) over other evidence-generating methods.”

They go on to explain that mechanistic evidence has been placed “at the bottom of the evidence hierarchies,” while RCTs have been considered the “gold standard.”

Evidence Hierarchy — American Journal of Clinical Nutrition

However, the paper argues, mechanistic evidence is in fact as important as statistical evidence:

“… evidence-based policy nearly always involves predictions about the effectiveness of an intervention in populations other than those in which it has been tested. Such extrapolative inferences, it is argued, cannot be based exclusively on the statistical evidence produced by methods higher up in the hierarchies.”


Social Physics


Merriam-Webster: “Social Physics: the quantitative study of human society; social statistics”

When the US government announced in 2012 that it would invest $200 million in research grants and infrastructure building for big data, Farnam Jahanian, chief of the National Science Foundation’s Computer and Information Science and Engineering Directorate, stated that big data “has the power to change scientific research from a hypothesis-driven field to one that’s data-driven”. Using big data to provide more evidence-based ways of understanding human behavior is the mission of Alex (Sandy) Pentland, director of MIT’s Human Dynamics Laboratory. Pentland’s latest book illustrates the potential of what he describes as “Social Physics”.

The term was initially developed by Adolphe Jacques Quetelet, the Belgian sociologist and mathematician who introduced statistical methods to the social sciences. Quetelet expanded his views into a social physics in his book “Sur l’homme et le développement de ses facultés, ou Essai de physique sociale”. Auguste Comte, who coined the term “sociology”, adopted “social physics” (in the Social Physics volume of his Positive Philosophy) when he defined sociology as a study just as important as biology and chemistry.

According to Sandy Pentland, Social Physics is about idea flow: the way human social networks spread ideas and transform those ideas into behaviors. His book consequently “extends economic and political thinking by including not only competitive forces but also exchanges of ideas, information, social pressure, and social status in order to more fully explain human behavior… Only once we understand how social interactions work together with competitive forces can we hope to ensure stability and fairness in our hyperconnected, networked society.”
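To give a rough computational flavor of “idea flow”, here is a toy diffusion simulation (an invented sketch, not Pentland’s actual model or data): people sit in a random contact network, and each person’s chance of adopting an idea grows with the share of adopters among their contacts.

```python
# Toy "idea flow" sketch (illustrative only, not Pentland's model):
# an idea spreads over a random contact network through social exposure.
import random

random.seed(42)
N, K, STEPS = 200, 6, 20          # people, contacts per person, rounds

contacts = {i: random.sample([j for j in range(N) if j != i], K)
            for i in range(N)}
adopted = {i: i < 5 for i in range(N)}   # five initial adopters

for t in range(1, STEPS + 1):
    snapshot = dict(adopted)
    for i in range(N):
        if not snapshot[i]:
            exposure = sum(snapshot[j] for j in contacts[i]) / K
            if random.random() < 0.5 * exposure:   # peers drive adoption
                adopted[i] = True
    if t % 5 == 0:
        print(f"after round {t}: {sum(adopted.values())} adopters")
```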

The launch of the book is accompanied by a website that connects several scholars and explains the term further: “How can we create organizations and governments that are cooperative, productive, and creative? These are the questions of social physics, and they are especially important right now, because of global competition, environmental challenges, and government failure. The engine that drives social physics is big data: the newly ubiquitous digital data that is becoming available about all aspects of human life. By using these data to build a predictive, computational theory of human behavior we can hope to engineer better social systems.”

Also check out the video below:

https://web.archive.org/web/2000/https://youtu.be/yv5bxqQG5xI

Socialstructing


Marina Gorbis, executive director of the Institute for the Future (IFTF), released a book entitled The Nature of the Future: Dispatches from the Socialstructed World. According to the IFTF website, the book “offers an inspiring portrayal of how new technologies are giving individuals so much power to connect and share resources that networks of individuals—not big organizations—will solve a host of problems by reinventing business, education, medicine, banking, government, and scientific research.” In her review in the New York Journal of Books, Geri Spieler argues that, when focusing on the book’s central premise, Gorbis “breaks through to the reader as to what is important here: the future of a citizen-created world.”

In many ways, the book joins the growing literature on swarms, wikinomics, and commons-based and peer-to-peer production methods enabled by advances in technology:

“Empowered by computing and communication technologies that have been steadily building village-like networks on a global scale, we are infusing more and more of our economic transactions with social connectedness….The new technologies are inherently social and personal. They help us create communities around interests, identities, and common personal challenges. They allow us to gain direct access to a worldwide community of others. And they take anonymity out of our economic transactions.”

Marina Gorbis subsequently describes the impact of these technologies on how we operate as “socialstructing”:

“We are moving away from the dominance of the depersonalized world of institutional production and creating a new economy around social connections and social rewards—a process I call socialstructing. … Not only is this new social economy bringing with it an unprecedented level of familiarity and connectedness to both our global and our local economic exchanges, but it is also changing every domain of our lives, from finance to education and health. It is rapidly ushering in a vast array of new opportunities for us to pursue our passions, create new types of businesses and charitable organizations, redefine the nature of work, and address a wide range of problems that the prevailing formal economy has neglected, if not caused.

Socialstructing is in fact enabling not only a new kind of global economy but a new kind of society, in which amplified individuals—individuals empowered with technologies and the collective intelligence of others in their social network—can take on many functions that previously only large organizations could perform, often more efficiently, at lower cost or no cost at all, and with much greater ease.”

Following a brief introduction describing the social and technical drivers behind socialstructing, the book describes its manifestation in finance, education, governance, science, and health. In the chapter “governance beyond government”, the author advocates the creation of a revised “agora” modeled on the ancient Greek concept of participatory democracy. Of particular interest, the chapter describes and explains the legitimacy deficit of present-day political institutions and governmental structures:

“Political institutions are shaped by the social realities of their time and reflect the prevailing technological infrastructure, levels of knowledge, and citizen values. In ancient Athens, a small democratic state, it was possible to gather most citizens in an assembly or on a hill to practice a direct form of democracy, but in a country with millions of people this is nearly impossible. The US Constitution and governance structure emerged in the eighteenth century and were products of a Newtonian view of the universe….But while this framework of government  and society as machines worked reasonably well for several centuries, it is increasingly out of sync with today’s reality and level of knowledge.”

Building upon the deliberative polling process developed by Professor James Fishkin, director of the Center for Deliberative Democracy at Stanford University, the author proposes and develops four key elements behind so-called socialstructed governance.

The chapter provides an interesting introduction to the kinds of new governance arrangements made feasible by increased computing power and the use of collaborative platforms. As with most literature on the subject, however, little attention is paid to evidence on whether these new platforms contribute to more legitimate and effective outcomes – a necessary next step to move away from “faith-based” discussions toward more evidence-based interventions.

Slacktivism


Research featured in the New Scientist focuses on the impact of so-called “slacktivism”, or “low-cost, low-risk online activism”, on subsequent civic action. A detailed analysis of slacktivism was developed by Henrik Serup Christensen in his 2011 paper in First Monday, where he defined the concept and its origin as follows:

“Slacktivism has become somewhat of a buzzword when it comes to demeaning the electronic versions of political participation. The origins of the term slacktivism is debated, but Fred Clark takes credit for using the term in 1995 in a seminar series held together with Dwight Ozard. However, they used it to shorten slacker activism, which refer to bottom up activities by young people to affect society on a small personal scale. In their usage, the term had a positive connotation.

Today, the term is used in a more negative sense to belittle activities that do not express a full–blown political commitment. The concept generally refer to activities that are easily performed, but they are considered more effective in making the participants feel good about themselves than to achieve the stated political goals. Slacktivism can take other expressions, such as wearing political messages in various forms on your body or vehicle, joining Facebook groups, or taking part in short–term boycotts such as Buy Nothing Day or Earth Hour.”

The research featured in the New Scientist comprises work by Yu-Hao Lee and Gary Hsieh, both from Michigan State University, who analyzed the effects of slacktivism following (in the New Scientist’s description) “the Colorado cinema shootings in 2012, which had prompted wide debate over access to firearms. Hsieh’s team recruited 759 US participants from Amazon’s Mechanical Turk crowdsourcing marketplace and surveyed them for their position on gun control. They asked people if they would sign an e-petition to either ban assault rifles or expand access to guns. Some of the participants then had the opportunity to donate to a group that was pro or against gun control. Another group, including people from both sides of the gun debate, were asked to donate to an education charity.” Findings:

“We found that participants who signed the online petition were significantly more likely to donate money to a related charity, demonstrating a consistency effect. We also found that participants who did not sign the petition donated significantly more money to an unrelated charity, demonstrating a moral balancing effect. The results suggest that exposure to an online activism influences individual decision on subsequent civic actions.”
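For illustration, the consistency effect boils down to comparing donation rates between petition signers and non-signers. The sketch below uses invented numbers and a simple two-proportion z-test; the study’s actual data and analysis may differ:

```python
# Hypothetical re-creation of the consistency-effect comparison
# (all numbers invented; the study's real data and tests may differ).
import math
from statistics import mean

signed   = [1] * 120 + [0] * 180   # 1 = donated to a related charity
declined = [1] * 40  + [0] * 260

rate_s, rate_d = mean(signed), mean(declined)
print(f"donation rate, signers:     {rate_s:.0%}")
print(f"donation rate, non-signers: {rate_d:.0%}")

# Two-proportion z-test: is the gap larger than chance would suggest?
n1, n2 = len(signed), len(declined)
pooled = (sum(signed) + sum(declined)) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
print(f"z = {(rate_s - rate_d) / se:.2f}")   # large |z| => significant
```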

These two psychological effects provide additional insight into whether slacktivism damages real citizen engagement by potentially replacing meaningful action – as suggested in the UNICEF video below, part of a series titled “Likes Don’t Save Lives”:

https://web.archive.org/web/2000/https://youtu.be/QcSZsjlqs4E

Working Anarchy / Peer Mutualism


Since the 1990s, Yochai Benkler, the Berkman Professor of Entrepreneurial Legal Studies at Harvard Law School, has been instrumental in documenting (and advocating for) the economic and societal value of an information commons and decentralized ways of collaboration, especially as they apply to innovation. Both his books “The Penguin and the Leviathan: How Cooperation Triumphs over Self-Interest” (Crown, 2011) and “The Wealth of Networks: How Social Production Transforms Markets and Freedom” (Yale University Press, 2006) are required reading for anyone interested in social networks, open innovation and participatory democracy.

Politics & Society carries a new paper by Prof. Benkler entitled “Practical Anarchism : Peer Mutualism, Market Power, and the Fallible State”. The paper considers “several working anarchies in the networked environment, and whether they offer a model for improving on the persistent imperfections of markets and states”. In particular, Prof. Benkler tries to capture and analyze our growing experience with what he calls

“peer mutualism: voluntaristic cooperation that does not depend on exclusive proprietary control or command relations as among the cooperators, and in many instances not even as common defense for the cooperators against nonparticipants.”

Later in the paper he describes “working anarchy” or “peer mutualism” as:

“voluntaristic associations that do not depend on direct or delegated power from the state, and in particular do not depend on delegated legitimate force that takes a proprietary form and is backed by shared social understandings of how one respects or complies with another’s proprietary claim.”

According to Benkler – adopting a “utopian” position – these working anarchies have the potential to produce four effects:

  • “First, they offer their participants a chunk of life lived in effective, voluntary cooperation with others.

  • Second, they can provide for everyone a degree of freedom in a system otherwise occupied by state- and property-based capabilities; they do not normally displace these other systems, but they do offer a dimension along which, at least for that capability and its dependencies, we are not fully subject to power transmitted through either direct state control or the property system.

  • Third, they provide a context for the development of virtue; or the development of a cooperative human practice, for ourselves and with each other.

  • And fourth, they provide a new way of imagining who we are, and who we can be; a cluster of practices that allow us to experience and observe ourselves as cooperative beings, capable of mutual aid, friendship, and generosity, rather than as the utility-seeking, self-interested creatures that have occupied so much of our imagination from Hobbes to the neoclassical models whose cramped vision governs so much of our lives.”

The central purpose of the paper is to examine the above value proposition behind peer mutualism, using two key questions:

  • “First, there is the internal question of whether these models can sustain their nonhierarchical, noncoercive model once they grow and mature, or whether power relations generally, and in particular whether systematically institutionalized power: hierarchy, property, or both, reemerges in these associations.

  • The second question is whether those practices we do see provide a pathway for substantial expansion of the domains of life that can be lived in voluntaristic association, rather than within the strictures of state and hierarchical systems. In other words, do mutualistic associations offer enough of a solution space, to provisioning a sufficient range of the capabilities we require for human flourishing, to provide a meaningful alternative model to the state and the market across a significant range of human needs and activities?”

The paper subsequently reviews various “working anarchies” – ranging from the so-called paradigm cases involving IETF, FOSS and Wikipedia to more recent cases of peer mutualism involving, for instance, Kickstarter, Kiva, Ushahidi, Open Data and Wikileaks. The emerging insight from the comparative selection is that all the examples examined:

“are perfect on neither dimension. Internally, hierarchy and power reappear, to some extent and in some projects, although they are quite different than the hierarchy of government or corporate organization. Externally, there are some spectacular successes, some failures to thrive, and many ambiguous successes. In all, present experience supports neither triumphalism nor defeatism in the utopian project. Peer models do work, and they do provide a degree of freedom in the capabilities they provide. But there is no inexorable path to greater freedom through voluntary open collaboration. There is a good deal of uncertainty and muddling through.”

Despite the uncertainties and imperfections, Prof. Benkler advocates…

“to continue to build more of the spectacular or moderate successes, and to try to colonize as much of our world as possible with the mutualistic modality of social organization. It doesn’t have to be perfect; it merely needs to offer a new dimension or sufficient diversity in how it instantiates capabilities and transmits power to offer us, who inhabit the systems that these peer systems perturb, a degree of freedom.”

Cognitive Democracy


NYU held a LaPietra Dialogue on “Social Media and Political Participation” (#SMaPP_LPD). The purpose of the dialogue:

“We are only beginning to scratch the surface of developing theories linking social media usage to political participation and actually beginning to test causal relationships. At the same time, the data being generated by users of social media represents a completely unprecedented source of data recording how hundreds of millions of people around the globe interact with politics, the likes of which social scientists have never, ever seen; it is not too much of a stretch to say we are at a similar place to the field of biology just as the human genome was first being decoded. Thus the challenges are enormous, but the opportunities – and importance of the task – are just as important….The conference will serve to introduce cutting edge work being conducted in a field that barely existed five years ago to the public and students, to introduce the scholars participating in the conference to each other’s work, and also to play a role in building connections among the scholarly community working in this field.”

Among the presenters was Henry Farrell from George Washington University, who drafted a paper with Cosma Shalizi on Cognitive Democracy and the Internet (an earlier version appeared on the Crooked Timber blog).

In essence, the paper focuses on which social institutions (hierarchies, markets or democracies) are better positioned to solve complex problems (resonating with GovLab Research’s mapping of the contemporary problems that drive government innovation).

“We start instead with a pragmatist question whether these institutions are useful in helping us solve difficult social problems. Some political problems are simple: the solutions might not be easy to put into practice, but the problems are easy to analyze. But the most vexing problems are usually ones without any very obvious solutions. How do we change legal rules and social norms in order to mitigate the problems of global warming? How do we regulate financial markets so as to minimize the risk of new crises emerging, and limit the harm of those that happen? How do we best encourage the spread of human rights internationally?

These problems all share two important features. First, they are social. That is, they are problems which involve the interaction of many human beings, with different interests, desires, needs and perspectives. Second, they are complex problems, in the sense that scholars of complexity understand the term. To borrow the definition of Page (2011, p. 25), they involve diverse entities that interact in a network or contact structure (italics in the original).”

They subsequently critique the capacity of hierarchies and markets to address these “social problems”. Of particular interest is their assessment of the current “nudge” theories:

“Libertarian paternalism is flawed, not because it restricts peoples’ choices, but because it makes heroic assumptions about choice architects’ ability to figure out what the choices should be, and blocks the architects’ channels for learning better. Libertarian paternalism may still have value where people likely do want, e.g., to save more or take more exercise, but face commitment problems, or when other actors have an incentive to misinform these people or to structure their choices in perverse ways in the absence of a “good” default. However, it will be far less useful, or even actively pernicious, in complex situations, where many actors with different interests make interdependent choices”

The bulk of the paper focuses on the value and potential of democracy to solve problems (where diversity has a high premium). With regard to the current state of our democratic institutions, the paper observes that

“We have no reason to think that actually-existing democratic structures are as good as they could be, or even close. If nothing else, designing institutions is, itself, a highly complex problem, where even the most able decision-makers have little ability to foresee the consequences of their actions. Even when an institution works well at one time, it does so in a context of other institutions and social and physical conditions, which are all constantly changing. Institutional design and reform, then, is always a matter of more or less ambitious “piecemeal social experiments”, to use the phrase of Popper…As emphasized by Popper, and independently by Knight and Johnson, one of the strengths of democracy is its ability to make, monitor, and learn from such experiments”.

Taking into account current advances in technology, Farrell and Shalizi state:

“One of the great aspects of the current moment, for cognitive democracy, is that it has become (comparatively) very cheap and easy for such experiments to be made online, so that this design space can be explored”

They subsequently conclude by emphasizing the need for “cognitive democracy”:

“Democracy, we have argued, has a capacity unmatched among other macro-institutions to actually experiment, and to make use of cognitive diversity in solving complex problems. To realize these potentials, democratic structures must themselves be shaped so that social interaction and cognitive function reinforce each other. But the cleverest institutional design in the world will not help unless the resources – material, social, cultural – needed for participation are actually broadly shared. This is not, or not just, about being nice or equitable; cognitive diversity is not something we can afford to waste.”

Churnalism


The Sunlight Foundation and the Media Standards Trust yesterday launched Churnalism US, “a new web tool and browser extension that allows anyone to compare the news you read against existing content to uncover possible instances of plagiarism” (churned from their blog post).

The new tool is inspired by the UK site Churnalism.com (a project of the Media Standards Trust). According to the FAQ of Churnalism.com:

“‘Churnalism’ is a news article that is published as journalism, but is essentially a press release without much added. In his landmark book, Flat Earth News, Nick Davies wrote how ‘churnalism’ is produced by:

“Journalists who are no longer gathering news but are reduced instead to passive processors of whatever material comes their way, churning out stories, whether real event or PR artifice, important or trivial, true or false” (p.59). According to the Cardiff University research that informed Davies’ book, 54% of news articles have some form of PR in them. The word ‘churnalism’ has been attributed to BBC journalist Waseem Zakir.

Of course not all churnalism is bad. Some press releases are clearly in the public interest (medical breakthroughs, government announcements, school closures and so on). But even in these cases, it is better that people should know what press release the article is based on than for the source of the article to remain hidden.”

In a detailed blog post, Drew Vogel, a developer on Churnalism US, explains the ‘nuts and bolts’ behind the site, which is fueled by a full-text search database named SuperFastMatch.
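The core mechanic behind such a tool is finding long shared word sequences between an article and a corpus of press releases. Below is a minimal, illustrative sketch of that idea using word shingles; it is not how SuperFastMatch itself is implemented, and the example texts and function names are invented:

```python
# Minimal shingle-overlap sketch of churn detection (illustrative only;
# SuperFastMatch uses its own, far more scalable matching scheme).
import re

def shingles(text: str, k: int = 5) -> set:
    """All k-word windows of a text, lowercased, punctuation stripped."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def churn_score(article: str, release: str, k: int = 5) -> float:
    """Fraction of the article's k-word shingles found in the release."""
    a = shingles(article, k)
    return len(a & shingles(release, k)) / len(a) if a else 0.0

release = ("Acme Corp today announced a breakthrough widget "
           "that will transform the industry")
article = ("In business news, Acme Corp today announced a breakthrough "
           "widget that will transform the industry, analysts said")
print(f"{churn_score(article, release):.0%} of the article matches the release")
```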

Kaitlin Devine, another developer on Churnalism, provides a two-minute tutorial on how Churnalism US works:

https://web.archive.org/web/2000/https://youtu.be/6fvADRst_YM

Cognitive Overhead


In earlier posts we have reviewed Cass Sunstein’s latest book on the need for government to simplify processes so as to be more effective and participatory. David Lieb, co-founder and CEO of Bump, recently expanded upon this call for simplicity in a blog post at TechCrunch, arguing that anyone trying to engage with the public “should first and foremost minimize the Cognitive Overhead of their products, even though it often comes at the cost of simplicity in other areas”.

When explaining what Cognitive Overhead means, David Lieb uses the definition coined by David Demaree, a web designer and engineer in Chicago:

Cognitive Overhead — “how many logical connections or jumps your brain has to make in order to understand or contextualize the thing you’re looking at.”

David Lieb: “Minimizing cognitive overhead is imperative when designing for the mass market. Why? Because most people haven’t developed the pattern matching machinery in their brains to quickly convert what they see in your product (app design, messaging, what they heard from friends, etc.) into meaning and purpose.”

In many ways, the concept resonates with the so-called “Cognitive Load Theory” (CLT), which draws on educational psychology and has been used widely in the design of multimedia and other learning materials (to prevent overload). CLT focuses on instructional conditions that are aligned with human cognitive architecture (where short-term memory is limited in the number of elements it can hold simultaneously). John Sweller, the founder of CLT, and others have therefore focused on the role of acquiring schemata (mind maps) in learning.

So how can we provide for cognitive simplicity? According to Lieb:

  • “Put the user in the middle of your flow. Make them press an extra button, make them provide some inputs, let them be part of the service-providing, rather than a bystander to it.”;
  • Give the user real-time feedback;
  • “Slow down provisioning. Studies have shown that intentionally slowing down results on travel search websites can actually increase perceived user value — people realize and appreciate that the service is doing a lot of work searching all the different travel options on their behalf.”

It seems imperative that anyone who wants to engage with the public (to tap into the “cognitive surplus” (Clay Shirky) of the crowd) must focus – when, for instance, defining the problem that needs to be solved – on the cognitive overhead of their engagement platform and message.