It takes more than social media to make a social movement


Hayley Tsukayama in the Washington Post: “President Trump may have used the power of social media to make his way into the White House, but now social media networks are showing that muscle can work for his opposition, too. Last week, more than 1 million marchers went to Washington and cities around the country — sparked by a Facebook post from one woman with no history of activism. This weekend, the Internet exploded again in discussion about Trump’s travel suspension order, and many used social media to get together and protest the decision.

Twitter said that more than 25 million tweets were sent about the order — as compared with 12 million about Trump’s inauguration. Facebook said that its users generated 151 million “likes, posts, comments and shares” related to the ban, less than the 208 million interactions generated about the inauguration. The companies didn’t reveal how many of those were aimed at organizing, but the social media calls to get people to protest are a testament to the power of these platforms to move people.

The real question, however, is whether this burgeoning new movement can avoid the fate of so many others kick-started by the power of social networks — only to find that it’s much harder to make political change than to make a popular hashtag….

Zeynep Tufekci, an associate professor at the University of North Carolina at Chapel Hill who has written a forthcoming book on the power and fragility of movements born of social media, found in her research that the very ability of these movements to scale quickly is, in part, why they can also fall apart so quickly compared with traditional grass-roots campaigns….

Now, organizers can bypass the time it takes to build up the infrastructure for a massive march and all the publicity that comes with it. But that also means their high-profile movements skip some crucial organizing steps.

“Digitally networked movements look like the old movements. But by the time the civil rights movement had such a large march, they’d been working on [the issues] for 10 years — if not more,” Tufekci said. The months or even years spent discussing logistics, leafleting and building a coalition, she said, were crucial to the success of the civil rights movements. Other successful efforts, such as the Human Rights Campaign’s push to end the “don’t ask, don’t tell” policy against allowing gay people to serve openly in the military, were also rooted in organizational structures that had been developing and refining their demands for years to present a unified front. Movements organized over social networks often have more trouble jelling, she said, particularly if different factions air their differences on Facebook and Twitter, drawing attention to fractures in a movement….(More).”

Selected Readings on Algorithmic Scrutiny


By Prianka Srinivasan, Andrew Young and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.

Introduction

From government policy, to criminal justice, to our news feeds, to business and consumer practices, the processes that shape our lives both online and off are more and more driven by data and the complex algorithms used to form rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.

While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.

In what follows, we have curated several readings covering the impact of algorithms on:

  • Information Intermediaries
  • Governance
  • Consumer Finance
  • Justice

In addition, we have selected a few readings that provide insight into possible processes and tools for establishing algorithmic scrutiny.

Selected Reading List

Information Intermediaries

Governance

Consumer Finance

Justice

Tools & Processes Toward Algorithmic Scrutiny

Annotated Selected Reading List

Information Intermediaries

Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.

  • This paper attempts to substantiate the notion of accountability for algorithms, particularly how they relate to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
  • Offers a basis for analyzing algorithms in terms of the types of decisions they make in prioritizing, classifying, associating, and filtering information.

Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.

  • This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
  • It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
  • They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
  • The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.

Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo

  • This paper puts forward an analytical framework to discuss the algorithmic content creation of media and journalism in an attempt to “close the gap” on theory related to automated media production.
  • By borrowing concepts from institutional theory, the paper argues that algorithms are distinct forms of media institutions, and explores the cultural and political implications of this interpretation.
  • It urges further study in the field of “media sociology” to further unpack the influence of algorithms, and their role in institutionalizing certain norms, cultures and ways of thinking.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.

  • This paper, published 16 years ago, provides an in-depth account of some of the risks related to search engine optimizations, and the biases and harms these can introduce, particularly on the nature of politics.
  • Suggests search engines can be designed to account for these political dimensions, and better correlate with the ideal of the World Wide Web as being a place that is open, accessible and democratic.
  • According to the paper, policy (and not the free market) is the only way to spur change in this field, though current technical solutions introduce further challenges of their own.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media
technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.

  • This paper suggests that algorithms, to the extent that they shape many aspects of our lives (Gillespie calls these “public relevance algorithms”), are fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
  • The paper goes on to map six dimensions of these public relevance algorithms:
    • Patterns of inclusion
    • Cycles of anticipation
    • The evaluation of relevance
    • The promise of algorithmic objectivity
    • Entanglement with practice
    • The production of calculated publics
  • The paper concludes by highlighting the need for sociological inquiry into the function, implications and contexts of algorithms, and the need to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.

Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.

  • This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
  • Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
    • Algorithms will continue to spread everywhere
    • Good things lie ahead
    • Humanity and human judgment are lost when data and predictive modeling become paramount
    • Biases exist in algorithmically-organized systems
    • Algorithmic categorizations deepen divides
    • Unemployment will rise; and
    • The need grows for algorithmic literacy, transparency and oversight

Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.

  • This paper establishes some of the risks and harms in regard to algorithmic computation, particularly in their filtering abilities as seen in Facebook and other social media algorithms.
  • Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
  • Takes two case studies–one from the social media coverage of the Ferguson protests, the other on how social media can influence election turnouts–to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.

Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society (2016): 3(2). http://bit.ly/2kWNwL6

  • This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
  • Develops a conceptual map of six ethical concerns:
      • Inconclusive Evidence
      • Inscrutable Evidence
      • Misguided Evidence
      • Unfair Outcomes
      • Transformative Effects
      • Traceability
  • The paper then reviews existing literature, which together with the map creates a structure to inform future debate.

Governance

Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.

  • Noting the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
  • The paper argues that the belief that technocratic governance produces neutral and unbiased results, since its decision-making processes are uninfluenced by human thought, is at odds with studies that reveal the inherent discriminatory practices embedded in algorithms.
  • Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”

Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.

  • This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
  • By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
  • The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”

Consumer Finance

Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.

  • Analyzes the use of consumer profiling by online businesses in order to target marketing and services to their needs. By establishing how this profiling relates to identification, the author also offers some of the threats to democracy and the right of autonomy posed by these profiling algorithms.
  • The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.

Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch

  • Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
  • The paper suggests that without greater transparency of these financial risk models, and greater regulation of their abuse, another financial crisis like that of 2008 is highly likely.

Justice

Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.

  • This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). By comparing the use of these systems in the English-speaking world and Norway, the author suggests that such technological approaches to sentencing attempt to overcome accusations of mistrust, uncertainty and arbitrariness often leveled against the judicial system.
  • However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”

Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315. http://bit.ly/1XLAvhL.

  • This short essay, published in the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” where past data are computed and used to predict a defendant’s future criminal behavior. The author suggests that these risk models may run afoul of the Constitution’s prohibition of bills of attainder, since they inflict punishment without a judicial trial.

Tools & Processes Toward Algorithmic Scrutiny

Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society (2016). http://bit.ly/2hvKc5x.

  • This paper critically analyzes calls to improve the transparency of algorithms, asking how we might confront the historical limitations of the transparency ideal in computing.
  • By establishing “transparency as an ideal” the paper tracks the philosophical and historical lineage of this principle, attempting to establish what laws and provisions were put in place across the world to keep up with and enforce this ideal.
  • The paper goes on to detail the limits of transparency as an ideal, arguing, amongst other things, that it does not necessarily build trust, it privileges a certain function (seeing) over others (say, understanding) and that it has numerous technical limitations.
  • The paper ends by concluding that transparency is an inadequate way to govern algorithmic systems, and that accountability must acknowledge the ability to govern across systems.

Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of the 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.

  • This paper develops what is called a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The attempt is to theorize a transparency report that is to accompany any algorithmic decisions made, in order to explain any decisions and detect algorithmic discrimination.
  • QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
  • Finds that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
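The “break correlations, then measure marginal influence” idea can be sketched in a few lines. The following is an illustrative toy, not the authors’ implementation: it estimates only the simplest (unary) influence measure, and the model, feature names, and numbers are all invented. Resampling one input from its marginal distribution severs its correlation with the others; the share of decisions that flip is that input’s estimated influence.

```python
import random

def model(age, income, zipcode):
    """Invented toy decision rule: approve when income clears an
    age-dependent threshold; zipcode is deliberately ignored."""
    return income > 30 + 0.5 * age

def qii_influence(feature_index, samples, predict, n_interventions=200, seed=0):
    """Unary QII sketch: resample one input from its marginal
    distribution (breaking its correlation with the other inputs)
    and measure how often the decision flips."""
    rng = random.Random(seed)
    pool = [s[feature_index] for s in samples]   # marginal distribution
    flips, total = 0, 0
    for s in samples:
        baseline = predict(*s)
        for _ in range(n_interventions):
            intervened = list(s)
            intervened[feature_index] = rng.choice(pool)  # the intervention
            flips += predict(*intervened) != baseline
            total += 1
    return flips / total

# Invented population in which income is correlated with age
rng = random.Random(42)
samples = [
    (age, 0.5 * age + rng.uniform(0, 40), rng.choice([10001, 94110, 60601]))
    for age in (rng.uniform(20, 70) for _ in range(100))
]

influences = [qii_influence(i, samples, model) for i in range(3)]
# income shows the largest influence; zipcode, being unused, shows none.
```

Because zipcode never enters the decision, its estimated influence is exactly zero — the property that makes such measures useful for detecting (or ruling out) reliance on a sensitive attribute even with only “black box” access to the model.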

Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a ‘right to explanation.’” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.

  • This paper analyzes the implications of a new EU law, to be enacted in 2018, that calls to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also allow for a “right to explanation” where users can ask for an explanation behind automated decision made about them.
  • The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
  • The paper concludes by stating algorithms and computer systems should not aim to be simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.

Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.

  • This paper studies how the transparency of algorithms affects our impression of trust, using an online field experiment in which participants enrolled in a MOOC were given different explanations for the computer-generated grade awarded in their class.
  • The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
  • In conclusion, the study found that a balance of transparency was needed to maintain trust amongst the participants, suggesting that pure transparency of algorithmic processes and results may not correlate with high feelings of trust amongst users.

Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.

  • This paper suggests that policy and legal standards need to be updated given the increased use of algorithms to perform tasks and make decisions in arenas that people once did. An “accountability mechanism” is lacking in many of these automated decision making processes.
  • The paper argues that mere transparency through the disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to creating algorithms and decision-making apparatuses more in line with our existing political and legal frameworks.
  • The paper assesses some computational techniques that may provide possibilities to create accountable software and reform specific cases of automated decision-making. For example, diversity and anti-discrimination orders can be built into technology to ensure fidelity to policy choices.

Using data and design to support people to stay in work


 at Civil Service Quarterly: “…Data and digital are fairly understandable concepts in policy-making. But design? Why is it one of the three Ds?

Policy Lab believes that design approaches are particularly suited to complex issues that have multiple causes and for which there is no one, simple answer. Design encourages people to think about the user’s needs (not just the organisation’s needs), brings in different perspectives to innovate new ideas, and then prototypes (mocks them up and tries them out) to iteratively improve ideas until they find one that can be scaled up.

[Figure: Segmentation analysis of those who reported being on health-related benefits in the Understanding Society survey]

Policy Lab also recognises that data alone cannot solve policy problems, and has been experimenting with how to combine numerical and more human practices. Data can explain what is happening, while design research methods – such as ethnography, observing people’s behaviours – can explain why things are happening. Data can be used to automate and tailor public services; while design means frontline delivery staff and citizens will actually know about and use them. Data-rich evidence is highly valued by policy-makers; and design can make it understandable and accessible to a wider group of people, opening up policy-making in the process.

The Lab is also experimenting with new data methods.

Data science can be used to look at complex, unstructured data (social media data, for example), in real time. Digital data, such as social media data or internet searches, can reveal how people behave (rather than how they say they behave). It can also look at huge amounts of data far quicker than humans, and find unexpected patterns hidden in the data. Powerful computers can identify trends from historical data and use these to predict what might happen in the future.

Supporting people in work project

The project took a DDD approach to generating insight and then creating ideas. The team (including the data science organisation Mastodon C and design agency Uscreates) used data science techniques together with ethnography to create a rich picture about what was happening. Then it used design methods to create ideas for digital services with the user in mind, and these were prototyped and tested with users.

The data science confirmed many of the known risk factors, but also revealed some new insights. It told us what was happening at scale, and the ethnography explained why.

  • The data science showed that people were more likely to go onto sickness benefits if they had been in the job a shorter time. The ethnography explained that the relationship with the line manager and a sense of loyalty were key factors in whether someone stayed in work or went onto benefits.
  • The data science showed that women with clinical depression were less likely to go onto sickness benefits than men with the same condition. The ethnography revealed how this played out in real life:
    • For example, Ella [not her real name], a teacher from London, had been battling depression at work for a long time but felt unable to go to her boss about it. She said she was “relieved” when she got cancer, because she could talk to her boss about a physical condition and got time off to deal with both illnesses.
  • The data science also allowed the segmentation of groups of people who said they were on health-related benefits. Firstly, the clustering revealed that two groups had average health ratings, indicating that other non-health-related issues might be driving this. Secondly, it showed that these two groups were very different (one older group of men with previously high pay and working hours; the other of much younger men with previously low pay and working hours). The conclusion was that their motivations and needs to stay in work – and policy interventions – would be different.
  • The ethnography highlighted other issues that were not captured in the data but would be important in designing solutions, such as: a lack of shared information across the system; the need of the general practitioner (GP) to refer patients to other non-health services as well as providing a fit note; and the importance of coaching, confidence-building and planning….(More)”
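The segmentation step described above can be illustrated with a minimal clustering sketch. This is not the project’s actual analysis: the data, group sizes, and pay figures below are invented stand-ins for the two notional groups (older men with previously high pay; younger men with previously low pay), and a plain k-means pass is used in place of whatever method the team applied.

```python
import random

def kmeans(points, k=2, iters=20):
    """Minimal k-means for 2-D points. For this two-cluster sketch,
    initial centroids are the points with the lowest and highest pay
    (the second coordinate)."""
    centroids = [min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance
        labels = [
            min(range(k), key=lambda j: (p[0] - centroids[j][0]) ** 2
                                        + (p[1] - centroids[j][1]) ** 2)
            for p in points
        ]
        # Update step: move each centroid to the mean of its members
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, labels

# Invented survey-like records: (age, previous weekly pay)
rng = random.Random(7)
older_high = [(rng.gauss(58, 3), rng.gauss(700, 50)) for _ in range(50)]
younger_low = [(rng.gauss(27, 3), rng.gauss(250, 50)) for _ in range(50)]
points = older_high + younger_low

centroids, labels = kmeans(points)
# The two recovered centroids correspond to the two notional segments:
# one older/higher-pay cluster and one younger/lower-pay cluster.
```

The point of the exercise mirrors the article’s: once the clusters fall out of the data, their differing profiles suggest that the groups’ motivations, needs, and appropriate policy interventions differ too.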

Conceptualizing Big Social Data


Ekaterina Olshannikova, Thomas Olsson, Jukka Huhtamäki and Hannu Kärkkäinen in the Journal of Big Data: “The popularity of social media and computer-mediated communication has resulted in high-volume and highly semantic data about digital social interactions. This constantly accumulating data has been termed as Big Social Data or Social Big Data, and various visions about how to utilize that have been presented. However, as relatively new concepts, there are no solid and commonly agreed definitions of them. We argue that the emerging research field around these concepts would benefit from understanding of the very substance of the concept and the different viewpoints to it. With our review of earlier research, we highlight various perspectives to this multi-disciplinary field and point out conceptual gaps, the diversity of perspectives and lack of consensus in what Big Social Data means. Based on detailed analysis of related work and earlier conceptualizations, we propose a synthesized definition of the term, as well as outline the types of data that Big Social Data covers. With this, we aim to foster future research activities around this intriguing, yet untapped type of Big Data.”

https://static-content.springer.com/image/art%3A10.1186%2Fs40537-017-0063-x/MediaObjects/40537_2017_63_Fig1_HTML.gif

Conceptual map of various BSD/SBD interpretations in the related literature. This illustration depicts four main domains, which were studied by different researchers from various perspectives and at the intersections of scientific fields and data types….(More)”.

Citizenship, Social Media, and Big Data


Homero Gil de Zúñiga and Trevor Diehl introducing Special Issue of the Social Science Computer Review: “This special issue of the Social Science Computer Review provides a sample of the latest strategies employing large data sets in social media and political communication research. The proliferation of information communication technologies, social media, and the Internet, alongside the ubiquity of high-performance computing and storage technologies, has ushered in the era of computational social science. However, in no way does the use of “big data” represent a standardized area of inquiry in any field. This article briefly summarizes pressing issues when employing big data for political communication research. Major challenges remain to ensure the validity and generalizability of findings. Strong theoretical arguments are still a central part of conducting meaningful research. In addition, ethical practices concerning how data are collected remain an area of open discussion. The article surveys studies that offer unique and creative ways to combine methods and introduce new tools while at the same time address some solutions to ethical questions. (See Table of Contents)”

Protecting One’s Own Privacy in a Big Data Economy


Anita L. Allen in the Harvard Law Review Forum: “Big Data is the vast quantities of information amenable to large-scale collection, storage, and analysis. Using such data, companies and researchers can deploy complex algorithms and artificial intelligence technologies to reveal otherwise unascertained patterns, links, behaviors, trends, identities, and practical knowledge. The information that comprises Big Data arises from government and business practices, consumer transactions, and the digital applications sometimes referred to as the “Internet of Things.” Individuals invisibly contribute to Big Data whenever they live digital lifestyles or otherwise participate in the digital economy, such as when they shop with a credit card, get treated at a hospital, apply for a job online, research a topic on Google, or post on Facebook.

Privacy advocates and civil libertarians say Big Data amounts to digital surveillance that potentially results in unwanted personal disclosures, identity theft, and discrimination in contexts such as employment, housing, and financial services. These advocates and activists say typical consumers and internet users do not understand the extent to which their activities generate data that is being collected, analyzed, and put to use for varied governmental and business purposes.

I have argued elsewhere that individuals have a moral obligation to respect not only other people’s privacy but also their own. Here, I wish to comment first on whether the notion that individuals have a moral obligation to protect their own information privacy is rendered utterly implausible by current and likely future Big Data practices; and on whether a conception of an ethical duty to self-help in the Big Data context may be more pragmatically framed as a duty to be part of collective actions encouraging business and government to adopt more robust privacy protections and data security measures….(More)”

The Emergence of a Post-Fact World


Francis Fukuyama in Project Syndicate: “One of the more striking developments of 2016 and its highly unusual politics was the emergence of a “post-fact” world, in which virtually all authoritative information sources were called into question and challenged by contrary facts of dubious quality and provenance.

The emergence of the Internet and the World Wide Web in the 1990s was greeted as a moment of liberation and a boon for democracy worldwide. Information constitutes a form of power, and to the extent that information was becoming cheaper and more accessible, democratic publics would be able to participate in domains from which they had been hitherto excluded.

The development of social media in the early 2000s appeared to accelerate this trend, permitting the mass mobilization that fueled various democratic “color revolutions” around the world, from Ukraine to Burma (Myanmar) to Egypt. In a world of peer-to-peer communication, the old gatekeepers of information, largely seen to be oppressive authoritarian states, could now be bypassed.

While there was some truth to this positive narrative, another, darker one was also taking shape. Those old authoritarian forces were responding in dialectical fashion, learning to control the Internet, as in China, with its tens of thousands of censors, or, as in Russia, by recruiting legions of trolls and unleashing bots to flood social media with bad information. These trends all came together in a hugely visible way during 2016, in ways that bridged foreign and domestic politics….

The traditional remedy for bad information, according to freedom-of-information advocates, is simply to put out good information, which in a marketplace of ideas will rise to the top. This solution, unfortunately, works much less well in a social-media world of trolls and bots. There are estimates that as many as a quarter to a third of Twitter users fall into this category. The Internet was supposed to liberate us from gatekeepers; and, indeed, information now comes at us from all possible sources, all with equal credibility. There is no reason to think that good information will win out over bad information….

The inability to agree on the most basic facts is the direct product of an across-the-board assault on democratic institutions – in the US, in Britain, and around the world. And this is where the democracies are headed for trouble. In the US, there has in fact been real institutional decay, whereby powerful interest groups have been able to protect themselves through a system of unlimited campaign finance. The primary locus of this decay is Congress, and the bad behavior is for the most part as legal as it is widespread. So ordinary people are right to be upset.

And yet, the US election campaign has shifted the ground to a general belief that everything has been rigged or politicized, and that outright bribery is rampant. If the election authorities certify that your favored candidate is not the victor, or if the other candidate seemed to perform better in a debate, it must be the result of an elaborate conspiracy by the other side to corrupt the outcome. The belief in the corruptibility of all institutions leads to a dead end of universal distrust. American democracy, all democracy, will not survive a lack of belief in the possibility of impartial institutions; instead, partisan political combat will come to pervade every aspect of life….(More)”

The Power of Networks: Six Principles That Connect Our Lives


Book by Christopher Brinton and Mung Chiang: “What makes WiFi faster at home than at a coffee shop? How does Google order search results? Why do Amazon, Netflix, and YouTube use fundamentally different rating and recommendation methods—and why does it matter? Is it really true that everyone on Facebook is connected in six steps or less? And how do cat videos—or anything else—go viral? The Power of Networks answers questions like these for the first time in a way that all of us can understand and use, whether at home, the office, or school. Using simple language, analogies, stories, hundreds of illustrations, and no more math than simple addition and multiplication, Christopher Brinton and Mung Chiang provide a smart but accessible introduction to the handful of big ideas that drive the technical and social networks we use every day—from cellular phone networks and cloud computing to the Internet and social media platforms.

The Power of Networks unifies these ideas through six fundamental principles of networking, which explain the difficulties in sharing network resources efficiently, how crowds can be wise or not so wise depending on the nature of their connections, how there are many building-blocks of layers in a network, and more. Understanding these simple ideas unlocks the workings of everything from the connections we make on Facebook to the technology that runs such platforms. Along the way, the authors also talk with and share the special insights of renowned experts such as Google’s Eric Schmidt, former Verizon Wireless CEO Dennis Strigl, and “fathers of the Internet” Vint Cerf and Bob Kahn….(More)”

#Republic: Divided Democracy in the Age of Social Media


Book by Cass Sunstein: “As the Internet grows more sophisticated, it is creating new threats to democracy. Social media companies such as Facebook can sort us ever more efficiently into groups of the like-minded, creating echo chambers that amplify our views. It’s no accident that on some occasions, people of different political views cannot even understand each other. It’s also no surprise that terrorist groups have been able to exploit social media to deadly effect.

Welcome to the age of #Republic.

In this revealing book, Cass Sunstein, the New York Times bestselling author of Nudge and The World According to Star Wars, shows how today’s Internet is driving political fragmentation, polarization, and even extremism—and what can be done about it.

Thoroughly rethinking the critical relationship between democracy and the Internet, Sunstein describes how the online world creates “cybercascades,” exploits “confirmation bias,” and assists “polarization entrepreneurs.” And he explains why online fragmentation endangers the shared conversations, experiences, and understandings that are the lifeblood of democracy.

In response, Sunstein proposes practical and legal changes to make the Internet friendlier to democratic deliberation. These changes would get us out of our information cocoons by increasing the frequency of unchosen, unplanned encounters and exposing us to people, places, things, and ideas that we would never have picked for our Twitter feed.

#Republic need not be an ironic term. As Sunstein shows, it can be a rallying cry for the kind of democracy that citizens of diverse societies most need….(More)”