The Internet’s Loop of Action and Reaction Is Worsening


Farhad Manjoo in the New York Times: “Donald J. Trump and Hillary Clinton said this week that we should think about shutting down parts of the Internet to stop terrorist groups from inspiring and recruiting followers in distant lands. Mr. Trump even suggested an expert who’d be perfect for the job: “We have to go see Bill Gates and a lot of different people that really understand what’s happening, and we have to talk to them — maybe, in certain areas, closing that Internet up in some way,” he said on Monday in South Carolina.

Many online responded to Mr. Trump and Mrs. Clinton with jeers, pointing out both constitutional and technical limits to their plans. Mr. Gates, the Microsoft co-founder who now spends much of his time on philanthropy, has as much power to close down the Internet as he does to fix Mr. Trump’s hair.

Yet I had a different reaction to Mr. Trump and Mrs. Clinton’s fantasy of a world in which you could just shut down parts of the Internet that you didn’t like: Sure, it’s impossible, but just imagine if we could do it, just for a bit. Wouldn’t it have been kind of a pleasant dream world, in these overheated last few weeks, to have lived free of social media?

Hear me out. If you’ve logged on to Twitter and Facebook in the waning weeks of 2015, you’ve surely noticed that the Internet now seems to be on constant boil. Your social feed has always been loud, shrill, reflexive and ugly, but this year everything has been turned up to 11. The Islamic State’s use of the Internet is perhaps only the most dangerous manifestation of what, this year, became an inescapable fact of online life: The extremists of all stripes are ascendant, and just about everywhere you look, much of the Internet is terrible.

“The academic in me says that discourse norms have shifted,” said Susan Benesch, a faculty associate at Harvard’s Berkman Center for Internet & Society and the director of the Dangerous Speech Project, an effort to study speech that leads to violence. “It’s become so common to figuratively walk through garbage and violent imagery online that people have accepted it in a way. And it’s become so noisy that you have to shout more loudly, and more shockingly, to be heard.”

You might argue that the angst online is merely a reflection of the news. Terrorism, intractable warfare, mass shootings, a hyperpartisan presidential race, police brutality, institutional racism and the protests over it have dominated the headlines. It’s only natural that the Internet would get a little out of control over that barrage.

But there’s also a way in which social networks seem to be feeding a cycle of action and reaction. In just about every news event, the Internet’s reaction to the situation becomes a follow-on part of the story, so that much of the media establishment becomes trapped in escalating, infinite loops of 140-character, knee-jerk insta-reaction.

“Presidential elections have always been pretty nasty, but these days the mudslinging is omnipresent in a way that’s never been the case before,” said Whitney Phillips, an assistant professor of literary studies and writing at Mercer University, who is the author of “This Is Why We Can’t Have Nice Things,” a study of online “trolling.” “When Donald Trump says something that I would consider insane, it’s not just that it gets reported on by one or two or three outlets, but it becomes this wave of iterative content on top of content on top of content in your feed, taking over everything you see.”

The spiraling feedback loop is exhausting and rarely illuminating. The news brims with instantly produced “hot takes” and a raft of fact-free assertions. Everyone — yours truly included — is always on guard for the next opportunity to meme-ify outrage: What crazy thing did Trump/Obama/The New York Times/The New York Post/Rush Limbaugh/etc. say now, and what clever quip can you fit into a tweet to quickly begin collecting likes?

There is little room for indulging nuance, complexity, or flirting with the middle ground. On every issue, you are either with one aggrieved group or the other, and the more stridently you can express your disdain — short of hurling profanities at the president on TV, which will earn you a brief suspension — the better reaction you’ll get….(More)”

What Privacy Papers Should Policymakers be Reading in 2016?


Stacy Gray at the Future of Privacy Forum: “Each year, FPF invites privacy scholars and authors to submit articles and papers to be considered by members of our Advisory Board, with an aim toward showcasing those articles that should inform any conversation about privacy among policymakers in Congress, as well as at the Federal Trade Commission and in other government agencies. For our sixth annual Privacy Papers for Policymakers, we received submissions on topics ranging from mobile app privacy, to location tracking, to drone policy.

Our Advisory Board selected papers that describe the challenges and best practices of designing privacy notices, ways to minimize the risks of re-identification of data by focusing on process-based data release policy and taking a precautionary approach to data release, the relationship between privacy and markets, and bringing the concept of trust more strongly into privacy principles.

Our top privacy papers for 2015 are, in alphabetical order:
Florian Schaub, Rebecca Balebako, Adam L. Durity, and Lorrie Faith Cranor
Ira S. Rubinstein and Woodrow Hartzog
Arvind Narayanan, Joanna Huey, and Edward W. Felten
Ryan Calo
Neil Richards and Woodrow Hartzog
Our two papers selected for Notable Mention are:
Peter Swire (Testimony, Senate Judiciary Committee Hearing, July 8, 2015)
Joel R. Reidenberg
….(More)”

Five Studies: How Behavioral Science Can Help in International Development


In Pacific Standard: “In 2012, there were 896 million people around the world—12.7 percent of the global population—living on less than two dollars a day. The World Food Program estimates that 795 million people worldwide don’t have enough food to “lead a healthy life”; 25 percent of people living in Sub-Saharan Africa are undernourished. Over three million children die every year as a result of poor nutrition, and hunger is the leading cause of death worldwide. In 2012, just three preventable diseases (pneumonia, diarrhea, and malaria) killed 4,600 children every day.

Last month, the World Bank announced the launch of the Global Insights Initiative (GINI). The initiative, which follows in the footsteps of so-called “nudge units” in the United Kingdom and United States, is the Bank’s effort to incorporate insights from the field of behavioral science into the design of international development programs, which too often have failed to account for how people behave in the real world. Development policy, according to the Bank’s 2015 World Development Report, is overdue for a “redesign based on careful consideration of human factors.” Researchers have applauded the announcement, but it raises an interesting question: What can nudges really accomplish in the face of the developing world’s overwhelming poverty and health-care deficits?

In fact, researchers have found that instituting small program changes, informed by a better understanding of people’s motivations and limitations, can have big effects on everything from savings rates to vaccination rates to risky sexual behavior. Here are five studies that demonstrate the benefits of bringing empirical social science into the developing world….(More)”

Crowdvoice: tracking voices of protest


CrowdVoice.org is an open source service that tracks voices of protest by curating and contextualizing valuable data, such as eyewitness videos, photos, and reports as a means to facilitate awareness regarding current social justice movements worldwide.

Despite what is happening today around the world, little hard research exists for journalists and academics in the form of archives of diverse media reports. CrowdVoice addresses this by curating a wide range of content pulled from across the web on a dedicated page. Content is initially housed in a moderation queue, where it awaits crowdsourced verification.
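The moderation flow described above (submissions wait in a queue until enough crowd verifications arrive) can be sketched roughly as follows. The class, the vote counting, and the three-vote threshold are illustrative assumptions, not details of CrowdVoice’s actual implementation.

```python
# Illustrative sketch of a crowdsourced moderation queue; the threshold
# and all names are hypothetical, not CrowdVoice's real design.

APPROVAL_THRESHOLD = 3  # verifications needed before content is published


class ModerationQueue:
    def __init__(self):
        self.pending = {}    # item_id -> number of crowd verifications so far
        self.published = set()

    def submit(self, item_id):
        """Place newly contributed content into the moderation queue."""
        self.pending.setdefault(item_id, 0)

    def verify(self, item_id):
        """Record one crowdsourced verification; publish at the threshold."""
        if item_id not in self.pending:
            return
        self.pending[item_id] += 1
        if self.pending[item_id] >= APPROVAL_THRESHOLD:
            del self.pending[item_id]
            self.published.add(item_id)


q = ModerationQueue()
q.submit("eyewitness-video-123")
for _ in range(3):
    q.verify("eyewitness-video-123")
print("eyewitness-video-123" in q.published)  # → True
```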

As a second step, the platform visually communicates social and political issues through hard facts, statistics, interactive infographics, and timelines, which are organized by tags, specific topics, and relevant citations. These engaging educational resources direct users to an archive of relevant footage and reports. Infographics and interactive timelines provide a frame of reference for the hundreds of under-reported stories pouring in from across the world, and are crucial to putting together nuanced, comprehensive reports reflecting both hard facts and the human face of the issues.”

Why: A Guide to Finding and Using Causes


Book by Samantha Kleinberg: “Can drinking coffee help people live longer? What makes a stock’s price go up? Why did you get the flu? Causal questions like these arise on a regular basis, but most people likely have not thought deeply about how to answer them.

This book helps you think about causality in a structured way: What is a cause, what are causes good for, and what is compelling evidence of causality? Author Samantha Kleinberg shows you how to develop a set of tools for thinking more critically about causes. You’ll learn how to question claims, identify causes, make decisions based on causal information, and verify causes through further tests.

Whether it’s figuring out what data you need, or understanding that the way you collect and prepare data affects the conclusions you can draw from it, Why will help you sharpen your causal inference skills….(More)”

When to Punish, When to Persuade and When to Reward: Strengthening Responsive Regulation with the Regulatory Diamond


Paper by Jonathan Kolieb: “Originally published over two decades ago, ‘responsive regulation’ and its associated regulatory pyramid have become touchstones in the contemporary study and practice of regulation. Influential ideas and theories about regulation and governance have been developed in the intervening years, yet responsive regulation’s simple pyramidal model continues to resonate with policy-makers and scholars alike. This article seeks to advance the vision and utility of responsive regulation, by responding to several key drawbacks of the original design and by offering an update to the pyramidal model of regulation that lies at the centre of the theory. It argues for a ‘regulatory diamond’ as a strengthened, renewed model for responsive regulation. Rooted within the responsive regulation literature, the regulatory diamond integrates into one schema both ‘compliance regulation’ and ‘aspirational regulation’, thereby offering a more cohesive representation of the broad conception of regulation that underpins responsive regulation theory, and of the limited but vital role of law within it….(More)”

Join Campaigns, Shop Ethically, Hit the Man Where It Hurts—All Within an App


PSFK: “Buycott is an app that wants to make shopping ethically a little easier. Join campaigns to support causes you care about, scan barcodes on products you’re considering to view company practices, and voice your concerns directly through the free app, available for iOS and Android.

Ethical campaigns are crowdsourced, and range from environmental to political and social concerns. Current campaigns include a demand for GMO labeling, supporting fair trade, ending animal testing, and more. You can read a description of the issue in question and see a list of companies to avoid and support under each campaign.

Scan barcodes of potential purchases to see if the parent companies behind them hold up to your ethical standards. If a company doesn’t, the app will suggest more ethically aligned alternatives.
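The scan-and-check flow might look something like the sketch below. The barcode table, the ownership chain, and the campaign lists are invented placeholders for illustration, not Buycott’s actual data or API.

```python
# Hypothetical sketch of a Buycott-style lookup: barcode -> brand,
# then up the ownership chain to the umbrella company, then a check
# against a campaign's avoid/support lists. All data is made up.

BARCODE_TO_BRAND = {"012345678905": "AcmeSnacks"}
BRAND_PARENT = {"AcmeSnacks": "AcmeFoods", "AcmeFoods": "GlobalCorp"}

# A campaign names companies to avoid and companies to support.
CAMPAIGN = {
    "avoid": {"GlobalCorp"},
    "support": {"FairFoods"},
}


def ultimate_parent(brand):
    """Follow the ownership chain up to the umbrella (parent) company."""
    while brand in BRAND_PARENT:
        brand = BRAND_PARENT[brand]
    return brand


def check_barcode(barcode, campaign):
    """Classify a scanned product against a campaign's company lists."""
    brand = BARCODE_TO_BRAND.get(barcode)
    if brand is None:
        return "unknown product"
    parent = ultimate_parent(brand)
    if parent in campaign["avoid"]:
        return f"avoid: owned by {parent}"
    if parent in campaign["support"]:
        return f"support: owned by {parent}"
    return f"neutral: owned by {parent}"


print(check_barcode("012345678905", CAMPAIGN))  # → avoid: owned by GlobalCorp
```

Because every brand resolves to its ultimate parent, the same few corporate giants naturally surface across many campaigns, which is exactly the effect the article notes.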

According to Ivan Pardo, founder and CEO of Buycott, the app is designed to help consumers make informed shopping decisions “they can feel good about.”

“As consumers become increasingly conscientious about the impact of their purchases and shop to reflect these principles, Buycott provides users with transparency into the business practices of the companies marketing to them.”

Users can contact problematic companies through the app, using email, Facebook or Twitter. The app traces each product all the way back to its umbrella or parent company (which means the same few corporate giants are likely to show up on a few do-not-buy lists)….(More)”

State of the Commons


Creative Commons: “Creative Commoners have known all along that collaboration, sharing, and cooperation are a driving force for human evolution. And so for many it will come as no surprise that in 2015 we achieved a tremendous milestone: over 1.1 billion CC licensed photos, videos, audio tracks, educational materials, research articles, and more have now been contributed to the shared global commons…..

Whether it’s open education, open data, science, research, music, video, photography, or public policy, we are putting sharing and collaboration at the heart of the Web. In doing so, we are much closer to realizing our vision: unlocking the full potential of the Internet to drive a new era of development, growth, and productivity.

I am proud to share with you our 2015 State of the Commons report, our best effort to measure the immeasurable scope of the commons by looking at the CC licensed content, along with content marked as public domain, that comprise the slice of the commons powered by CC tools. We are proud to be a leader in the commons movement, and we hope you will join us as we celebrate all we have accomplished together this year. ….Report at https://stateof.creativecommons.org/2015/”

Crowdsourcing Apps to Report Bay Area Public Transportation Delays


Carolyn Said in the San Francisco Chronicle: “It’s the daily lament of the public transit rider: When will the bus show up?

The NextBus system is supposed to answer that for Muni riders. It displays anticipated arrival times through electronic signs in bus shelters, with a phone service for people who call 511, on a website and on a smartphone app, harvesting information from GPS devices in Muni’s fleet. Now a study by a San Francisco startup says it’s accurate about 70 percent of the time, with the worst performance during commute hours.

The researchers have their own plan to improve accuracy: They created a crowdsourced iOS app called Swyft. Some 40,000 Bay Area residents, about three-quarters of them in San Francisco, now use the app to report when their Muni bus, BART train or AC Transit bus is delayed, overcrowded or otherwise experiencing problems. That lets the app deliver real-time information to its users in conjunction with the NextBus predictions.

“The union of those two provides better context for riders” to figure out when their bus really will arrive, said Jonathan Simkin, co-founder and CEO of Swyft, which has raised a little over $500,000. “We built Swyft to optimize how you get around town.” Swyft has been tested since January in the Bay Area. An Android version is coming soon.

An app for iOS and Android called Moovit also uses crowdsourcing combined with transit information to predict bus or train arrivals. Moovit, released in 2012, now has 35 million users in more than 800 cities in 60 countries, giving it a bigger user base than Google Maps, it said. The company couldn’t say how many users it has in San Francisco. The Israeli company has more than $81 million in venture backing.
When users ride public transit with the Moovit app open, it anonymously tracks their speed and location, and integrates that with schedules to predict when a bus will arrive. It also lets users report problems such as how crowded or clean a vehicle is, for instance….(More)”
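The mechanism described here (anonymous speed and location reports blended with published schedules) could be sketched as follows. The weighting rule, the report format, and the function names are assumptions for illustration, not Moovit’s or Swyft’s actual algorithm.

```python
# Illustrative sketch of blending crowdsourced rider reports with a
# timetable to estimate a bus's arrival. The 70/30 weighting is an
# arbitrary assumption, not any real app's tuning.

def estimate_arrival(scheduled_arrival_min, rider_reports):
    """Estimate arrival time in minutes.

    rider_reports: list of (distance_remaining_km, speed_kmh) tuples
    sent anonymously from phones riding the bus.
    """
    if not rider_reports:
        return scheduled_arrival_min  # no crowd data: fall back to timetable

    # ETA implied by each report, in minutes
    etas = [60.0 * dist / speed for dist, speed in rider_reports if speed > 0]
    if not etas:
        return scheduled_arrival_min

    crowd_eta = sum(etas) / len(etas)
    # Weight live crowd data more heavily than the static schedule
    return 0.7 * crowd_eta + 0.3 * scheduled_arrival_min


# Timetable says 10 minutes; two riders report the bus ~2 km away
print(round(estimate_arrival(10, [(2.0, 20.0), (2.1, 18.0)]), 2))  # → 7.55
```

The point of the blend is the one the article makes: schedule-only predictors like NextBus degrade at commute hours, and live reports from riders already on the vehicle supply the missing context.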

Peering at Open Peer Review


At the Political Methodologist: “Peer review is an essential part of the modern scientific process. Sending manuscripts for others to scrutinize is such a widespread practice in academia that its importance cannot be overstated. Since the late eighteenth century, when the Philosophical Transactions of the Royal Society pioneered editorial review, virtually every scholarly outlet has adopted some sort of pre-publication assessment of received works. Although the specifics may vary, the procedure has remained largely the same since its inception: submit, receive anonymous criticism, revise, restart the process if required. A recent survey of APSA members indicates that political scientists overwhelmingly believe in the value of peer review (95%) and that the vast majority of them (80%) think peer review is a useful tool for keeping up to date with cutting-edge research (Djupe 2015, 349). But do these figures suggest that journal editors can rest on their laurels and leave the system as it is?

Not quite. A number of studies have been written about the shortcomings of peer review. The system has been criticised for being too slow (Ware 2008), conservative (Eisenhart 2002), inconsistent (Smith 2006; Hojat, Gonnella, and Caelleigh 2003) and nepotistic (Sandström and Hällsten 2008), and for being biased with respect to gender (Wold and Wennerås 1997), affiliation (Peters and Ceci 1982), nationality (Ernst and Kienbacher 1991) and language (Ross et al. 2006). These complaints have fostered interesting academic debates (e.g. Meadows 1998; Weller 2001), but thus far the literature offers little practical advice on how to tackle peer review problems. One often overlooked aspect in these discussions is how to provide incentives for reviewers to write well-balanced reports. On the one hand, it is not uncommon for reviewers to feel that their work is burdensome and not properly acknowledged. Further, due to the anonymous nature of the reviewing process itself, it is impossible to give the referee proper credit for a constructive report. On the other hand, the reviewers’ right to full anonymity may lead to sub-optimal outcomes, as referees can rarely be held accountable for being judgmental (Fabiato 1994).

Open peer review (henceforth OPR) is largely in line with this trend towards a more transparent political science. Several definitions of OPR have been suggested, including more radical ones such as allowing anyone to write pre-publication reviews (crowdsourcing) or by fully replacing peer review with post-publication comments (Ford 2013). However, I believe that by adopting a narrow definition of OPR – only asking referees to sign their reports – we can better accommodate positive aspects of traditional peer review, such as author blinding, into an open framework. Hence, in this text OPR is understood as a reviewing method where both referee information and their reports are disclosed to the public, while the authors’ identities are not known to the reviewers before manuscript publication.

How exactly would OPR increase transparency in political science? As noted by a number of articles on the topic, OPR creates incentives for referees to write insightful reports, or at least it has no adverse impact on the quality of reviews (DeCoursey 2006; Godlee 2002; Groves 2010; Pöschl 2012; Shanahan and Olsen 2014). In a study that used randomized trials to assess the effect of OPR in the British Journal of Psychiatry, Walsh et al. (2000) show that “signed reviews were of higher quality, were more courteous and took longer to complete than unsigned reviews.” Similar results were reported by McNutt et al. (1990, 1374), who affirm that “editors graded signers as more constructive and courteous […], [and] authors graded signers as fairer.” In the same vein, Kowalczuk et al. (2013) measured the difference in review quality in BMC Microbiology and BMC Infectious Diseases and stated that signers received higher ratings for their feedback on methods and for the amount of evidence they mobilised to substantiate their decisions. Van Rooyen and her colleagues (1999; 2010) also ran two randomized studies on the subject, and although they did not find a major difference in the perceived quality of the two types of review, they reported that reviewers in the treatment group took significantly more time to evaluate the manuscripts than those in the control group. They also note that authors broadly favored the open system over closed peer review.

Another advantage of OPR is that it offers a clear way for referees to highlight their specialized knowledge. When reviews are signed, referees are able to receive credit for their important, yet virtually unsung, academic contributions. Instead of just having a rather vague “service to profession” section in their CVs, referees can provide precise information about the topics they are knowledgeable about and the sort of advice they are giving to prospective authors. Moreover, reports assigned a DOI number can be shared like any other piece of scholarly work, which adds to the body of knowledge of our discipline and increases the number of citations to referees. In this sense, signed reviews can also be useful for universities and funding bodies: they are an additional method to assess the expert knowledge of a prospective candidate. As supervising skills are somewhat difficult to measure, signed reviews are a good proxy for an applicant’s teaching abilities.

OPR provides background to manuscripts at the time of publication (Ford 2015; Lipworth et al. 2011). It is not uncommon for a manuscript to take months, or even years, to be published in a peer-reviewed journal. In the meantime, the text usually undergoes several major revisions, but readers rarely, if ever, see this trial-and-error approach in action. With public reviews, everyone would be able to track the changes made in the original manuscript and understand how the referees improved the text before its final version. Hence, OPR makes the scientific exchange clear, provides useful background information to manuscripts and fosters post-publication discussions by the readership at large.

Signed and public reviews are also important pedagogical tools. OPR gives a rare glimpse of how academic research is actually conducted, making explicit the usual need for multiple iterations between the authors and the editors before an article appears in print. Furthermore, OPR can fill some of the gap in peer-review training for graduate students. OPR allows junior scholars to compare different review styles, understand what the current empirical or theoretical puzzles of their discipline are, and engage in post-publication discussions about topics in which they are interested (Ford 2015; Lipworth et al. 2011)….(More)”