Gaza and the Future of Information Warfare


Article by P. W. Singer and Emerson T. Brooking: “The Israel-Hamas war began in the early hours of Saturday, October 7, when Hamas militants and their affiliates stole over the Gazan-Israeli border by tunnel, truck, and hang glider, killed 1,200 people, and abducted over 200 more. Within minutes, graphic imagery and bombastic propaganda began to flood social media platforms. Each shocking video or post from the ground drew new pairs of eyes, sparked horrified reactions around the world, and created demand for more. A second front in the war had been opened online, transforming physical battles covering a few square miles into a globe-spanning information conflict.

In the days that followed, Israel launched its own bloody retaliation against Hamas; its bombardment of cities in the Gaza Strip killed more than 10,000 Palestinians in the first month. With a ground invasion in late October, Israeli forces began to take control of Gazan territory. The virtual battle lines, meanwhile, only became more firmly entrenched. Digital partisans clashed across Facebook, Instagram, X, TikTok, YouTube, Telegram, and other social media platforms, each side battling to be the only one heard and believed, unshakably committed to the righteousness of its own cause.

The physical and digital battlefields are now merged. In modern war, smartphones and cameras transmit accounts of nearly every military action across the global information space. The debates they spur, in turn, affect the real world. They shape public opinion, provide vast amounts of intelligence to actors around the world, and even influence diplomatic and military operational decisions at both the strategic and tactical levels. In our 2018 book, we dubbed this phenomenon “LikeWar,” defined as a political and military competition for command of attention. If cyberwar is the hacking of online networks, LikeWar is the hacking of the people on them, using their likes and shares to make a preferred narrative go viral…(More)”.

Internet use does not appear to harm mental health, study finds


Tim Bradshaw at the Financial Times: “A study of more than 2mn people’s internet use found no “smoking gun” for widespread harm to mental health from online activities such as browsing social media and gaming, despite widely claimed concerns that mobile apps can cause depression and anxiety.

Researchers at the Oxford Internet Institute, who said their study was the largest of its kind, said they found no evidence to support “popular ideas that certain groups are more at risk” from the technology.

However, Andrew Przybylski, professor at the institute — part of the University of Oxford — said that the data necessary to establish a causal connection was “absent” without more co-operation from tech companies. If apps do harm mental health, only the companies that build them have the user data that could prove it, he said.

“The best data we have available suggests that there is not a global link between these factors,” said Przybylski, who carried out the study with Matti Vuorre, a professor at Tilburg University. Because the “stakes are so high” if online activity really did lead to mental health problems, any regulation aimed at addressing it should be based on much more “conclusive” evidence, he added.

“Global Well-Being and Mental Health in the Internet Age” was published in the journal Clinical Psychological Science on Tuesday. 

In their paper, Przybylski and Vuorre studied data on psychological wellbeing from 2.4mn people aged 15 to 89 in 168 countries between 2005 and 2022, which they contrasted with industry data about growth in internet subscriptions over that time, as well as tracking associations between mental health and internet adoption in 202 countries from 2000-19.
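
As a rough sketch of this kind of population-level association analysis (our simplified illustration, not the authors' actual statistical model), one might correlate internet adoption with average wellbeing within each country, assuming a hypothetical country-year table:

```python
# Simplified sketch (not the study's actual model): within-country association
# between internet adoption and average wellbeing, using a hypothetical
# country-year table with "country", "internet_adoption", and "wellbeing" columns.
import pandas as pd

df = pd.read_csv("wellbeing_by_country_year.csv")  # hypothetical input file

per_country = (
    df.groupby("country")
      .apply(lambda g: g["internet_adoption"].corr(g["wellbeing"]))
      .rename("adoption_wellbeing_corr")
)

# A "no global link" pattern would show these correlations centered near zero
# and small in magnitude.
print(per_country.describe())
```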

“Our results do not provide evidence supporting the view that the internet and technologies enabled by it, such as smartphones with internet access, are actively promoting or harming either wellbeing or mental health globally,” they concluded. While there was “some evidence” of greater associations between mental health problems and technology among younger people, these “appeared small in magnitude”, they added.

The report contrasts with a growing body of research in recent years that has connected the beginning of the smartphone era, around 2010, with growing rates of anxiety and depression, especially among teenage girls. Studies have suggested that reducing time on social media can benefit mental health, while those who spend the longest online are at greater risk of harm…(More)”.

The Potentially Adverse Impact of Twitter 2.0 on Scientific and Research Communication


Article by Julia Cohen: “In just over a month since the change in Twitter leadership, there have been significant changes to the social media platform in its new “Twitter 2.0” version. For researchers who use Twitter as a primary source of data, including many of the computer scientists at USC’s Information Sciences Institute (ISI), the effects could be debilitating…

Over the years, Twitter has been extremely friendly to researchers, providing and maintaining a robust API (application programming interface) specifically for academic research. The Twitter API for Academic Research allows researchers with specific objectives who are affiliated with an academic institution to gather historical and real-time data sets of tweets, and related metadata, at no cost. Currently, the Twitter API for Academic Research continues to be functional and maintained in Twitter 2.0.
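
As a rough illustration of how researchers typically use this access (a sketch that assumes a valid academic bearer token; endpoint details may change under Twitter 2.0), a full-archive search request might look like this:

```python
# Hedged sketch of a full-archive search request via the Twitter v2 API,
# the endpoint exposed through the Academic Research track. Assumes a bearer
# token in the TWITTER_BEARER_TOKEN environment variable; query terms and
# dates are illustrative only.
import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"
headers = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}

params = {
    "query": "(#publichealth OR vaccine) lang:en -is:retweet",  # example query
    "start_time": "2022-10-01T00:00:00Z",
    "end_time": "2022-10-08T00:00:00Z",
    "max_results": 100,
    "tweet.fields": "created_at,author_id,public_metrics",
}

resp = requests.get(SEARCH_URL, headers=headers, params=params)
resp.raise_for_status()
tweets = resp.json().get("data", [])
print(f"Retrieved {len(tweets)} tweets")
```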

The data obtained from the API provides a means to observe public conversations and understand people’s opinions about societal issues. Luca Luceri, a Postdoctoral Research Associate at ISI, called Twitter “a primary platform to observe online discussion tied to political and social issues.” And Twitter touts its API for Academic Research as a way for “academic researchers to use data from the public conversation to study topics as diverse as the conversation on Twitter itself.”

However, if people continue deactivating their Twitter accounts, which appears to be the case, the makeup of the user base will change, with data sets and related studies proportionally affected. This is especially true if the user base evolves in a way that makes it more ideologically homogeneous and less diverse.

According to MIT Technology Review, in the first week after its transition, Twitter may have lost one million users, which translates to a 208% increase in lost accounts. And there’s also the concern that the site might not work as effectively because of the substantial decrease in the size of its engineering teams. This includes concerns about the durability of the service researchers rely on for data, namely the Twitter API. Jason Baumgartner, founder of Pushshift, a social media data collection, analysis, and archiving platform, said that in several recent API requests his team also saw a significant increase in error rates – in the 25-30% range – when they typically see rates near 1%. Though for now this is anecdotal, it leaves researchers wondering if they will be able to rely on Twitter data for future research.

One example of how the makeup of the less-regulated Twitter 2.0 user base could be significantly altered is if marginalized groups leave Twitter at a higher rate than the general user base, e.g., due to increased hate speech. Keith Burghardt, a Computer Scientist at ISI who studies hate speech online, said, “It’s not that an underregulated social media changes people’s opinions, but it just makes people much more vocal. So you will probably see a lot more content that is hateful.” In fact, a study by Montclair State University found that hate speech on Twitter skyrocketed in the week after the acquisition of Twitter….(More)”.

Learning to Share: Lessons on Data-Sharing from Beyond Social Media


Paper by CDT: “What role has social media played in society? Did it influence the rise of Trumpism in the U.S. and the passage of Brexit in the UK? What about the way authoritarians exercise power in India or China? Has social media undermined teenage mental health? What about its role in building social and community capital, promoting economic development, and so on?

To answer these and other important policy-related questions, researchers such as academics, journalists, and others need access to data from social media companies. However, this data is generally not available to researchers outside of social media companies and, where it is available, it is often insufficient, meaning that we are left with incomplete answers.

Governments on both sides of the Atlantic have passed or proposed legislation to address the problem by requiring social media companies to provide certain data to vetted researchers (Vogus, 2022a). Researchers themselves have thought a lot about the problem, including the specific types of data that can further public interest research, how researchers should be vetted, and the mechanisms companies can use to provide data (Vogus, 2022b).

For their part, social media companies have sanctioned some methods of sharing data with certain types of researchers through APIs (e.g., for researchers with university affiliations) and with certain limitations (such as limits on how much and what types of data are available). In general, these efforts have been insufficient. In part, this is due to legitimate concerns such as the need to protect user privacy or to avoid revealing company trade secrets. But, in some cases, the lack of sharing is due to other factors, such as a lack of resources or knowledge about how to share data effectively, or resistance to independent scrutiny.

The problem is complex but not intractable. In this report, we look to other industries where companies share data with researchers through different mechanisms while also addressing concerns around privacy. In doing so, our analysis contributes to current public and corporate discussions about how to safely and effectively share social media data with researchers. We review experiences based on the governance of clinical trials, electricity smart meters, and environmental impact data…(More)”

A Massive LinkedIn Study Reveals Who Actually Helps You Get That Job


Article by Viviane Callier: “If you want a new job, don’t just rely on friends or family. According to one of the most influential theories in social science, you’re more likely to nab a new position through your “weak ties,” loose acquaintances with whom you have few mutual connections. Sociologist Mark Granovetter first laid out this idea in a 1973 paper that has garnered more than 65,000 citations. But the theory, dubbed “the strength of weak ties,” after the title of Granovetter’s study, lacked causal evidence for decades. Now a sweeping study that looked at more than 20 million people on the professional social networking site LinkedIn over a five-year period finally shows that forging weak ties does indeed help people get new jobs. And it reveals which types of connections are most important for job hunters…Along with job seekers, policy makers could also learn from the new paper. “One thing the study highlights is the degree to which algorithms are guiding fundamental, baseline, important outcomes, like employment and unemployment,” Aral says. The role that LinkedIn’s People You May Know function plays in gaining a new job demonstrates “the tremendous leverage that algorithms have on employment and probably other factors of the economy as well.” It also suggests that such algorithms could create bellwethers for economic changes: in the same way that the Federal Reserve looks at the Consumer Price Index to decide whether to hike interest rates, Aral suggests, networks such as LinkedIn might provide new data sources to help policy makers parse what is happening in the economy. “I think these digital platforms are going to be an important source of that,” he says…(More)”
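
To make the underlying notion concrete: tie strength in this line of work is commonly proxied by the number of mutual connections two people share. The toy sketch below (our illustration, not the study's code) flags the lone edge bridging two clusters as the “weak” tie.

```python
# Toy illustration (not the study's code): proxy tie strength by the number
# of mutual connections shared by the two endpoints of each edge.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ana", "ben"), ("ana", "cai"), ("ben", "cai"),   # one tight cluster
    ("dee", "eli"), ("dee", "fay"), ("eli", "fay"),   # another tight cluster
    ("ana", "dee"),                                   # the bridge between them
])

for u, v in G.edges():
    mutual = len(list(nx.common_neighbors(G, u, v)))
    label = "weak" if mutual == 0 else "strong"
    print(f"{u}-{v}: {mutual} mutual connection(s) -> {label} tie")
```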

A Prehistory of Social Media


Essay by Kevin Driscoll: “Over the past few years, I’ve asked dozens of college students to write down, in a sentence or two, where the internet came from. Year after year, they recount the same stories about the US government, Silicon Valley, the military, and the threat of nuclear war. A few students mention the Department of Defense’s ARPANET by name. Several get the chronology wrong, placing the World Wide Web before the internet or expressing confusion about the invention of email. Others mention “tech wizards” or “geniuses” from Silicon Valley firms and university labs. No fewer than four students have simply written, “Bill Gates.”

Despite the internet’s staggering scale and global reach, its folk histories are surprisingly narrow. This mismatch reflects the uncertain definition of “the internet.” When nonexperts look for internet origin stories, they want to know about the internet as they know it, the internet they carry around in their pockets, the internet they turn to, day after day. Yet the internet of today is not a stable object with a single, coherent history. It is a dynamic socio-technical phenomenon that came into being during the 1990s, at the intersection of hundreds of regional, national, commercial, and cooperative networks—only one of which was previously known as “the internet.” In short, the best-known histories describe an internet that hasn’t existed since 1994. So why do my students continue to repeat stories from 25 years ago? Why haven’t our histories kept up?

The standard account of internet history took shape in the early 1990s, as a mixture of commercial online services, university networks, and local community networks mutated into something bigger, more commercial, and more accessible to the general public. As hype began to build around the “information superhighway,” people wanted a backstory. In countless magazines, TV news reports, and how-to books, the origin of the internet was traced back to ARPANET, the computer network created by the Advanced Research Projects Agency during the Cold War. This founding mythology has become a resource for advancing arguments on issues related to censorship, national sovereignty, cybersecurity, privacy, net neutrality, copyright, and more. But with only this narrow history of the early internet to rely on, the arguments put forth are similarly impoverished…(More)”.

The Effectiveness of Digital Interventions on COVID-19 Attitudes and Beliefs


Paper by Susan Athey, Kristen Grabarz, Michael Luca & Nils C. Wernerfelt: “During the course of the COVID-19 pandemic, a common strategy for public health organizations around the world has been to launch interventions via advertising campaigns on social media. Despite this ubiquity, little has been known about their average effectiveness. We conduct a large-scale program evaluation of campaigns from 174 public health organizations on Facebook and Instagram that collectively reached 2.1 billion individuals and cost around $40 million. We report the results of 819 randomized experiments that measured the impact of these campaigns across standardized, survey-based outcomes. We find on average these campaigns are effective at influencing self-reported beliefs, shifting opinions close to 1% at baseline with a cost per influenced person of about $3.41. There is further evidence that campaigns are especially effective at influencing users’ knowledge of how to get vaccines. Our results represent, to the best of our knowledge, the largest set of online public health interventions analyzed to date…(More)”
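
A back-of-the-envelope reading of the “cost per influenced person” metric, using hypothetical numbers rather than the paper's campaign-level estimates, looks like this:

```python
# Back-of-the-envelope illustration of "cost per influenced person".
# All numbers are hypothetical; the paper's $3.41 figure comes from its own
# campaign-level experimental estimates, not from this simple division.
campaign_spend = 30_000        # dollars spent on one hypothetical campaign
people_reached = 1_000_000     # individuals reached by the ads
effect_size = 0.01             # ~1 percentage point shift in self-reported beliefs

people_influenced = people_reached * effect_size
cost_per_influenced_person = campaign_spend / people_influenced

print(f"Influenced: {people_influenced:,.0f} people")
print(f"Cost per influenced person: ${cost_per_influenced_person:.2f}")
```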

EU and US legislation seek to open up digital platform data


Article by Brandie Nonnecke and Camille Carlton: “Despite the potential societal benefits of granting independent researchers access to digital platform data, such as promotion of transparency and accountability, online platform companies have few legal obligations to do so and potentially stronger business incentives not to. Without legally binding mechanisms that provide greater clarity on what and how data can be shared with independent researchers in privacy-preserving ways, platforms are unlikely to share the breadth of data necessary for robust scientific inquiry and public oversight.

Here, we discuss two notable legislative efforts aimed at opening up platform data: the Digital Services Act (DSA), recently approved by the European Parliament, and the Platform Accountability and Transparency Act (PATA), recently proposed by several US senators. Although these legislative efforts could support researchers’ access to data, they could also fall short in many ways, highlighting the complex challenges in mandating data access for independent research and oversight.

As large platforms take on increasingly influential roles in our online social, economic, and political interactions, there is a growing demand for transparency and accountability through mandated data disclosures. Research insights from platform data can help, for example, to understand unintended harms of platform use on vulnerable populations, such as children and marginalized communities; identify coordinated foreign influence campaigns targeting elections; and support public health initiatives, such as documenting the spread of antivaccine mis- and disinformation…(More)”.

Law Enforcement and Technology: Using Social Media


Congressional Research Service Report: “As the ways in which individuals interact continue to evolve, social media has had an increasing role in facilitating communication and the sharing of content online—including moderated and unmoderated, user-generated content. Over 70% of U.S. adults are estimated to have used social media in 2021. Law enforcement has also turned to social media to help in its operations. Broadly, law enforcement relies on social media as a tool for information sharing as well as for gathering information to assist in investigations.


Social Media as a Communications Tool. Social media is one of many tools law enforcement can use to connect with the community. They may use it, for instance, to push out bulletins on wanted persons and establish tip lines to crowdsource potential investigative leads. It provides degrees of speed and reach unmatched by many other forms of communication law enforcement can use to connect with the public. Officials and researchers have highlighted social media as a tool that, if used properly, can enhance community policing.

Social Media and Investigations. Social media is one tool in agencies’ investigative toolkits to help establish investigative leads and assemble evidence on potential suspects. There are no federal laws that specifically govern law enforcement agencies’ use of information obtained from social media sites, but their ability to obtain or use certain information may be influenced by social media companies’ policies as well as law enforcement agencies’ own social media policies and the rules of criminal procedure. When individuals post content on social media platforms without audience restrictions, anyone—including law enforcement—can access this content without court authorization. However, some information that individuals post on social media may be restricted—by user choice or platform policies—in the scope of audience that may access it. In the instances where law enforcement does not have public access to information, they may rely on a number of tools and techniques, such as informants or undercover operations, to gain access to it. Law enforcement may also require social media platforms to provide access to certain restricted information through a warrant, subpoena, or other court order.

Social Media and Intelligence Gathering. The use of social media to gather intelligence has generated particular interest from policymakers, analysts, and the public. Social media companies have weighed in on the issue of social media monitoring by law enforcement, and some platforms have modified their policies to expressly prohibit their user data from being used by law enforcement to monitor social media. Law enforcement agencies themselves have reportedly grappled with the extent to which they should gather and rely on information and intelligence gleaned from social media. For instance, some observers have suggested that agencies may be reluctant to regularly analyze public social media posts because that could be viewed as spying on the American public and could subsequently chill free speech protected under the First Amendment…(More)”.

The Crowdsourced Panopticon


Book by Jeremy Weissman: “Behind the omnipresent screens of our laptops and smartphones, a digitally networked public has quickly grown larger than the population of any nation on Earth. On the flipside, in front of the ubiquitous recording devices that saturate our lives, individuals are hyper-exposed through a worldwide online broadcast that encourages the public to watch, judge, rate, and rank people’s lives. The interplay of these two forces – the invisibility of the anonymous crowd and the exposure of the individual before that crowd – is a central focus of this book. Informed by critiques of conformity and mass media by some of the greatest philosophers of the past two centuries, as well as by a wide range of historical and empirical studies, Weissman helps shed light on what may happen when our lives are increasingly broadcast online for everyone all the time, to be judged by the global community…(More)”.