Weaponized Interdependence: How Global Economic Networks Shape State Coercion
Henry Farrell and Abraham L. Newman in International Security: “Liberals claim that globalization has led to fragmentation and decentralized networks of power relations. This does not explain how states increasingly “weaponize interdependence” by leveraging global networks of informational and financial exchange for strategic advantage. The theoretical literature on network topography shows how standard models predict that many networks grow asymmetrically so that some nodes are far more connected than others. This model nicely describes several key global economic networks, centering on the United States and a few other states. Highly asymmetric networks allow states with (1) effective jurisdiction over the central economic nodes and (2) appropriate domestic institutions and norms to weaponize these structural advantages for coercive ends. In particular, two mechanisms can be identified. First, states can employ the “panopticon effect” to gather strategically valuable information. Second, they can employ the “chokepoint effect” to deny network access to adversaries. Tests of the plausibility of these arguments across two extended case studies that provide variation both in the extent of U.S. jurisdiction and in the presence of domestic institutions—the SWIFT financial messaging system and the internet—confirm the framework’s expectations. A better understanding of the policy implications of the use and potential overuse of these tools, as well as the response strategies of targeted states, will recast scholarly debates on the relationship between economic globalization and state coercion….(More)”.
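The “asymmetric growth” result that Farrell and Newman invoke is standard network science: under preferential attachment, new nodes link disproportionately to already well-connected nodes, producing a few dominant hubs. A minimal Python sketch of that dynamic, using the Barabási–Albert model from networkx (the parameters are illustrative, not drawn from the article):

```python
# Minimal sketch: preferential attachment yields hub-dominated networks
# of the kind the authors describe. Parameters are illustrative.
import networkx as nx

# Barabási–Albert model: each new node attaches to m existing nodes,
# preferring nodes that are already well connected.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
top_share = sum(degrees[:10]) / sum(degrees)
print(f"Top 10 of 1,000 nodes hold {top_share:.1%} of all edge endpoints")
print(f"Max degree: {degrees[0]}; median degree: {degrees[len(degrees) // 2]}")
```

In a topology like this, an actor with jurisdiction over the hubs can observe or block a disproportionate share of all traffic, which is the structural advantage behind the panopticon and chokepoint effects.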
Community Colleges Boost STEM Student Success Through Behavioral Nudging
Press Release: “JFF, a national nonprofit driving transformation in the American workforce and education systems, and Persistence Plus, which pairs behavioral insights with intelligent text messaging to improve student success, today released the findings from an analysis that examined the effects of personalized nudging on nearly 10,000 community college students. The study, conducted over two years at four community colleges, found that behavioral nudging had a significant impact on student persistence rates—with strong improvements among students of color and older adult learners, who are often underrepresented among graduates of STEM (science, technology, engineering, and math) programs.
“These results offer powerful evidence on the potential, and imperative, of using technology to support students during the most in-demand, and often most challenging, courses and majors,” said Maria Flynn, president and CEO of JFF. “With millions of STEM jobs going unfilled, closing the gap in STEM achievement has profound economic—and equity—implications.”
In a multiyear initiative called “Nudging to STEM Success,” which was funded by the Helmsley Charitable Trust, JFF and Persistence Plus selected four colleges to implement the nudging initiative campuswide: Lakeland Community College in Kirtland, Ohio; Lorain County Community College in Elyria, Ohio; Stark State College in North Canton, Ohio; and John Tyler Community College in Chester, Virginia.
A randomized controlled trial in the summer of 2017 showed that the nudges increased first-to-second-year persistence for STEM students by 10 percentage points. The results of that trial will be presented in an upcoming peer-reviewed paper titled “A Summer Nudge Campaign to Motivate Community College STEM Students to Reenroll.” The paper will be published in AERA Open, an open-access journal published by the American Educational Research Association.
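As a hedged illustration of what a 10-percentage-point persistence gain looks like statistically, the following Python sketch runs a two-proportion z-test; the group sizes and the baseline persistence rate are assumptions for the example, not figures from the study:

```python
# Hedged sketch: testing a 10-percentage-point persistence difference.
# Group sizes (500/500) and the 45% baseline are assumed, not reported.
from statsmodels.stats.proportion import proportions_ztest

persisted = [275, 225]   # 55% (nudged) vs. 45% (control)
enrolled = [500, 500]    # assumed treatment / control group sizes

stat, pvalue = proportions_ztest(count=persisted, nobs=enrolled)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # small p: unlikely by chance
```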
Following the 2017 trial, the four colleges scaled the support to nearly 10,000 students, and over the next two years, JFF and Persistence Plus found that the nudging support had a particularly strong impact on students of color and students over the age of 25—two groups that have historically had lower persistence rates than other students….(More)”.
To Regain Policy Competence: The Software of American Public Problem-Solving
Philip Zelikow at the Texas National Security Review: “Policymaking is a discipline, a craft, and a profession. Policymakers apply specialized knowledge — about other countries, politics, diplomacy, conflict, economics, public health, and more — to the practical solution of public problems. Effective policymaking is difficult. The “hardware” of policymaking — the tools and structures of government that frame the possibilities for useful work — is obviously important. Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, the quality of the policymaking.
Like policymaking, engineering is a discipline, a craft, and a profession. Engineers learn how to apply specialized knowledge — about chemistry, physics, biology, hydraulics, electricity, and more — to the solution of practical problems. Effective engineering is similarly difficult. People work hard to learn how to practice it with professional skill. But, unlike the methods taught for engineering, the software of policy work is rarely recognized or studied. It is not adequately taught. There is no canon or norms of professional practice. American policymaking is less about deliberate engineering and more about improvised guesswork and bureaucratized habits.
My experience is as a historian who studies the details of policy episodes and the related staff work, but also as a former official who has analyzed a variety of domestic and foreign policy issues at all three levels of American government, including federal work from different bureaucratic perspectives in five presidential administrations from Ronald Reagan to Barack Obama. From this historical and contemporary vantage point, I am struck (and a bit depressed) that the quality of U.S. policy engineering is actually much, much worse in recent decades than it was throughout much of the 20th century. This is not a partisan observation — the decline spans both Republican and Democratic administrations.
I am not alone in my observations. Francis Fukuyama recently concluded that, “[T]he overall quality of the American government has been deteriorating steadily for more than a generation,” notably since the 1970s. In the United States, “the apparently irreversible increase in the scope of government has masked a large decay in its quality.”1 This worried assessment is echoed by other nonpartisan and longtime scholars who have studied the workings of American government.2 The 2003 National Commission on Public Service observed,
The notion of public service, once a noble calling proudly pursued by the most talented Americans of every generation, draws an indifferent response from today’s young people and repels many of the country’s leading private citizens. … The system has evolved not by plan or considered analysis but by accretion over time, politically inspired tinkering, and neglect. … The need to improve performance is urgent and compelling.3
And they wrote that as the American occupation of Iraq was just beginning.
In this article, I offer hypotheses to help explain why American policymaking has declined, and why it was so much more effective in the mid-20th century than it is today. I offer a brief sketch of how American education about policy work evolved over the past hundred years, and I argue that the key software qualities that made for effective policy engineering neither came out of the academy nor migrated back into it.
I then outline a template for doing and teaching policy engineering. I break the engineering methods down into three interacting sets of analytical judgments: about assessment, design, and implementation. In teaching, I lean away from new, cumbersome standalone degree programs and toward more flexible forms of education that can pair more easily with many subject-matter specializations. I emphasize the value of practicing methods in detailed and more lifelike case studies. I stress the significance of an organizational culture that prizes written staff work of the quality that used to be routine but has now degraded into bureaucratic or opinionated dross….(More)”.
#Kremlin: Using Hashtags to Analyze Russian Disinformation Strategy and Dissemination on Twitter
Paper by Sarah Oates and John Gray: “Reports of Russian interference in U.S. elections have raised grave concerns about the spread of foreign disinformation on social media sites, but there is little detailed analysis that links traditional political communication theory to social media analytics. As a result, it is difficult for researchers and analysts to gauge the nature or level of the threat that is disseminated via social media. This paper leverages both social science and data science by using traditional content analysis and Twitter analytics to trace how key aspects of Russian strategic narratives were distributed via #skripal, #mh17, #Donetsk, and #russophobia in late 2018.
This work will define how key Russian international communicative goals are expressed through strategic narratives, describe how to find hashtags that reflect those narratives, and analyze user activity around the hashtags. This tests both how Twitter amplifies specific information goals of the Russians and the relative success (or failure) of particular hashtags to spread those messages effectively. This research uses Mentionmapp, a system co-developed by one of the authors (Gray) that employs network analytics and machine intelligence to identify the behavior of Twitter users as well as generate profiles of users via posting history and connections. This study demonstrates how political communication theory can be used to frame the study of social media; how to relate knowledge of Russian strategic priorities to labels on social media such as Twitter hashtags; and how to test this approach by examining a set of Russian propaganda narratives as they are represented by hashtags. Our research finds that some Twitter users are consistently active across multiple Kremlin-linked hashtags, suggesting that knowledge of these hashtags is an important way to identify Russian propaganda online influencers. More broadly, we suggest that Twitter dichotomies such as bot/human or troll/citizen should be used with caution and analysis should instead address the nuances in Twitter use that reflect varying levels of engagement or even awareness in spreading foreign disinformation online….(More)”.
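The finding that some users are consistently active across multiple Kremlin-linked hashtags reduces to a cross-tabulation of users against monitored hashtags. A minimal Python sketch of that step (the tweet records and field names are hypothetical illustrations; this is not Mentionmapp’s actual pipeline):

```python
# Hedged sketch: flag users active across several monitored hashtags.
# Tweet records and field names are hypothetical illustrations.
from collections import defaultdict

MONITORED = {"#skripal", "#mh17", "#donetsk", "#russophobia"}

tweets = [
    {"user": "user_a", "hashtags": ["#skripal", "#mh17"]},
    {"user": "user_b", "hashtags": ["#donetsk"]},
    {"user": "user_a", "hashtags": ["#russophobia"]},
]

seen = defaultdict(set)
for tweet in tweets:
    for tag in tweet["hashtags"]:
        if tag.lower() in MONITORED:
            seen[tweet["user"]].add(tag.lower())

# Users posting under two or more monitored hashtags are candidate
# influencers in the sense the paper describes.
influencers = {user: tags for user, tags in seen.items() if len(tags) >= 2}
print(influencers)  # user_a appears under three monitored hashtags
```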
Complex Systems Change Starts with Those Who Use the Systems
Madeleine Clarke & John Healy at Stanford Social Innovation Review: “Philanthropy, especially in the United States and Europe, is increasingly espousing the idea that transformative shifts in social care, education, and health systems are needed. Yet successful examples of systems-level reform are rare. Concepts such as collective impact (funder-driven, cross-sector collaboration), implementation science (methods to promote the systematic uptake of research findings), and catalytic philanthropy (funders playing a powerful role in mobilizing fundamental reforms) have gained prominence as pathways to this kind of change. These approaches tend to characterize philanthropy—usually foundations—as the central, heroic actor. Meanwhile, research on change within social and health services continues to indicate that deeply ingrained beliefs and practices, such as overly medicalized models of care for people with intellectual disabilities, and existing resource distribution, which often maintains the pay and conditions of professional groups, inhibit the introduction of reform into complex systems. A recent report by RAND, for example, showed that a $1 billion, seven-year initiative to improve teacher performance failed, and cited the complexity of the system and practitioners’ resistance to change as possible explanations.
We believe the most effective way to promote systems-level social change is to place the voices of people who use social services—the people for whom change matters most—at the center of change processes. But while many philanthropic organizations tout the importance of listening to the “end beneficiaries” or “service users,” the practice nevertheless remains an underutilized methodology for countering systemic obstacles to change and, ultimately, reforming complex systems….(More)”.
Data-Sharing in IoT Ecosystems From a Competition Law Perspective: The Example of Connected Cars
Paper by Wolfgang Kerber: “…analyses whether competition law can help to solve problems of access to data and interoperability in IoT ecosystems, where often one firm has exclusive control of the data produced by a smart device (and of the technical access to this device). Such a gatekeeper position can lead to the elimination of competition for aftermarket and other complementary services in such IoT ecosystems. This problem is analysed both from an economic and a legal perspective, and also generally for IoT ecosystems as well as for the much discussed problems of “access to in-vehicle data and resources” in connected cars, where the “extended vehicle” concept of the car manufacturers leads to such positions of exclusive control. The paper analyses, in particular, the competition rules about abusive behavior of dominant firms (Art. 102 TFEU) and of firms with “relative market power” (§ 20 (1) GWB) in German competition law. These provisions might offer (if appropriately applied and amended) at least some solutions for these data access problems. Competition law, however, might not be sufficient for dealing with all or most of these problems, i.e., additional solutions might also be needed (data portability, direct data (access) rights, or sector-specific regulation)….(More)”.
Algorithmic Censorship on Social Platforms: Power, Legitimacy, and Resistance
Paper by Jennifer Cobbe: “Effective content moderation by social platforms has long been recognised as both important and difficult, with numerous issues arising from the volume of information to be dealt with, the culturally sensitive and contextual nature of that information, and the nuances of human communication. Attempting to scale moderation efforts, various platforms have adopted, or signalled their intention to adopt, increasingly automated approaches to identifying and suppressing content and communications that they deem undesirable. However, algorithmic forms of online censorship by social platforms bring their own concerns, including the extensive surveillance of communications and the use of machine learning systems with the distinct possibility of errors and biases. This paper adopts a governmentality lens to examine algorithmic censorship by social platforms in order to assist in the development of a more comprehensive understanding of the risks of such approaches to content moderation. This analysis shows that algorithmic censorship is distinctive for two reasons: (1) it would potentially bring all communications carried out on social platforms within reach, and (2) it would potentially allow those platforms to take a much more active, interventionist approach to moderating those communications. Consequently, algorithmic censorship could allow social platforms to exercise an unprecedented degree of control over both public and private communications, with poor transparency, weak or non-existent accountability mechanisms, and little legitimacy. Moreover, commercial considerations would be inserted further into the everyday communications of billions of people. Due to the dominance of the web by a small number of social platforms, this control may be difficult or impractical to escape for many people, although opportunities for resistance do exist.
While automating content moderation may seem like an attractive proposition for both governments and platforms themselves, the issues identified in this paper are cause for concern and should be given serious consideration….(More)”.
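To make the error-and-bias concern concrete, consider a deliberately crude sketch of threshold-based automated moderation; the classifier scores and the threshold are hypothetical, not any platform’s actual system:

```python
# Hedged sketch of threshold moderation: every post passes through the
# filter, and misclassification cuts both ways. Scores are hypothetical.
posts = [
    ("benign slang a classifier misreads", 0.83),  # false positive
    ("plainly harmful content", 0.91),             # true positive
    ("harmful content phrased obliquely", 0.40),   # false negative
]
THRESHOLD = 0.8  # set by the platform, invisible to users

for text, toxicity_score in posts:
    action = "suppress" if toxicity_score >= THRESHOLD else "allow"
    print(f"{action}: {text!r} (score={toxicity_score})")
```

Even this toy version shows the paper’s two points: everything flows through the filter, and the threshold and training data embed judgments that users cannot see or contest.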
How big data can affect your bank account – and life
Alena Buyx, Barbara Prainsack and Aisling McMahon at The Conversation: “Mustafa loves good coffee. In his free time, he often browses high-end coffee machines that he cannot currently afford but is saving for. One day, travelling to a friend’s wedding abroad, he gets to sit next to another friend on the plane. When Mustafa complains about how much he paid for his ticket, it turns out that his friend paid less than half of what he paid, even though they booked around the same time.
He looks into possible reasons for this and concludes that it must be related to his browsing of expensive coffee machines and equipment. He is very angry about this and complains to the airline, who send him a lukewarm apology that refers to personalised pricing models. Mustafa feels that this is unfair but does not challenge it. Pursuing it any further would cost him time and money.
This story – which is hypothetical, but can and does occur – demonstrates the potential for people to be harmed by data use in the current “big data” era. Big data analytics involves using large amounts of data from many sources which are linked and analysed to find patterns that help to predict human behaviour. Such analysis, even when perfectly legal, can harm people.
Mustafa, for example, has likely been affected by personalised pricing practices whereby his search for high-end coffee machines has been used to make certain assumptions about his willingness to pay or buying power. This in turn may have led to his higher priced airfare. While this has not resulted in serious harm in Mustafa’s case, instances of serious emotional and financial harm are, unfortunately, not rare, including the denial of mortgages for individuals and risks to a person’s general creditworthiness based on associations with other individuals. This might happen if an individual shares characteristics with other individuals who have poor repayment histories….(More)”.
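The mechanism the authors describe can be illustrated with a deliberately simplified Python sketch; the browsing signals, weights, and markup are invented for the example, since real personalised-pricing models are proprietary and far more complex:

```python
# Hedged toy sketch of personalised pricing: browsing history serves as
# a proxy for willingness to pay. All signals and weights are invented.
BASE_FARE = 200.0

def personalised_fare(browsing_history: list[str]) -> float:
    luxury_signals = {"high-end coffee machine", "designer watch"}
    hits = sum(1 for item in browsing_history if item in luxury_signals)
    multiplier = 1.0 + 0.25 * min(hits, 2)  # cap the markup at +50%
    return BASE_FARE * multiplier

print(personalised_fare([]))                           # 200.0
print(personalised_fare(["high-end coffee machine"]))  # 250.0
```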
Making data colonialism liveable: how might data’s social order be regulated?
Paper by Nick Couldry & Ulises Mejias: “Humanity is currently undergoing a large-scale social, economic and legal transformation based on the massive appropriation of social life through data extraction. This quantification of the social represents a new colonial move. While the modes, intensities, scales and contexts of dispossession have changed, the underlying drive of today’s data colonialism remains the same: to acquire “territory” and resources from which economic value can be extracted by capital. The injustices embedded in this system need to be made “liveable” through a new legal and regulatory order….(More)”.
Towards “Government as a Platform”? Preliminary Lessons from Australia, the United Kingdom and the United States
Paper by J. Ramon Gil-Garcia, Paul Henman, and Martha Alicia Avila-Maravilla: “In the last two decades, Internet portals have been used by governments around the world as part of very diverse strategies from service provision to citizen engagement. Several authors propose that there is an evolution of digital government reflected in the functionality and sophistication of these portals and other technologies. More recently, scholars and practitioners are proposing different conceptualizations of “government as a platform” and, for some, this could be the next stage of digital government. However, it is not clear what the main differences are between a sophisticated Internet portal and a platform. Therefore, based on an analysis of three of the most advanced national portals, this ongoing research paper explores to what extent these digital efforts clearly represent the basic characteristics of platforms. So, this paper explores questions such as: (1) To what extent do current national portals reflect the characteristics of what has been called “government as a platform”? and (2) Are current national portals evolving towards “government as a platform”?…(More)”.
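One way to see the portal/platform distinction the paper probes: a portal renders pages for citizens to read, while a platform also exposes shared services that third parties can build on. A hedged Python sketch of that platform characteristic (the endpoint, registry, and data are hypothetical, not any national portal’s actual interface):

```python
# Hedged sketch: a "platform" trait is a reusable, machine-readable API
# over shared registries. Endpoint and data are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/api/v1/registries/businesses/<business_id>")
def business_record(business_id: str):
    # In a real platform this would query a shared national registry.
    record = {"id": business_id, "name": "Example Pty Ltd", "status": "active"}
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8000)
```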