The Upside of Deep Fakes


Paper by Jessica M. Silbey and Woodrow Hartzog: “It’s bad. We know. The dawn of “deep fakes” — convincing videos and images of people doing things they never did or said — puts us all in jeopardy in several different ways. Professors Bobby Chesney and Danielle Citron have noted that now “false claims — even preposterous ones — can be peddled with unprecedented success today thanks to a combination of social media ubiquity and virality, cognitive biases, filter bubbles, and group polarization.” The scholars identify a host of harms from deep fakes, ranging from people being exploited, extorted, and sabotaged, to societal harms like the erosion of democratic discourse and trust in social institutions, undermining public safety, national security, journalism, and diplomacy, deepening social divisions, and manipulation of elections. But it might not be all bad. Even beyond purported beneficial uses of deep-fake technology for education, art, and science, the looming deep-fake disaster might have a silver lining. Hear us out. We think deep fakes have an upside.

Crucial to our argument is the idea that deep fakes don’t create new problems so much as make existing problems worse. Cracks in systems, frameworks, strategies, and institutions that have been leaking for years now threaten to spring open. Journalism, education, individual rights, democratic systems, and voting protocols have long been vulnerable. Deep fakes might just be the straw that breaks them. And therein lies opportunity for repair. Below we briefly address some deep problems and how finally addressing them may also neutralize the destructive force of deep fakes. We describe only three cultural institutions — education, journalism, and representative democracy — with deep problems that could be strengthened as a response to deep fakes for greater societal gains. But we encourage readers to think up more. We have a hunch that once we harness the upside of deep fakes, we may unlock creative solutions to other sticky social and political problems…(More)”.

The Social Afterlife


Paper by Andrew Gilden: “Death is not what it used to be. With the rise of social media and advances in digital technology, postmortem decision-making increasingly involves difficult questions about the ongoing social presence of the deceased. Should a Twitter account keep tweeting? Should a YouTube singer keep singing? Should Tinder photos be swiped left for the very last time? The traditional touchstones of effective estate planning — reducing transaction costs and maximizing estate value — do little to guide this new social afterlife. Managing a person’s legacy has shifted away from questions of financial investment and asset management to questions of emotional and cultural stewardship. This Article brings together the diverse areas of law that shape a person’s legacy and develops a new framework for addressing the evolving challenges of legacy stewardship.

This Article makes two main contributions. First, it identifies and critically examines the four models of stewardship that currently structure the laws of legacy: (1) the “freedom of disposition” model dominant in the laws of wills and trusts, (2) the “family inheritance” model dominant in copyright law, (3) the “public domain” model dominant in many states’ publicity rights laws, and (4) the “consumer contract” model dominant in over forty states’ new digital assets laws. Second, this Article develops a new stewardship model, which it calls the “decentered decedent.” The decentered decedent model recognizes that individuals occupy heterogeneous social contexts, and it channels postmortem decision-making into each of those contexts. Unlike existing stewardship models, this new model does not try to centralize stewardship decisions in any one stakeholder — the family, the public, the market, or even the decedent themselves. Instead, the decentered decedent model distributes stewardship across the diverse, dispersed communities that we all leave behind….(More)”.

When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance


Paper by David Rozas, Antonio Tenorio-Fornés, Silvia Díaz-Molina, and Samer Hassan: “Blockchain technologies have generated excitement, yet their potential to enable new forms of governance remains largely unexplored. Two confronting standpoints dominate the emergent debate around blockchain-based governance: discourses characterised by the presence of techno-determinist and market-driven values, which tend to ignore the complexity of social organisation; and critical accounts of such discourses which, whilst contributing to identifying limitations, consider the role of traditional centralised institutions as inherently necessary to enable democratic forms of governance. Therefore the question arises: can we build perspectives of blockchain-based governance that go beyond markets and states?

In this article we draw on the Nobel laureate economist Elinor Ostrom’s principles for self-governance of communities to explore the transformative potential of blockchain. We approach blockchain through the identification and conceptualisation of affordances that this technology may provide to communities. For each affordance, we carry out a detailed analysis situating each in the context of Ostrom’s principles, considering both the potentials of algorithmic governance and the importance of incorporating communities’ social practices. The relationships found between these affordances and Ostrom’s principles allow us to provide a perspective focussed on blockchain-based commons governance. By carrying out this analysis, we aim to expand the debate from one dominated by a culture of competition to one that promotes a culture of cooperation…(More)”.

Agora: Towards An Open Ecosystem for Democratizing Data Science & Artificial Intelligence


Paper by Jonas Traub et al: “Data science and artificial intelligence are driven by a plethora of diverse data-related assets including datasets, data streams, algorithms, processing software, compute resources, and domain knowledge. As providing all these assets requires a huge investment, data science and artificial intelligence are currently dominated by a small number of providers who can afford these investments. In this paper, we present a vision of a data ecosystem to democratize data science and artificial intelligence. In particular, we envision a data infrastructure for fine-grained asset exchange in combination with scalable systems operation. This will overcome lock-in effects and remove entry barriers for new asset providers. Our goal is to enable companies, research organizations, and individuals to have equal access to data, data science, and artificial intelligence. Such an open ecosystem has recently been put on the agenda of several governments and industrial associations. We point out the requirements and the research challenges as well as outline an initial data infrastructure architecture for building such a data ecosystem…(More)”.

Experimental Innovation Policy


Paper by Albert Bravo-Biosca: “Experimental approaches are increasingly being adopted across many policy fields, but innovation policy has been lagging. This paper reviews the case for policy experimentation in this field, describes the different types of experiments that can be undertaken, discusses some of the unique challenges to the use of experimental approaches in innovation policy, and summarizes some of the emerging lessons, with a focus on randomized trials. The paper concludes by describing how at the Innovation Growth Lab we have been working with governments across the OECD to help them overcome the barriers to policy experimentation in order to make their policies more impactful….(More)”.

Weaponized Interdependence: How Global Economic Networks Shape State Coercion


Henry Farrell and Abraham L. Newman in International Security: “Liberals claim that globalization has led to fragmentation and decentralized networks of power relations. This does not explain how states increasingly “weaponize interdependence” by leveraging global networks of informational and financial exchange for strategic advantage. The theoretical literature on network topography shows how standard models predict that many networks grow asymmetrically so that some nodes are far more connected than others. This model nicely describes several key global economic networks, centering on the United States and a few other states. Highly asymmetric networks allow states with (1) effective jurisdiction over the central economic nodes and (2) appropriate domestic institutions and norms to weaponize these structural advantages for coercive ends. In particular, two mechanisms can be identified. First, states can employ the “panopticon effect” to gather strategically valuable information. Second, they can employ the “chokepoint effect” to deny network access to adversaries. Tests of the plausibility of these arguments across two extended case studies that provide variation both in the extent of U.S. jurisdiction and in the presence of domestic institutions—the SWIFT financial messaging system and the internet—confirm the framework’s expectations. A better understanding of the policy implications of the use and potential overuse of these tools, as well as the response strategies of targeted states, will recast scholarly debates on the relationship between economic globalization and state coercion….(More)”
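The authors' premise that standard network models predict asymmetric growth, with a few nodes far more connected than the rest, can be illustrated with a preferential-attachment simulation in the spirit of the Barabási-Albert model. The sketch below is purely illustrative and is not drawn from the paper; the node count and attachment parameter are arbitrary assumptions.

```python
import random

def preferential_attachment(n_nodes, m=2, seed=42):
    """Grow a network in which each new node links to m existing nodes,
    chosen with probability proportional to their current degree
    ("rich get richer"), as in the Barabasi-Albert model."""
    rng = random.Random(seed)
    # Seed network: a small core of m + 1 nodes, each with degree m.
    degree = {i: m for i in range(m + 1)}
    # Each node appears in `targets` once per unit of degree, so a uniform
    # draw from `targets` is a degree-proportional draw over nodes.
    targets = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:          # m distinct, degree-weighted targets
            chosen.add(rng.choice(targets))
        degree[new] = m
        for t in chosen:
            degree[t] += 1
            targets.extend([new, t])    # keep `targets` degree-proportional
    return degree

degree = preferential_attachment(1000)
top = sorted(degree.values(), reverse=True)
# A handful of early "hub" nodes end up far better connected than the
# median node, mirroring the asymmetry the authors describe in global
# financial and communication networks.
```

NetworkX's `barabasi_albert_graph` provides an equivalent, better-tested generator; the point of the sketch is only the mechanism: degree-proportional attachment alone is enough to produce central hubs, and hence potential chokepoints.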

Community Colleges Boost STEM Student Success Through Behavioral Nudging


Press Release: “JFF, a national nonprofit driving transformation in the American workforce and education systems, and Persistence Plus, which pairs behavioral insights with intelligent text messaging to improve student success, today released the findings from an analysis that examined the effects of personalized nudging on nearly 10,000 community college students. The study, conducted over two years at four community colleges, found that behavioral nudging had a significant impact on student persistence rates—with strong improvements among students of color and older adult learners, who are often underrepresented among graduates of STEM (science, technology, engineering, and math) programs.

“These results offer powerful evidence on the potential, and imperative, of using technology to support students during the most in-demand, and often most challenging, courses and majors,” said Maria Flynn, president and CEO of JFF. “With millions of STEM jobs going unfilled, closing the gap in STEM achievement has profound economic—and equity—implications.” 

In a multiyear initiative called “Nudging to STEM Success,” which was funded by the Helmsley Charitable Trust, JFF and Persistence Plus selected four colleges to implement the nudging initiative campuswide: Lakeland Community College in Kirtland, Ohio; Lorain County Community College in Elyria, Ohio; Stark State College in North Canton, Ohio; and John Tyler Community College in Chester, Virginia.

A randomized controlled trial in the summer of 2017 showed that the nudges increased first-to-second-year persistence for STEM students by 10 percentage points. The results of that trial will be presented in an upcoming peer-reviewed paper titled “A Summer Nudge Campaign to Motivate Community College STEM Students to Reenroll.” The paper will be published in AERA Open, an open-access journal published by the American Educational Research Association.

Following the 2017 trial, the four colleges scaled the support to nearly 10,000 students, and over the next two years, JFF and Persistence Plus found that the nudging support had a particularly strong impact on students of color and students over the age of 25—two groups that have historically had lower persistence rates than other students….(More)”.
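The headline finding, a 10-percentage-point lift in persistence from a randomized trial, is the kind of difference typically assessed with a two-proportion z-test. The group sizes and baseline rate below are hypothetical placeholders (the press release does not report them), so this is only a sketch of how such a difference would be tested, not a reproduction of the study's analysis.

```python
from math import erf, sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions,
    e.g. persistence rates in nudged vs. non-nudged groups."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF,
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2.
    p_value = 2 * (1 - (1 + erf(abs(z) / sqrt(2))) / 2)
    return p_a - p_b, z, p_value

# Hypothetical: 275/500 nudged students persist (55%) vs. 225/500 controls (45%).
diff, z, p = two_proportion_ztest(275, 500, 225, 500)
# A 10-point gap at this sample size is statistically significant (p < 0.01).
```

In practice an evaluation like this would also adjust for covariates and for clustering by campus; the raw z-test is only a first-pass plausibility check on whether a reported effect could be noise.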

To Regain Policy Competence: The Software of American Public Problem-Solving


Philip Zelikow at the Texas National Security Review: “Policymaking is a discipline, a craft, and a profession. Policymakers apply specialized knowledge — about other countries, politics, diplomacy, conflict, economics, public health, and more — to the practical solution of public problems. Effective policymaking is difficult. The “hardware” of policymaking — the tools and structures of government that frame the possibilities for useful work — is obviously important. Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, the quality of the policymaking.

Like policymaking, engineering is a discipline, a craft, and a profession. Engineers learn how to apply specialized knowledge — about chemistry, physics, biology, hydraulics, electricity, and more — to the solution of practical problems. Effective engineering is similarly difficult. People work hard to learn how to practice it with professional skill. But, unlike the methods taught for engineering, the software of policy work is rarely recognized or studied. It is not adequately taught. There is no canon or norms of professional practice. American policymaking is less about deliberate engineering and more about improvised guesswork and bureaucratized habits.

My experience is as a historian who studies the details of policy episodes and the related staff work, but also as a former official who has analyzed a variety of domestic and foreign policy issues at all three levels of American government, including federal work from different bureaucratic perspectives in five presidential administrations from Ronald Reagan to Barack Obama. From this historical and contemporary vantage point, I am struck (and a bit depressed) that the quality of U.S. policy engineering is actually much, much worse in recent decades than it was throughout much of the 20th century. This is not a partisan observation — the decline spans both Republican and Democratic administrations.

I am not alone in my observations. Francis Fukuyama recently concluded that, “[T]he overall quality of the American government has been deteriorating steadily for more than a generation,” notably since the 1970s. In the United States, “the apparently irreversible increase in the scope of government has masked a large decay in its quality.”1 This worried assessment is echoed by other nonpartisan and longtime scholars who have studied the workings of American government.2 The 2003 National Commission on Public Service observed,

The notion of public service, once a noble calling proudly pursued by the most talented Americans of every generation, draws an indifferent response from today’s young people and repels many of the country’s leading private citizens. … The system has evolved not by plan or considered analysis but by accretion over time, politically inspired tinkering, and neglect. … The need to improve performance is urgent and compelling.3

And they wrote that as the American occupation of Iraq was just beginning.

In this article, I offer hypotheses to help explain why American policymaking has declined, and why it was so much more effective in the mid-20th century than it is today. I offer a brief sketch of how American education about policy work evolved over the past hundred years, and I argue that the key software qualities that made for effective policy engineering neither came out of the academy nor migrated back into it.

I then outline a template for doing and teaching policy engineering. I break the engineering methods down into three interacting sets of analytical judgments: about assessment, design, and implementation. In teaching, I lean away from new, cumbersome standalone degree programs and toward more flexible forms of education that can pair more easily with many subject-matter specializations. I emphasize the value of practicing methods in detailed and more lifelike case studies. I stress the significance of an organizational culture that prizes written staff work of the quality that used to be routine but has now degraded into bureaucratic or opinionated dross….(More)”.

Data-Sharing in IoT Ecosystems From a Competition Law Perspective: The Example of Connected Cars


Paper by Wolfgang Kerber: “…analyses whether competition law can help to solve problems of access to data and interoperability in IoT ecosystems, where often one firm has exclusive control of the data produced by a smart device (and of the technical access to this device). Such a gatekeeper position can lead to the elimination of competition for aftermarket and other complementary services in such IoT ecosystems. This problem is analysed both from an economic and a legal perspective, and also generally for IoT ecosystems as well as for the much discussed problems of “access to in-vehicle data and resources” in connected cars, where the “extended vehicle” concept of the car manufacturers leads to such positions of exclusive control. The paper analyses, in particular, the competition rules about abusive behavior of dominant firms (Art. 102 TFEU) and of firms with “relative market power” (§ 20 (1) GWB) in German competition law. These provisions might offer (if appropriately applied and amended) at least some solutions for these data access problems. Competition law, however, might not be sufficient for dealing with all or most of these problems, i.e., additional solutions might also be needed (data portability, direct data (access) rights, or sector-specific regulation)….(More)”.

Algorithmic Censorship on Social Platforms: Power, Legitimacy, and Resistance


Paper by Jennifer Cobbe: “Effective content moderation by social platforms has long been recognised as both important and difficult, with numerous issues arising from the volume of information to be dealt with, the culturally sensitive and contextual nature of that information, and the nuances of human communication. Attempting to scale moderation efforts, various platforms have adopted, or signalled their intention to adopt, increasingly automated approaches to identifying and suppressing content and communications that they deem undesirable. However, algorithmic forms of online censorship by social platforms bring their own concerns, including the extensive surveillance of communications and the use of machine learning systems with the distinct possibility of errors and biases. This paper adopts a governmentality lens to examine algorithmic censorship by social platforms in order to assist in the development of a more comprehensive understanding of the risks of such approaches to content moderation. This analysis shows that algorithmic censorship is distinctive for two reasons: (1) it would potentially bring all communications carried out on social platforms within reach, and (2) it would potentially allow those platforms to take a much more active, interventionist approach to moderating those communications. Consequently, algorithmic censorship could allow social platforms to exercise an unprecedented degree of control over both public and private communications, with poor transparency, weak or non-existent accountability mechanisms, and little legitimacy. Moreover, commercial considerations would be inserted further into the everyday communications of billions of people. Due to the dominance of the web by a small number of social platforms, this control may be difficult or impractical to escape for many people, although opportunities for resistance do exist.

While automating content moderation may seem like an attractive proposition for both governments and platforms themselves, the issues identified in this paper are cause for concern and should be given serious consideration….(More)”.