Revisiting the Behavioral Revolution in Economics 

Article by Antara Haldar: “But the impact of the behavioral revolution outside of microeconomics remains modest. Many scholars are still skeptical about incorporating psychological insights into economics, a field that often models itself after the natural sciences, particularly physics. This skepticism has been further compounded by the widely publicized crisis of replication in psychology.

Macroeconomists, who study the aggregate functioning of economies and explore the impact of factors such as output, inflation, exchange rates, and monetary and fiscal policy, have, in particular, largely ignored the behavioral trend. Their indifference seems to reflect the belief that individual idiosyncrasies balance out, and that the quirky departures from rationality identified by behavioral economists must offset each other. A direct implication of this approach is that quantitative analyses predicated on value-maximizing behavior, such as the dynamic stochastic general equilibrium models that dominate policymaking, need not be improved.

The validity of these assumptions, however, remains uncertain. During banking crises such as the Great Recession of 2008 or the ongoing crisis triggered by the recent collapse of Silicon Valley Bank, the reactions of economic actors – particularly financial institutions and investors – appear to be driven by herd mentality and what John Maynard Keynes referred to as “animal spirits.”…

The roots of economics’ resistance to the behavioral sciences run deep. Over the past few decades, the field has acknowledged exceptions to the prevailing neoclassical paradigm, such as Elinor Ostrom’s solutions to the tragedy of the commons and George Akerlof, Michael Spence, and Joseph E. Stiglitz’s work on asymmetric information (all four won the Nobel Prize). At the same time, economists have refused to update the discipline’s core assumptions.

This state of affairs can be likened to an imperial government that claims to uphold the rule of law in its colonies. By allowing for a limited release of pressure at the periphery of the paradigm, economists have managed to prevent significant changes that might undermine the entire system. Meanwhile, the core principles of the prevailing economic model remain largely unchanged.

For economics to reflect human behavior, much less influence it, the discipline must actively engage with human psychology. But as the list of acknowledged exceptions to the neoclassical framework grows, each subsequent breakthrough becomes a potentially existential challenge to the field’s established paradigm, undermining the seductive parsimony that has been the source of its power.

By limiting their interventions to nudges, behavioral economists hoped to align themselves with the discipline. But in doing so, they delivered a ratings-conscious “made for TV” version of a revolution. As Gil Scott-Heron famously reminded us, the real thing will not be televised….(More)”.

How We Ruined The Internet

Paper by Micah Beck and Terry Moore: “At the end of the 19th century the logician C.S. Peirce coined the term “fallibilism” for “the doctrine that our knowledge is never absolute but always swims, as it were, in a continuum of uncertainty and of indeterminacy”. In terms of scientific practice, this means we are obliged to reexamine the assumptions, the evidence, and the arguments for conclusions that subsequent experience has cast into doubt. In this paper we examine an assumption that underpinned the development of the Internet architecture, namely that a loosely synchronous point-to-point datagram delivery service could adequately meet the needs of all network applications, including those which deliver content and services to a mass audience at global scale. We examine how the inability of the Networking community to provide a public and affordable mechanism to support such asynchronous point-to-multipoint applications led to the development of private overlay infrastructure, namely CDNs and Cloud networks, whose architecture stands at odds with the Open Data Networking goals of the early Internet advocates. We argue that the contradiction between those initial goals and the monopolistic commercial imperatives of hypergiant overlay infrastructure operators is an important reason for the apparent contradiction posed by the negative impact of their most profitable applications (e.g., social media) and strategies (e.g., targeted advertisement). We propose that, following the prescription of Peirce, we can only resolve this contradiction by reconsidering some of our deeply held assumptions…(More)”.

How civic capacity gets urban social innovations started

Article by Christof Brandtner: “After President Trump withdrew from the Paris Climate Accords, several hundred mayors signed national and global treaties announcing their commitments to “step up and do more,” as a senior official of the City of New York told me in a poorly lit room in 2017. Cities were rushing to the forefront of adopting practices and policies to address contemporary social and environmental problems, such as climate change.

What the general enthusiasm masked was significant variation in the extent and speed at which cities adopt these innovations…My study of the geographic dispersion of green buildings certified with the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) rating system, published in the American Journal of Sociology, suggests that the organizational communities within cities play a significant role in adopting urban innovations. Cities with a robust civic capacity, where values-oriented organizations actively address social problems, are more likely to adopt new practices quickly and extensively. Civic capacity matters not only through structural channels, as a sign of ample resources and community social capital, but also through organizational channels. Values-oriented organizations are often early adopters of new practices, such as green construction, solar panels, electric vehicles, or equitable hiring practices. By creating proofs of concept, these early adopters can serve as catalysts of municipal policies and widespread adoption…(More)”.

Digital Technologies in Emerging Countries

Open Access Book edited by Francis Fukuyama and Marietje Schaake: “…While there has been a tremendous upsurge in scholarly research into the political and social impacts of digital technologies, the vast majority of this work has tended to focus on rich countries in North America and Europe. Both regions had high levels of internet penetration and the state capacity to take on—potentially, at any rate—regulatory issues raised by digitization….The current volume is an initial effort to rectify the imbalance in the way that centers and programs such as ours look at the world, by focusing on what might broadly be labeled the “global south,” which we have labeled “emerging countries” (ECs). Countries and regions outside of North America and Europe face opportunities and challenges similar to those of developed regions, but also problems that are uniquely their own…(More)”.

The History of Rules

Interview with Lorraine Daston: “The rules book began with an everyday observation of the dazzling variety and ubiquity of rules. Every culture has rules, but they’re all different.

I eventually settled on three major meanings of rules: rules as laws, rules as algorithms, and finally, rules as models. The latter meaning was predominant in the Western tradition until the end of the 18th century, and I set out to trace what happened to rules as models, but also the rise of algorithmic rules. It’s hard to imagine now, but the word algorithm didn’t even have an entry in the most comprehensive mathematical encyclopedias of the late 19th century.

To get at these changes over time, I cast my nets very wide. I looked at cookbooks, I looked at the rules of warfare. I looked at rules of games. I looked at rules of monastic orders and traffic regulations, sumptuary regulations, spelling rules, and of course algorithms for how to calculate. And if there’s one take-home message from the book, it is a distinction between thick and thin rules.

Thick rules are rules that come upholstered with all manner of qualifications, examples, caveats, and exceptions. They are rules that are braced to confront a world in which recalcitrant particulars refuse to conform to universals—as opposed to thin rules, of which algorithms are perhaps the best prototype: thin rules are formulated without attention to circumstances. Thin rules brook no quarter, they offer no sense of a variable world. Many bureaucratic rules, especially bureaucratic rules in their Kafkaesque exaggeration, also fit this description.

The arc of the book is not to describe how thick rules became thin rules (because we still have thick and thin rules around us all the time), but rather to determine the point at which thick rules become necessary—when you must anticipate high variability and therefore must tweak your rule to fit circumstances—as opposed to the stable, predictable settings in which we turn to thin rules.

In some historically exceptional cases, thin rules can actually get a job done because the context can be standardized and stabilized…(More)”.

The Metaverse and Homeland Security

Report by Timothy Marler, Zara Fatima Abdurahaman, Benjamin Boudreaux, and Timothy R. Gulden: “The metaverse is an emerging concept and capability supported by multiple underlying emerging technologies, but its meaning and key characteristics can be unclear and will likely change over time. Thus, its relevance to some organizations, such as the U.S. Department of Homeland Security (DHS), can be unclear. This lack of clarity can lead to unmitigated threats and missed opportunities. It can also inhibit healthy public discourse and effective technology management generally. To help address these issues, this Perspective provides an initial review of the metaverse concept and how it might be relevant to DHS. As a critical first step in the analysis of any emerging technology, the authors review current definitions and identify key practical characteristics. Often, regardless of a precise definition, it is the fundamental capabilities that are central to discussion and management. Then, given a foundational understanding of what a metaverse entails, the authors summarize primary goals and relevant needs for DHS. Ultimately, in order to be relevant, technologies must align with actual needs for various organizations or users. By cross-walking exemplary DHS needs that stem from a variety of mission sets with pervasive characteristics of metaverses, the authors demonstrate that metaverses are, in fact, relevant to DHS. Finally, the authors identify specific threats and opportunities that DHS could proactively manage. Although this work focuses the discussion of threats and opportunities on DHS, it has broad implications. This work provides a foundation on which further discussions and research can build, minimizing disparities and discoordination in development and policy…(More)”.

Technological Obsolescence

Essay by Jonathan Coopersmith: “In addition to killing over a million Americans, Covid-19 revealed embarrassing failures of local, state, and national public health systems to accurately and effectively collect, transmit, and process information. To some critics and reporters, the visible and easily understood face of those failures was the continued use of fax machines.

In reality, the critics were attacking the symptom, not the problem. Instead of asking “why were people still using fax machines?”, the better question was “what factors made fax machines more attractive than more capable technologies?” Those answers provide a better window into the complex, evolving world of technological obsolescence, a key component of our modern world—and on a smaller scale, provide a template to decide whether the NAE and other organizations should retain their fax machines.

The marketing dictionary of Monash University Business School defines technological obsolescence as “when a technical product or service is no longer needed or wanted even though it could still be in working order.” Significantly, the source is a business school, which implies strong economic and social factors in decision making about technology.  

Determining technological obsolescence depends not just on creators and promoters of new technologies but also on users, providers, funders, accountants, managers, standards setters—and, most importantly, competing needs and options. In short, it’s complicated.  

Like most aspects of technology, perspectives on obsolescence depend on your position. If existing technology meets your needs, upgrading may not seem worth the resources needed (e.g., for purchase and training). If, on the other hand, your firm or organization depends on income from providing, installing, servicing, training, advising, or otherwise benefiting from a new technology, not upgrading could jeopardize your future, especially in a very competitive market. And if you cannot find the resources to upgrade, you—and your users—may incur both visible and invisible costs…(More)”.

The promise and pitfalls of the metaverse for science

Paper by Diego Gómez-Zará, Peter Schiffer & Dashun Wang: “The future of the metaverse remains uncertain and continues to evolve, as was the case for many technological advances of the past. Now is the time for scientists, policymakers and research institutions to start considering actions to capture the potential of the metaverse and take concrete steps to avoid its pitfalls. Proactive investments in the form of competitive grants, internal agency efforts and infrastructure building should be considered, supporting innovation and adaptation to the future in which the metaverse may be more pervasive in society.

Government agencies and other research funders could also have a critical role in funding and promoting interoperability and shared protocols among different metaverse technologies and environments. These aspects will help the scientific research community to ensure broad adoption and reproducibility. For example, government research agencies may create an open and publicly accessible metaverse platform with open-source code and standard protocols that can be translated to commercial platforms as needed. In the USA, an agency such as the National Institute of Standards and Technology could set standards for protocols that are suitable for the research enterprise or, alternatively, an international convention could set global standards. Similarly, an agency such as the National Institutes of Health could leverage its extensive portfolio of behavioural research and build and maintain a metaverse for human subjects studies. Within such an ecosystem, researchers could develop and implement their own research protocols with appropriate protections, standardized and reproducible conditions, and secure data management. A publicly sponsored research-focused metaverse — which could be cross-compatible with commercial platforms — may create and capture substantial value for science, from augmenting scientific productivity to protecting research integrity.

There are important precedents for this sort of action in that governments and universities have built open repositories for data in fields such as astronomy and crystallography, and both the US National Science Foundation and the US Department of Energy have built and maintained high-performance computing environments that are available to the broader research community. Such efforts could be replicated and adapted for emerging metaverse technologies, which would be especially beneficial for under-resourced institutions to access and leverage common resources. Critically, the encouragement of private sector innovation and the development of public–private alliances must be balanced with the need for interoperability, openness and accessibility to the broader research community…(More)”.

Best Practices for Disclosure and Citation When Using Artificial Intelligence Tools

Article by Mark Shope: “This article is intended to be a best practices guide for disclosing the use of artificial intelligence tools in legal writing. The article focuses on using artificial intelligence tools that aid in drafting textual material, specifically in law review articles and law school courses. The article’s approach to disclosure and citation is intended to be a starting point for authors, institutions, and academic communities to tailor based on their own established norms and philosophies. Throughout the entire article, the author has used ChatGPT to provide examples of how artificial intelligence tools can be used in writing and how the output of artificial intelligence tools can be expressed in text, including examples of how that use and text should be disclosed and cited. The article also includes policies for professors to use in their classrooms and for journals to use in their submission guidelines…(More)”

A Global Digital Compact — an Open, Free and Secure Digital Future for All

UN Secretary General: “…The present brief proposes the development of a Global Digital Compact that would set out principles, objectives and actions for advancing an open, free, secure and human-centred digital future, one that is anchored in universal human rights and that enables the attainment of the Sustainable Development Goals. It outlines areas in which the need for multi-stakeholder digital cooperation is urgent and sets out how a Global Digital Compact can help to realize the commitment in the declaration on the commemoration of the seventy-fifth anniversary of the United Nations (General Assembly resolution 75/1) to “shaping a shared vision on digital cooperation” by providing an inclusive global framework. Such a framework is essential for the multi-stakeholder action required to overcome digital, data and innovation divides and to achieve the governance required for a sustainable digital future.
Our digital world is one of divides. In 2002, when governments first recognized the challenge of the digital divide, 1 billion people had access to the Internet. Today, 5.3 billion people are digitally connected, yet the divide persists across regions, gender, income, language, and age groups. Some 89 per cent of people in Europe are online, but only 21 per cent of women in low-income countries use the Internet. While digitally deliverable services now account for almost two thirds of global services trade, access is unaffordable in some parts of the world. The cost of a smartphone in South Asia and sub-Saharan Africa is more than 40 per cent of the average monthly income, and African users pay more than three times the global average for mobile data. Fewer than half of the world’s countries track digital skills, and the data that exist highlight the depth of digital learning gaps. Two decades after the World Summit on the Information Society, the digital divide is still a gulf.

Data divides are also growing. As data are collected and used in digital applications, they generate huge commercial and social value. While monthly global data traffic is forecast to grow by more than 400 per cent by 2026, activity is concentrated among a few global players. Many developing countries are at risk of becoming mere providers of raw data while having to pay for the services that their data help to produce…(More)”.