Is Software Eating the World?


Paper by Sangmin Aum & Yongseok Shin: “When explaining the declining labor income share in advanced economies, the macro literature finds that the elasticity of substitution between capital and labor is greater than one. However, the vast majority of micro-level estimates shows that capital and labor are complements (elasticity less than one). Using firm- and establishment-level data from Korea, we divide capital into equipment and software, as they may interact with labor in different ways. Our estimation shows that equipment and labor are complements (elasticity 0.6), consistent with other micro-level estimates, but software and labor are substitutes (1.6), a novel finding that helps reconcile the macro vs. micro-literature elasticity discord. As the quality of software improves, labor shares fall within firms because of factor substitution and endogenously rising markups. In addition, production reallocates toward firms that use software more intensively, as they become effectively more productive. Because in the data these firms have higher markups and lower labor shares, the reallocation further raises the aggregate markup and reduces the aggregate labor share. The rise of software accounts for two-thirds of the labor share decline in Korea between 1990 and 2018. The factor substitution and the markup channels are equally important. On the other hand, the falling equipment price plays a minor role, because the factor substitution and the markup channels offset each other…(More)”.
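For readers outside the macro literature, a minimal sketch of the quantity being estimated may help: a textbook two-factor CES production function (the paper’s own specification, which nests equipment, software, and labor, is richer than this).

\[
  Y = \left[\alpha\, X^{\frac{\sigma-1}{\sigma}} + (1-\alpha)\, L^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},
  \qquad
  \sigma = \frac{d\ln(X/L)}{d\ln(w_L/w_X)}
\]

Here X is a capital input (equipment or software), L is labor, w_L and w_X are the factor prices, and σ is the elasticity of substitution. When σ < 1 the two factors are complements (the paper’s equipment–labor estimate of 0.6); when σ > 1 they are substitutes (the software–labor estimate of 1.6). With σ > 1, a fall in the effective price of X induces enough substitution away from labor that labor’s share of income declines, which is the within-firm factor-substitution channel the abstract describes.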

An Anatomy of Algorithm Aversion


Paper by Cass R. Sunstein and Jared Gaffe: “People are said to show “algorithm aversion” when (1) they prefer human forecasters or decision-makers to algorithms even though (2) algorithms generally outperform people (in forecasting accuracy and/or optimal decision-making in furtherance of a specified goal). Algorithm aversion also has “softer” forms, as when people prefer human forecasters or decision-makers to algorithms in the abstract, without having clear evidence about comparative performance. Algorithm aversion is a product of diverse mechanisms, including (1) a desire for agency; (2) a negative moral or emotional reaction to judgment by algorithms; (3) a belief that certain human experts have unique knowledge, unlikely to be held or used by algorithms; (4) ignorance about why algorithms perform well; and (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error. An understanding of the various mechanisms provides some clues about how to overcome algorithm aversion, and also about its boundary conditions…(More)”.

The Behavioral Scientists Working Toward a More Peaceful World


Interview by Heather Graci: “…Nation-level data doesn’t help us understand community-level conflict. Without understanding community-level conflict, it becomes much harder to design policies to prevent it.

Cikara: “So much of the data that we have is at the level of the nation, when our effects are all happening at very local levels. You see these reports that say, “In Germany, 14 percent of the population is immigrants.” It doesn’t matter at the national level, because they’re not distributed evenly across the geography. That means that some communities are going to be at greater risk for conflict than others. But that sort of local variation and sensitivity to it, at least heretofore, has really been missing from the conversation on the research side. Even when you’re in the same place, in the same country within the same state, the same canton, there can still be a ton of variation from neighborhood to neighborhood. 

“The other thing that we know matters a lot is not just the diversity of these neighborhoods but the segregation of them. It turns out that these kinds of prejudices and violence are less likely to break out in those places where it’s both diverse and people are interdigitated with how they live. So it’s not just the numbers, it’s also the spatial organization. 

“For example, in Singapore, because so much of the real estate is state-owned, they make it so that people who are coming from different countries can’t cluster together because they assign them to live separate from one another in order to prevent these sorts of enclaves. All these structural and meta-level organizational features have really, really important inputs for intergroup dynamics and psychology.”…(More)”.

Why policy failure is a prerequisite for innovation in the public sector


Blog by Philipp Trein and Thenia Vagionaki: “In our article entitled “Why policy failure is a prerequisite for innovation in the public sector,” we explore the relationship between policy failure and innovation within public governance. Drawing inspiration from the “Innovator’s Dilemma”—a theory from the management literature—we argue that the very nature of policymaking, characterized by the myopia of voters, blame avoidance by decision-makers, and the complexity (ill-structuredness) of societal challenges, creates an inherent tendency to react with innovation only after existing policies have failed.

Our analysis implies that we need to be more critical about what the policy process can achieve in terms of public sector innovation. In line with the “Innovator’s Dilemma,” cognitive limitations tend to lead decision-makers to misperceive problems and assess risks inaccurately. This problem implies that true innovations (non-trivial policy changes) are unlikely to happen before an existing policy has failed visibly. However, our perspective is not meant to paint a gloomy picture of public policymaking, but rather to offer a more realistic interpretation of what public sector innovation can achieve. As a consequence, learning from experts in the policy process should be expected to correct failures in public sector problem-solving during the political process, rather than to raise expectations beyond what is possible.

The potential impact of our findings is profound. For practitioners and policymakers, this insight offers a new lens through which to evaluate the failure and success of public policies. Our work advocates a paradigm shift in how we perceive, manage, and learn from policy failures in the public sector, and in the expectations we hold for learning and the use of evidence in policymaking. By embracing the limitations of innovation in public policy, we can better manage expectations and structure the narrative regarding the capacity of public policy to address collective problems…(More)”.


The Character of Consent


Book by Meg Leta Jones about The History of Cookies and the Future of Technology Policy: “Consent pop-ups continually ask us to download cookies to our computers, but is this all-too-familiar form of privacy protection effective? No, Meg Leta Jones explains in The Character of Consent, rather than promote functionality, privacy, and decentralization, cookie technology has instead made the internet invasive, limited, and clunky. Good thing, then, that the cookie is set for retirement in 2024. In this eye-opening book, Jones tells the little-known story of this broken consent arrangement, tracing it back to the major transnational conflicts around digital consent over the last twenty-five years. What she finds is that the policy controversy is not, in fact, an information crisis—it’s an identity crisis.

Instead of asking how people consent, Jones asks who exactly is consenting and to what. Packed into those cookie pop-ups, she explains, are three distinct areas of law with three different characters who can consent. Within (mainly European) data protection law, the data subject consents. Within communication privacy law, the user consents. And within consumer protection law, the privacy consumer consents. These areas of law have very different histories, motivations, institutional structures, expertise, and strategies, so consent—and the characters who can consent—plays a unique role in those areas of law….(More)”.

Now we are all measuring impact — but is anything changing?


Article by Griffith Centre for Systems Innovation: “…Increasingly, the landscape of Impact Measurement is crowded and dynamic, containing such a diversity of frameworks and approaches that we can end up feeling like we’re looking at alphabet soup.

As we’ve traversed this landscape we’ve tried to make sense of it in various ways, and have begun to explore a matrix to represent the constellation of frameworks, approaches and models we’ve encountered in the process. As shown below, the matrix has two axes:

The horizontal axis provides us with a “time” delineation, dividing the left and right sides between retrospective (ex post) and prospective (ex ante) approaches to measuring impact.

More specifically, the retrospective quadrants include approaches/frameworks/models that ask about events in the past (What impact did we have?), while the prospective quadrants include approaches that ask about the possible future (What impact will we have?).

The vertical axis provides us with a “purpose” delineation, dividing the upper and lower parts between Impact Measurement + Management and Evaluation.

The top-level quadrants, Impact Measurement + Management, focus on methods that count quantifiable data (e.g. time, dollars, widgets). These frameworks tend to measure outputs from activities/interventions. They tend to ask what happened, or what could happen, and rely significantly on quantitative data.

The bottom-level Evaluation quadrants include a range of approaches that look at a broader set of questions beyond counting. They include questions like: What changed, and why? What were, or might be, the interrelationships between changes? They tend to draw on a mixture of quantitative and qualitative data to create a more cohesive understanding of changes that occurred, are occurring, or could occur.
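Put together, the two axes yield four quadrants, restated here in shorthand (a condensed paraphrase of the descriptions above, not the article’s own table):

                                 Retrospective (ex post)      Prospective (ex ante)
  Impact Measurement +           What impact did we have?     What impact will we have?
  Management (quantitative)      (counting outputs)           (projecting outputs)
  Evaluation (quantitative       What changed, and why?       What might change, and how
  and qualitative)                                            might changes interrelate?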

A word of warning: As with all frameworks, this matrix is a “construct” — a way for us to engage in sense-making and to critically discuss how impact measurement is being undertaken in our current context. We are sharing this as a starting point for a broader discussion. We welcome feedback, reflections, and challenges around how we have represented different approaches — we are not seeking a ‘true representation’ but rather a starting point for dialogue about how all the methods that now abound are connected, entangled and constructed…(More)”.

Mapping Behavioral Public Policy


Book by Paolo Belardinelli: “This book provides a new perspective on behavioral public policy. The field of behavioral public policy has been dominated by the concept of ‘nudging’ over the last decade. As this book demonstrates, however, ‘nudging’ is one of many behavioral techniques that practitioners and policymakers can utilize in order to achieve their goals. The book discusses the advantages and disadvantages of these alternative techniques, and demonstrates empirically how the impact of ‘nudging’ and ‘non-nudging’ interventions often depends on varying political contexts and the degree of trust that citizens have toward policymakers. In doing so, it addresses the important question of how citizens understand and approve of the use of behavioral techniques by governments. The book will appeal to all those interested in public management, public policy, behavioral psychology, and ‘nudging’…(More)”.

Invisible Rulers: The People Who Turn Lies into Reality


Book by Renée DiResta: “…investigation into the way power and influence have been profoundly transformed reveals how a virtual rumor mill of niche propagandists increasingly shapes public opinion. While propagandists position themselves as trustworthy Davids, their reach, influence, and economics make them classic Goliaths—invisible rulers who create bespoke realities to revolutionize politics, culture, and society. Their work is driven by a simple maxim: if you make it trend, you make it true.
 
By revealing the machinery and dynamics of the interplay between influencers, algorithms, and online crowds, DiResta vividly illustrates the way propagandists deliberately undermine belief in the fundamental legitimacy of institutions that make society work. This alternate system for shaping public opinion, unexamined until now, is rewriting the relationship between the people and their government in profound ways. It has become a force so shockingly effective that its destructive power seems limitless. Scientific proof is powerless in front of it. Democratic validity is bulldozed by it. Leaders are humiliated by it. But they need not be.
 
With its deep insight into the power of propagandists to drive online crowds into battle—while bearing no responsibility for the consequences—Invisible Rulers not only predicts those consequences but offers ways for leaders to rapidly adapt and fight back…(More)”.

Superconvergence


Book by Jamie Metzl: “…explores how artificial intelligence, genome sequencing, gene editing, and other revolutionary technologies are transforming our lives, world, and future. These accelerating and increasingly interconnected technologies have the potential to improve our health, feed billions of people, supercharge our economies, store essential information for millions of years, and save our planet, but they can also—if we are not careful—do immeasurable harm.

The challenge we face is that while our ability to engineer the world around us is advancing exponentially, our processes for understanding the scope, scale, and implications of these changes, and for managing our godlike powers wisely, are only inching forward glacially…(More)”.

The Deliberative Turn in Democratic Theory


Book by Antonino Palumbo: “Thirty years of developments in deliberative democracy (DD) have consolidated this subfield of democratic theory. The acquired disciplinary prestige has made theorists and practitioners very confident about the ability of DD to address the legitimacy crisis experienced by liberal democracies at present, at both theoretical and practical levels. The book advances a critical analysis of these developments that casts doubt on those certainties — current theoretical debates are reproposing old methodological divisions, and are afraid to move beyond the minimalist model of democracy advocated by liberal thinkers; democratic experimentation at the micro-level seems to have no impact at the macro-level, and remains a set of isolated experiences. The book indicates that those defects are mainly due to the liberal minimalist frame of reference within which reflection in democratic theory and practice takes place. Consequently, it suggests moving beyond liberal understandings of democracy as a game in need of external rules, and adopting instead a vision of democracy as a self-correcting metagame…(More)”.