The Future of Community


Book by John Kraski and Justin Shenkarow: “… a groundbreaking new take on the seismic impact web3 is having—and will continue to have—on our technological and social landscapes. The authors discuss why web3 really is the ‘next big thing’ to shape our digital and offline futures and how it will transform the world.

You’ll discover a whole host of web3 applications poised to excite and disrupt industries around the world, from fan tokens that reshape how we think about interactions between artists and fans to self-sovereign identities on the blockchain that allow you to take full control over how your personal data is collected and used online.

You’ll also find:

  • Insightful explorations of technologies and techniques like tokenization, decentralized marketplaces, decentralized autonomous organizations, and more
  • Explanations of how web3 allows you to take greater ownership and control of your digital and offline assets
  • Discussions of why web3 increases transparency and accountability at every level of business, government, and social hierarchies…(More)”.

Computational social science is growing up: why puberty consists of embracing measurement validation, theory development, and open science practices


Paper by Timon Elmer: “Puberty is a phase in which individuals often test their own boundaries and those of the people around them, further defining their identity – and thus their uniqueness compared to other individuals. Similarly, as Computational Social Science (CSS) grows up, it must strike a balance between its own practices and those of neighboring disciplines to achieve scientific rigor and refine its identity. However, certain areas within CSS remain reluctant to adopt rigorous scientific practices from other fields, as can be observed in an overreliance on passively collected data (e.g., digital traces, wearables) without questioning the validity of such data. This paper argues that CSS should embrace the potential of combining both passive and active measurement practices to capitalize on the strengths of each approach, including objectivity and psychological quality. Additionally, the paper suggests that CSS would benefit from integrating practices and knowledge from other established disciplines, such as measurement validation, theoretical embedding, and open science practices. Based on this argument, the paper provides ten recommendations for CSS to mature as an interdisciplinary field of research…(More)”.

The case for adaptive and end-to-end policy management


Article by Pia Andrews: “Why should we reform how we do policy? Simple. Because the gap between policy design and delivery has become the biggest barrier to delivering good public services and policy outcomes and is a challenge most public servants experience daily, directly or indirectly.

This gap hasn’t always existed; policy design and delivery were separated as part of the New Public Management reforms of the ’90s. When you also consider the accelerating rate of change, the increasing cadence of emergencies, and the massive speed and scale of new technologies, you could argue that end-to-end policy reform is our most urgent problem to solve.

Policy teams globally have been exploring new design methods like human-centred design, test-driven iteration (agile), and multi-disciplinary teams that get policy end users in the room (e.g., NSW Policy Lab). There has also been an increased focus on improving policy evaluation across the world (e.g., the Australian Centre for Evaluation). In both cases, I’m delighted to see innovative approaches being normalised across the policy profession, but it has become obvious that improving design and/or evaluation is still far from sufficient to drive better (or more humane) policy outcomes in an ever-changing world. The problem is not only the current systemic inability to detect and respond to unintended consequences as they emerge, but also a lack of policy agility that perpetuates issues long after they are identified.

Below I outline four current challenges for policy management and a couple of potential solutions, as something of a discussion starter.

Problem 1) The separation of (and mutual incomprehension between) policy design, delivery and the public

The lack of multi-disciplinary policy design, combined with a set-and-forget approach to policy, combined with delivery teams being left to interpret policy instructions without support, combined with gaps and inconsistent interpretation between policy modelling systems and policy delivery systems, all combined with a lack of feedback loops for improving policy over time, has led to a series of black holes throughout the process. Tweaking the process as it currently stands will not fix the black holes. We need a more holistic model for policy design, delivery and management…(More)”.

Populist Leaders and the Economy


Paper by Manuel Funke, Moritz Schularick and Christoph Trebesch: “Populism at the country level is at an all-time high, with more than 25 percent of nations currently governed by populists. How do economies perform under populist leaders? We build a new long-run cross-country database to study the macroeconomic history of populism. We identify 51 populist presidents and prime ministers from 1900 to 2020 and show that the economic cost of populism is high. After 15 years, GDP per capita is 10 percent lower compared to a plausible nonpopulist counterfactual. Economic disintegration, decreasing macroeconomic stability, and the erosion of institutions typically go hand in hand with populist rule…(More)”.

Observer Theory


Article by Stephen Wolfram: “We call it perception. We call it measurement. We call it analysis. But in the end it’s about how we take the world as it is, and derive from it the impression of it that we have in our minds.

We might have thought that we could do science “purely objectively” without any reference to observers or their nature. But what we’ve discovered particularly dramatically in our Physics Project is that the nature of us as observers is critical even in determining the most fundamental laws we attribute to the universe.

But what ultimately does an observer—say like us—do? And how can we make a theoretical framework for it? Much as we have a general model for the process of computation—instantiated by something like a Turing machine—we’d like to have a general model for the process of observation: a general “observer theory”.

Central to what we think of as an observer is the notion that the observer will take the raw complexity of the world and extract from it some reduced representation suitable for a finite mind. There might be zillions of photons impinging on our eyes, but all we extract is the arrangement of objects in a visual scene. Or there might be zillions of gas molecules impinging on a piston, yet all we extract is the overall pressure of the gas.

In the end, we can think of it fundamentally as being about equivalencing. There are immense numbers of different individual configurations for the photons or the gas molecules—that are all treated as equivalent by an observer who’s just picking out the particular features needed for some reduced representation.
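
To make this concrete, the gas example above can be rendered as a small program. The sketch below is illustrative only (it is not from Wolfram's article, and all names are invented): many distinct microstates are collapsed by a coarse-graining observer function into a single reading.

```python
# A minimal illustrative sketch (not from Wolfram's article) of
# "equivalencing": immensely many distinct microstates of a gas are
# collapsed by the observer into one coarse reading. All names here
# are invented for illustration.
import random

def microstate(n_molecules=10_000):
    """One of vastly many possible configurations: per-molecule velocities."""
    return [random.gauss(0.0, 1.0) for _ in range(n_molecules)]

def observed_pressure(velocities, digits=1):
    """The observer's reduced representation: a single rounded number
    proportional to mean kinetic energy. The rounding is what treats
    different microstates as equivalent."""
    mean_energy = sum(v * v for v in velocities) / len(velocities)
    return round(mean_energy, digits)

# Five different microstates, yet the observer records one value:
readings = {observed_pressure(microstate()) for _ in range(5)}
print(readings)  # almost always a single reading, e.g. {1.0}
```

Here the rounding step plays the role of the observer, defining which microstates count as equivalent, while generating the microstates in the first place is the computational side of the duality described next.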

There’s in a sense a certain duality between computation and observation. In computation one’s generating new states of a system. In observation, one’s equivalencing together different states.

That equivalencing must in the end be implemented “underneath” by computation. But in observer theory what we want to do is just characterize the equivalencing that’s achieved. For us as observers it might in practice be all about how our senses work, what our biological or cultural nature is—or what technological devices or structures we’ve built. But what makes a coherent concept of observer theory possible is that there seem to be general, abstract characterizations that capture the essence of different kinds of observers…(More)”.

WikiCrow: Automating Synthesis of Human Scientific Knowledge


About: “As scientists, we stand on the shoulders of giants. Scientific progress requires curation and synthesis of prior knowledge and experimental results. However, the scientific literature is so expansive that synthesis, the comprehensive combination of ideas and results, is a bottleneck. The ability of large language models to comprehend and summarize natural language will transform science by automating the synthesis of scientific knowledge at scale. Yet current LLMs are limited by hallucinations, lack access to the most up-to-date information, and do not provide reliable references for statements.

Here, we present WikiCrow, an automated system that can synthesize cited Wikipedia-style summaries for technical topics from the scientific literature. WikiCrow is built on top of Future House’s internal LLM agent platform, PaperQA, which, in our testing, achieves state-of-the-art (SOTA) performance on a retrieval-focused version of PubMedQA and other benchmarks, including a new retrieval-first benchmark, LitQA, developed internally to evaluate systems retrieving full-text PDFs across the entire scientific literature.
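
PaperQA's internals are not spelled out in this excerpt, but the retrieval-then-cite pattern it exemplifies can be sketched in miniature. The toy below is a hypothetical illustration, not Future House's actual API; every name, the keyword scoring, and the corpus are assumptions.

```python
# A hypothetical sketch of the retrieval-then-cite pattern described
# above. Nothing here is Future House's actual PaperQA API; every
# name, the toy scoring function, and the corpus are illustrative
# assumptions.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # e.g. a PubMed ID or DOI
    text: str

def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    """Rank passages by naive keyword overlap with the query.
    A real system would use learned retrieval over full-text PDFs."""
    terms = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(terms & set(p.text.lower().split())),
                    reverse=True)
    return scored[:k]

def cited_summary(query: str, corpus: list[Passage]) -> str:
    """Emit one statement per retrieved passage, each with its source
    attached: the property that curbs unreferenced hallucinations."""
    lines = [f"{p.text} [{p.doc_id}]" for p in retrieve(query, corpus)]
    return "\n".join(lines)

corpus = [Passage("pmid:001", "Gene X encodes a membrane transporter."),
          Passage("pmid:002", "Gene X is upregulated in hypoxia."),
          Passage("pmid:003", "Gene Y regulates the cell cycle.")]
print(cited_summary("What does gene X do?", corpus))
```

The key property is that every emitted statement carries the identifier of the passage that supports it, which is what distinguishes cited synthesis from free-form generation.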

As a demonstration of the potential for AI to impact scientific practice, we use WikiCrow to generate draft articles for the 15,616 human protein-coding genes that currently lack Wikipedia articles or have only article stubs. WikiCrow creates articles in 8 minutes, is much more consistent than human editors at citing its sources, and makes incorrect inferences or statements about 9% of the time, a number that we expect to improve as we mature our systems. WikiCrow will be a foundational tool for the AI Scientists we plan to build in the coming years, and will help us to democratize access to scientific research…(More)”.

Governing the economics of the common good


Paper by Mariana Mazzucato: “To meet today’s grand challenges, economics requires an understanding of how common objectives may be collaboratively set and met. Tied to the assumption that the state can, at best, fix market failures and is always at risk of ‘capture’, economic theory has been unable to offer such a framework. To move beyond such limiting assumptions, the paper provides a renewed conception of the common good, going beyond the classic public good and commons approach, as a way of steering and shaping (rather than just fixing) the economy towards collective goals…(More)”.

When Science Meets Power


Book by Geoff Mulgan: “Science and politics have collaborated throughout human history, and science is repeatedly invoked today in political debates, from pandemic management to climate change. But the relationship between the two is muddled and muddied.

Leading policy analyst Geoff Mulgan here calls attention to the growing frictions caused by the expanding authority of science, which sometimes helps politics but often challenges it.

He dissects the complex history of states’ use of science for conquest, glory and economic growth and shows the challenges of governing risk – from nuclear weapons to genetic modification, artificial intelligence to synthetic biology. He shows why the governance of science has become one of the biggest challenges of the twenty-first century, ever more prominent in daily politics and policy.

Whereas science is ordered around what we know and what is, politics engages what we feel and what matters. How can we reconcile the two, so that crucial decisions are both well informed and legitimate?

The book proposes new ways to organize democracy and government, both within nations and at a global scale, to better shape science and technology so that we can reap more of the benefits and fewer of the harms…(More)”.

Remote collaboration fuses fewer breakthrough ideas


Paper by Yiling Lin, Carl Benedikt Frey & Lingfei Wu: “Theories of innovation emphasize the role of social networks and teams as facilitators of breakthrough discoveries. Around the world, scientists and inventors are more plentiful and interconnected today than ever before. However, although there are more people making discoveries, and more ideas that can be reconfigured in new ways, research suggests that new ideas are getting harder to find—contradicting recombinant growth theory. Here we shed light on this apparent puzzle. Analysing 20 million research articles and 4 million patent applications from across the globe over the past half-century, we begin by documenting the rise of remote collaboration across cities, underlining the growing interconnectedness of scientists and inventors globally. We further show that across all fields, periods and team sizes, researchers in these remote teams are consistently less likely to make breakthrough discoveries relative to their on-site counterparts. Creating a dataset that allows us to explore the division of labour in knowledge production within teams and across space, we find that among distributed team members, collaboration centres on late-stage, technical tasks involving more codified knowledge. Yet they are less likely to join forces in conceptual tasks—such as conceiving new ideas and designing research—when knowledge is tacit. We conclude that despite striking improvements in digital technology in recent years, remote teams are less likely to integrate the knowledge of their members to produce new, disruptive ideas…(More)”.

GovTech in Fragile and Conflict Situations: Trends, Challenges, and Opportunities


Report by the World Bank: “This report takes stock of the development of GovTech solutions in Fragile and Conflict-Affected Situations (FCS), be they characterized by low institutional capacity and/or by active conflict, and provides insights on challenges and opportunities for implementing GovTech reforms in such contexts. It is aimed at practitioners and policy makers working in FCS but will also be useful for practitioners working in Fragility, Conflict, and Violence (FCV) contexts, at-risk countries, or low-income countries, as similar challenges and opportunities can be present…(More)”.