Commerce Secretary’s Comments Raise Fears of Interference in Federal Data


Article by Ben Casselman and Colby Smith: “Comments from a member of President Trump’s cabinet over the weekend have renewed concerns that the new administration could seek to interfere with federal statistics — especially if they start to show that the economy is slipping into a recession.

In an interview on Fox News on Sunday, Howard Lutnick, the commerce secretary, suggested that he planned to change the way the government reports data on gross domestic product in order to remove the impact of government spending.

“You know that governments historically have messed with G.D.P.,” he said. “They count government spending as part of G.D.P. So I’m going to separate those two and make it transparent.”

It wasn’t immediately clear what Mr. Lutnick meant. The basic definition of gross domestic product is widely accepted internationally and has been unchanged for decades. It tallies consumer spending, private-sector investment, net exports, and government investment and spending to arrive at a broad measure of all goods and services produced in a country.

The Bureau of Economic Analysis, which is part of Mr. Lutnick’s department, already produces a detailed breakdown of G.D.P. into its component parts. Many economists focus on a measure — known as “final sales to private domestic purchasers” — that excludes government spending and is often seen as a better indicator of underlying demand in the economy. That measure has generally shown stronger growth in recent quarters than overall G.D.P. figures.

In recent weeks, however, there have been mounting signs elsewhere that the economy could be losing momentum. Consumer spending fell unexpectedly in January, applications for unemployment insurance have been creeping upward, and measures of housing construction and home sales have turned down. A forecasting model from the Federal Reserve Bank of Atlanta predicts that G.D.P. could contract sharply in the first quarter of the year, although most private forecasters still expect modest growth.

Cuts to federal spending and the federal work force could act as a further drag on economic growth in coming months. Removing federal spending from G.D.P. calculations, therefore, could obscure the impact of the administration’s policies…(More)”.
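To make the accounting at issue concrete, here is a minimal sketch in Python of the expenditure identity the article describes, alongside the “final sales to private domestic purchasers” measure that strips government out. All figures are hypothetical and chosen for illustration only; the BEA’s actual accounts are far more detailed.

```python
# GDP expenditure identity: GDP = C + I + G + NX
# All figures are hypothetical (trillions of dollars), for illustration only.
consumption = 15.7      # C: consumer spending
investment = 4.1        # I: private-sector investment (incl. inventory change)
government = 3.8        # G: government investment and spending
net_exports = -0.9      # NX: exports minus imports

gdp = consumption + investment + government + net_exports  # 22.7

# "Final sales to private domestic purchasers" excludes government, net
# exports, and inventory swings, isolating underlying private demand.
inventory_change = 0.1
private_demand = consumption + (investment - inventory_change)  # 19.7

print(f"GDP: {gdp:.1f}")
print(f"Final sales to private domestic purchasers: {private_demand:.1f}")
```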

Citizen participation and technology: lessons from the fields of deliberative democracy and science and technology studies


Paper by Julian “Iñaki” Goñi: “Calls for democratising technology are pervasive in current technological discourse. Indeed, participating publics have been mobilised as a core normative aspiration in Science and Technology Studies (STS), driven by a critical examination of “expertise”. In a sense, democratic deliberation became the answer to the question of responsible technological governance and science and technology communication. On the other hand, calls for technifying democracy are ever more pervasive in deliberative democracy’s discourse. Many new digital tools (“civic technologies”) are shaping democratic practice while navigating a complex political economy. Moreover, Natural Language Processing and AI are providing novel alternatives for systematising large-scale participation, automated moderation and setting up participation. In a sense, emerging digital technologies became the answer to the question of how to augment collective intelligence and reconnect deliberation to mass politics. In this paper, I explore the mutual shaping of (deliberative) democracy and technology (studies), highlighting that without careful consideration, both disciplines risk being reduced to superficial symbols in discourses inclined towards quick solutionism. This analysis highlights the current disconnect between deliberative democracy and STS, exploring the potential benefits of fostering closer links between the two fields. Drawing on STS insights, the paper argues that deliberative democracy could be enriched by a deeper engagement with the material aspects of democratic processes, the evolving nature of civic technologies through use, and a more critical approach to expertise. It also suggests that STS scholars would benefit from engaging more closely with democratic theory, which could enhance their analysis of public participation, bridge the gap between descriptive richness and normative relevance, and offer a more nuanced understanding of the inner functioning of political systems and politics in contemporary democracies…(More)”.

Future of AI Research


Report by the Association for the Advancement of Artificial Intelligence: “As AI capabilities evolve rapidly, AI research is also undergoing a fast and significant transformation along many dimensions, including its topics, its methods, the research community, and the working environment. Topics such as AI reasoning and agentic AI have been studied for decades but now have an expanded scope in light of current AI capabilities and limitations. AI ethics and safety, AI for social good, and sustainable AI have become central themes in all major AI conferences. Moreover, research on AI algorithms and software systems is becoming increasingly tied to substantial amounts of dedicated AI hardware, notably GPUs, which leads to AI architecture co-creation in a way that is more prominent now than over the last three decades. Related to this shift, more and more AI researchers work in corporate environments, where the necessary hardware and other resources are more easily available than in academia, raising questions about the role of academic AI research, student retention, and faculty recruiting.

The pervasive use of AI in our daily lives and its impact on people, society, and the environment makes AI a socio-technical field of study, highlighting the need for AI researchers to work with experts from other disciplines, such as psychologists, sociologists, philosophers, and economists. The growing focus on emergent AI behaviors, rather than on designed and validated properties of AI systems, renders principled empirical evaluation more important than ever. Hence the need for well-designed benchmarks, test methodologies, and sound processes to infer conclusions from the results of computational experiments.

The exponentially increasing quantity of AI research publications and the speed of AI innovation are testing the resilience of the peer-review system, with the immediate release of papers without peer-review evaluation having become widely accepted across many areas of AI research. Legacy and social media increasingly cover AI research advancements, often with contradictory statements that confuse readers and blur the line between the reality and the perception of AI capabilities. All this is happening in a geopolitical environment in which companies and countries compete fiercely and globally to lead the AI race. This rivalry may affect access to research results and infrastructure as well as global governance efforts, underscoring the need for international cooperation in AI research and innovation.

In this overwhelming, multi-dimensional, and highly dynamic scenario, it is important to be able to clearly identify the trajectory of AI research in a structured way. Such an effort can define the current trends and the research challenges still ahead of us to make AI more capable and reliable, so that we can safely use it in mundane but also, most importantly, high-stakes scenarios.

This study aims to do so, addressing 17 topics related to AI research that cover most of the transformations mentioned above. Each chapter of the study is devoted to one of these topics, sketching its history, current trends, and open challenges…(More)”.

Legitimacy: Working hypotheses


Report by TIAL: “Today more than ever, legitimacy is a vital resource for institutions seeking to lead and sustain impactful change. Yet, it can be elusive.

What does it truly mean for an institution to be legitimate? This publication delves into legitimacy as both a practical asset and a dynamic process, offering institutional entrepreneurs the tools to understand, build, and sustain it over time.

Legitimacy is not a static quality, nor is it purely theoretical. Instead, it’s grounded in the beliefs of those who interact with or are governed by an institution. These beliefs shape whether people view an institution’s authority as rightful and worth supporting. Drawing from social science research and real-world insights, this publication provides a framework to help institutional entrepreneurs address one of the most important challenges of institutional design: ensuring their legitimacy is sufficient to achieve their goals.

The paper emphasizes that legitimacy is relational and contextual. Institutions gain it through three primary sources: outcomes (delivering results), fairness (ensuring just processes), and correct procedures (following accepted norms). However, the need for legitimacy varies depending on the institution’s size, scope, and mission. For example, a body requiring elite approval may need less legitimacy than one relying on mass public trust.

Legitimacy is also dynamic—it ebbs and flows in response to external factors like competition, crises, and shifting societal narratives. Institutional entrepreneurs must anticipate these changes and actively manage their strategies for maintaining legitimacy. This publication highlights actionable steps for doing so, from framing mandates strategically to fostering public trust through transparency and communication.

By treating legitimacy as a resource that evolves over time, institutional entrepreneurs can ensure their institutions remain relevant, trusted, and effective in addressing pressing societal challenges.

Key takeaways

  • Legitimacy is the belief by an audience that an institution’s authority is rightful.
  • Institutions build legitimacy through outcomes, fairness, and correct procedures.
  • The need for legitimacy depends on an institution’s scope and mission.
  • Legitimacy is dynamic and shaped by external factors like crises and competition.
  • A portfolio approach to legitimacy—balancing outcomes, fairness, and procedure—is more resilient.
  • Institutional entrepreneurs must actively manage perceptions and adapt to changing contexts.
  • This publication offers practical frameworks to help institutional entrepreneurs build and sustain legitimacy…(More)”.

AI could supercharge human collective intelligence in everything from disaster relief to medical research


Article by Hao Cui and Taha Yasseri: “Imagine a large city recovering from a devastating hurricane. Roads are flooded, the power is down, and local authorities are overwhelmed. Emergency responders are doing their best, but the chaos is massive.

AI-controlled drones survey the damage from above, while intelligent systems process satellite images and data from sensors on the ground and in the air to identify which neighbourhoods are most vulnerable.

Meanwhile, AI-equipped robots are deployed to deliver food, water and medical supplies into areas that human responders can’t reach. Emergency teams, guided and coordinated by AI and the insights it produces, are able to prioritise their efforts, sending rescue squads where they’re needed most.

This is no longer the realm of science fiction. In a recent paper published in the journal Patterns, we argue that it’s an emerging and inevitable reality.

Collective intelligence is the shared intelligence of a group or groups of people working together. Different groups of people with diverse skills, such as firefighters and drone operators, work together to generate better ideas and solutions. AI can enhance this human collective intelligence, and transform how we approach large-scale crises. It’s a form of what’s called hybrid collective intelligence.

Instead of simply relying on human intuition or traditional tools, experts can use AI to process vast amounts of data, identify patterns and make predictions. By enhancing human decision-making, AI systems offer faster and more accurate insights – whether in medical research, disaster response, or environmental protection.

AI can do this by, for example, processing large datasets and uncovering insights that would take humans much longer to identify. AI can also get involved in physical tasks. In manufacturing, AI-powered robots can automate assembly lines, helping improve efficiency and reduce downtime.

Equally crucial is information exchange, where AI enhances the flow of information, helping human teams coordinate more effectively and make data-driven decisions faster. Finally, AI can act as a social catalyst to facilitate more effective collaboration within human teams, or even help build hybrid teams of humans and machines working alongside one another…(More)”.

Redesigning Public Organizations: From “what” to “how”


Essay by the Transition Collective: “Government organizations and their leaders are in a pinch. They are caught between pressures from politicians, citizens and increasingly complex external environments on the one hand — and from civil servants calling for new ways of working, thriving and belonging on the other hand. They have to enable meaningful, joined-up and efficient services for people, leveraging digital and physical resources, while building an attractive organizational culture. Indeed, the challenge is to build systems as human as the people they are intended to serve.

While this creates massive challenges for public sector organizations, this is also an opportunity to reimagine our institutions to meet the challenges of today and the future. To succeed, we must not only think about other models of organization — we also have to think of other ways of changing them.

Traditionally, we think of the organization as something static, a goal we arrive at or a fixed model we decide upon. If asked to describe their organization, most civil servants will point to an organigram — and more often than not it will consist of a number of boxes and lines, ordered in a hierarchy.

But in today’s world of complex challenges, accelerated frequency of change and dynamic interplay between the public sector and its surroundings, such a fixed model is less and less fit for the purposes it must fulfill. Not only does it not allow the collective intelligence and creativity of the organization’s members to be fully unleashed, it also does not allow for the speed and adaptability required by today’s turbulent environment. It does not allow for truly joined-up, meaningful human services.

Unfreezing the organization

Rather than thinking mainly about models and forms, we should think of organizational design as an act or a series of actions. In other words, we should think about the organization not just as a what but also as a how: less as a set of boxes describing a power hierarchy, and more as a set of living, organic roles and relationships. We need to thaw our organizations out of their frozen state — and keep them warmer and more fluid.

In this piece, we suggest that many efforts to reimagine public sector organizations have failed because the challenge of transforming an organization has been underestimated. We draw on concrete experiences from working with international and Danish public sector institutions, in particular in health and welfare services.

We propose a set of four approaches which, taken together, can support the work of redesigning organizations to be more ambitious, free, human, creative and self-managing — and thus better suited to meet the ever more complex challenges they are faced with…(More)”.

Bayes is not a phase


Blog by dynomight: “Because everyone uses Bayesian reasoning all the time, even if they don’t think of it that way. Arguably, we’re born Bayesian and do it instinctively. It’s normal and natural and—I daresay—almost boring. “Bayesian reasoning” is just a slight formalization of everyday thought.

It’s not a trend. It’s forever. But it’s forever like arithmetic is forever: Strange to be obsessed with it, but really strange to make fun of someone for using it.

Here, I’ll explain what Bayesian reasoning is, why it’s so fundamental, why people argue about it, and why much of that controversy is ultimately a boring semantic debate of no interest to an enlightened person like yourself. Then, for the haters, I’ll give some actually good reasons to be skeptical about how useful it is in practice.

I won’t use any equations. That’s not because I don’t think you can take it, but because Bayesian reasoning isn’t math. It’s a concept. The typical explanations use lots of math and kind of gesture around the concept, but never seem to get to the core of it, which I think leads people to miss the forest for the trees…(More)”.
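The post itself deliberately avoids equations, but for readers who want to see the update rule in action, here is a minimal sketch in Python of the classic diagnostic-test example. All numbers are hypothetical and not drawn from the post.

```python
# Bayes' rule on a classic example: updating belief after a positive test.
# All numbers are hypothetical.
prior = 0.01           # P(condition): belief before seeing any evidence
sensitivity = 0.95     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

# Law of total probability: overall chance of seeing a positive result.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' rule: posterior = likelihood * prior / evidence.
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive) = {posterior:.3f}")  # ~0.161
```

The surprise here (a seemingly accurate test yielding only a roughly 16 percent posterior, because the low prior still dominates) is exactly the kind of everyday belief-updating the post gestures at.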

Emerging Practices in Participatory AI Design in Public Sector Innovation


Paper by Devansh Saxena, et al: “Local and federal agencies are rapidly adopting AI systems to augment or automate critical decisions, efficiently use resources, and improve public service delivery. AI systems are being used to support tasks associated with urban planning, security, surveillance, energy and critical infrastructure, and support decisions that directly affect citizens and their ability to access essential services. Local governments act as the governance tier closest to citizens and must play a critical role in upholding democratic values and building community trust, especially as it relates to smart city initiatives that seek to transform public services through the adoption of AI. Community-centered and participatory approaches have been central for ensuring the appropriate adoption of technology; however, AI innovation introduces new challenges in this context because participatory AI design methods require more robust formulation and face higher standards for implementation in the public sector compared to the private sector. This requires us to reassess traditional methods used in this space as well as develop new resources and methods. This workshop will explore emerging practices in participatory algorithm design – or the use of public participation and community engagement – in the scoping, design, adoption, and implementation of public sector algorithms…(More)”.

Data equity and official statistics in the age of private sector data proliferation


Paper by Pietro Gennari: “Over the last few years, the private sector has become a primary generator of data due to widespread digitisation of the economy and society, the use of social media platforms, and advancements of technologies like the Internet of Things and AI. Unlike traditional sources, these new data streams often offer real-time information and unique insights into people’s behaviour, social dynamics, and economic trends. However, the proprietary nature of most private sector data presents challenges for public access, transparency, and governance that have led to fragmented, often conflicting, data governance arrangements worldwide. This lack of coherence can exacerbate inequalities, limit data access, and restrict data’s utility as a global asset.

Within this context, data equity has emerged as one of the key principles underpinning any proposal for a new data governance framework. The term “data equity” refers to the fair and inclusive access, use, and distribution of data so that it benefits all sections of society, regardless of socioeconomic status, race, or geographic location. It involves making sure that the collection, processing, and use of data do not disproportionately benefit or harm any particular group, and it seeks to address disparities in data access and quality that can perpetuate social and economic inequalities. This is important because data systems significantly influence access to resources and opportunities in society. In this sense, data equity aims to correct imbalances that have historically affected various groups and to ensure that decision-making based on data does not perpetuate these inequities…(More)”.

When forecasting and foresight meet data and innovation: toward a taxonomy of anticipatory methods for migration policy


Paper by Sara Marcucci, Stefaan Verhulst and María Esther Cervantes: “The various global refugee and migration events of the last few years underscore the need for advancing anticipatory strategies in migration policy. The struggle to manage large inflows (or outflows) highlights the demand for proactive measures based on a sense of the future. Anticipatory methods, ranging from predictive models to foresight techniques, emerge as valuable tools for policymakers. These methods, now bolstered by advancements in technology and leveraging nontraditional data sources, can offer a pathway to develop more precise, responsive, and forward-thinking policies.

This paper seeks to map out the rapidly evolving domain of anticipatory methods in the realm of migration policy, capturing the trend toward integrating quantitative and qualitative methodologies and harnessing novel tools and data. It introduces a new taxonomy designed to organize these methods into three core categories: Experience-based, Exploration-based, and Expertise-based. This classification aims to guide policymakers in selecting the most suitable methods for specific contexts or questions, thereby enhancing migration policies…(More)”.