Commerce Secretary’s Comments Raise Fears of Interference in Federal Data


Article by Ben Casselman and Colby Smith: “Comments from a member of President Trump’s cabinet over the weekend have renewed concerns that the new administration could seek to interfere with federal statistics — especially if they start to show that the economy is slipping into a recession.

In an interview on Fox News on Sunday, Howard Lutnick, the commerce secretary, suggested that he planned to change the way the government reports data on gross domestic product in order to remove the impact of government spending.

“You know that governments historically have messed with G.D.P.,” he said. “They count government spending as part of G.D.P. So I’m going to separate those two and make it transparent.”

It wasn’t immediately clear what Mr. Lutnick meant. The basic definition of gross domestic product is widely accepted internationally and has been unchanged for decades. It tallies consumer spending, private-sector investment, net exports, and government investment and spending to arrive at a broad measure of all goods and services produced in a country.

The Bureau of Economic Analysis, which is part of Mr. Lutnick’s department, already produces a detailed breakdown of G.D.P. into its component parts. Many economists focus on a measure — known as “final sales to private domestic purchasers” — that excludes government spending and is often seen as a better indicator of underlying demand in the economy. That measure has generally shown stronger growth in recent quarters than overall G.D.P. figures.
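
The accounting behind both measures is simple enough to show directly. Below is a minimal sketch of the expenditure identity and the private-demand measure, using illustrative placeholder figures rather than actual BEA data:

```python
# GDP expenditure identity: Y = C + I + G + NX.
# All figures are illustrative placeholders, not BEA data.

consumption = 15.7      # C: consumer spending (trillions USD)
investment = 4.1        # I: private-sector investment
government = 4.8        # G: government spending and investment
net_exports = -0.9      # NX: exports minus imports

gdp = consumption + investment + government + net_exports

# "Final sales to private domestic purchasers" strips out government,
# net exports, and inventory swings to isolate underlying private demand.
inventory_change = 0.1  # part of investment; also illustrative
final_sales_private = consumption + (investment - inventory_change)

print(f"GDP: {gdp:.1f} trillion")
print(f"Final sales to private domestic purchasers: {final_sales_private:.1f} trillion")
```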

In recent weeks, however, there have been mounting signs elsewhere that the economy could be losing momentum. Consumer spending fell unexpectedly in January, applications for unemployment insurance have been creeping upward, and measures of housing construction and home sales have turned down. A forecasting model from the Federal Reserve Bank of Atlanta predicts that G.D.P. could contract sharply in the first quarter of the year, although most private forecasters still expect modest growth.

Cuts to federal spending and the federal work force could act as a further drag on economic growth in coming months. Removing federal spending from G.D.P. calculations, therefore, could obscure the impact of the administration’s policies…(More)”.

Citizen participation and technology: lessons from the fields of deliberative democracy and science and technology studies


Paper by Julian “Iñaki” Goñi: “Calls for democratising technology are pervasive in current technological discourse. Indeed, participating publics have been mobilised as a core normative aspiration in Science and Technology Studies (STS), driven by a critical examination of “expertise”. In a sense, democratic deliberation became the answer to the question of responsible technological governance, and science and technology communication. On the other hand, calls for technifying democracy are ever more pervasive in deliberative democracy’s discourse. Many new digital tools (“civic technologies”) are shaping democratic practice while navigating a complex political economy. Moreover, Natural Language Processing and AI are providing novel alternatives for systematising large-scale participation, automated moderation and setting up participation. In a sense, emerging digital technologies became the answer to the question of how to augment collective intelligence and reconnect deliberation to mass politics. In this paper, I explore the mutual shaping of (deliberative) democracy and technology (studies), highlighting that without careful consideration, both disciplines risk being reduced to superficial symbols in discourses inclined towards quick solutionism. This analysis highlights the current disconnect between Deliberative Democracy and STS, exploring the potential benefits of fostering closer links between the two fields. Drawing on STS insights, the paper argues that deliberative democracy could be enriched by a deeper engagement with the material aspects of democratic processes, the evolving nature of civic technologies through use, and a more critical approach to expertise. It also suggests that STS scholars would benefit from engaging more closely with democratic theory, which could enhance their analysis of public participation, bridge the gap between descriptive richness and normative relevance, and offer a more nuanced understanding of the inner functioning of political systems and politics in contemporary democracies…(More)”.

Future of AI Research


Report by the Association for the Advancement of Artificial Intelligence:  “As AI capabilities evolve rapidly, AI research is also undergoing a fast and significant transformation along many dimensions, including its topics, its methods, the research community, and the working environment. Topics such as AI reasoning and agentic AI have been studied for decades but now have an expanded scope in light of current AI capabilities and limitations. AI ethics and safety, AI for social good, and sustainable AI have become central themes in all major AI conferences. Moreover, research on AI algorithms and software systems is becoming increasingly tied to substantial amounts of dedicated AI hardware, notably GPUs, which leads to AI architecture co-creation, in a way that is more prominent now than over the last three decades.

Related to this shift, more and more AI researchers work in corporate environments, where the necessary hardware and other resources are more easily available compared to academia, raising questions about the roles of academic AI research, student retention, and faculty recruiting. The pervasive use of AI in our daily lives and its impact on people, society, and the environment makes AI a socio-technical field of study, thus highlighting the need for AI researchers to work with experts from other disciplines, such as psychologists, sociologists, philosophers, and economists. The growing focus on emergent AI behaviors rather than on designed and validated properties of AI systems renders principled empirical evaluation more important than ever. Hence the need arises for well-designed benchmarks, test methodologies, and sound processes to infer conclusions from the results of computational experiments.

The exponentially increasing quantity of AI research publications and the speed of AI innovation are testing the resilience of the peer-review system, with the immediate release of papers without peer-review evaluation having become widely accepted across many areas of AI research. Legacy and social media increasingly cover AI research advancements, often with contradictory statements that confuse readers and blur the line between reality and perception of AI capabilities. All this is happening in a geopolitical environment in which companies and countries compete fiercely and globally to lead the AI race. This rivalry may impact access to research results and infrastructure as well as global governance efforts, underscoring the need for international cooperation in AI research and innovation.
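
The report's call for sound processes to infer conclusions from computational experiments can be made concrete. Here is a minimal sketch, using synthetic per-item scores on an invented 200-item benchmark, of a paired bootstrap check on whether an observed gap between two models is stable under resampling:

```python
import random

# Synthetic per-item accuracy (1 = correct) for two models evaluated on
# the same 200 benchmark items; in practice these come from real runs.
random.seed(0)
model_a = [1 if random.random() < 0.78 else 0 for _ in range(200)]
model_b = [1 if random.random() < 0.72 else 0 for _ in range(200)]

observed_gap = (sum(model_a) - sum(model_b)) / len(model_a)

# Paired bootstrap: resample items with replacement, keeping each item's
# two scores together, and count how often the gap disappears or reverses.
reversals = 0
trials = 10_000
for _ in range(trials):
    idx = [random.randrange(len(model_a)) for _ in range(len(model_a))]
    gap = sum(model_a[i] - model_b[i] for i in idx) / len(idx)
    if gap <= 0:
        reversals += 1

print(f"observed gap: {observed_gap:+.3f}")
print(f"fraction of resamples where B >= A: {reversals / trials:.4f}")
```

A small reversal fraction suggests the gap is robust to which items happened to land in the benchmark; a large one suggests the "win" is noise.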

In this overwhelming multi-dimensional and very dynamic scenario, it is important to be able to clearly identify the trajectory of AI research in a structured way. Such an effort can define the current trends and the research challenges still ahead of us to make AI more capable and reliable, so we can safely use it in mundane and, most importantly, in high-stakes scenarios.

This study aims to do this by including 17 topics related to AI research, covering most of the transformations mentioned above. Each chapter of the study is devoted to one of these topics, sketching its history, current trends and open challenges…(More)”.

Legitimacy: Working hypotheses


Report by TIAL: “Today more than ever, legitimacy is a vital resource for institutions seeking to lead and sustain impactful change. Yet, it can be elusive.

What does it truly mean for an institution to be legitimate? This publication delves into legitimacy as both a practical asset and a dynamic process, offering institutional entrepreneurs the tools to understand, build, and sustain it over time.

Legitimacy is not a static quality, nor is it purely theoretical. Instead, it’s grounded in the beliefs of those who interact with or are governed by an institution. These beliefs shape whether people view an institution’s authority as rightful and worth supporting. Drawing from social science research and real-world insights, this publication provides a framework to help institutional entrepreneurs address one of the most important challenges of institutional design: ensuring their legitimacy is sufficient to achieve their goals.

The paper emphasizes that legitimacy is relational and contextual. Institutions gain it through three primary sources: outcomes (delivering results), fairness (ensuring just processes), and correct procedures (following accepted norms). However, the need for legitimacy varies depending on the institution’s size, scope, and mission. For example, a body requiring elite approval may need less legitimacy than one relying on mass public trust.

Legitimacy is also dynamic—it ebbs and flows in response to external factors like competition, crises, and shifting societal narratives. Institutional entrepreneurs must anticipate these changes and actively manage their strategies for maintaining legitimacy. This publication highlights actionable steps for doing so, from framing mandates strategically to fostering public trust through transparency and communication.

By treating legitimacy as a resource that evolves over time, institutional entrepreneurs can ensure their institutions remain relevant, trusted, and effective in addressing pressing societal challenges.

Key takeaways

  • Legitimacy is the belief by an audience that an institution’s authority is rightful.
  • Institutions build legitimacy through outcomes, fairness, and correct procedures.
  • The need for legitimacy depends on an institution’s scope and mission.
  • Legitimacy is dynamic and shaped by external factors like crises and competition.
  • A portfolio approach to legitimacy—balancing outcomes, fairness, and procedure—is more resilient.
  • Institutional entrepreneurs must actively manage perceptions and adapt to changing contexts.
  • This publication offers practical frameworks to help institutional entrepreneurs build and sustain legitimacy…(More)”.

AI could supercharge human collective intelligence in everything from disaster relief to medical research


Article by Hao Cui and Taha Yasseri: “Imagine a large city recovering from a devastating hurricane. Roads are flooded, the power is down, and local authorities are overwhelmed. Emergency responders are doing their best, but the chaos is massive.

AI-controlled drones survey the damage from above, while intelligent systems process satellite images and data from sensors on the ground and in the air to identify which neighbourhoods are most vulnerable.

Meanwhile, AI-equipped robots are deployed to deliver food, water and medical supplies into areas that human responders can’t reach. Emergency teams, guided and coordinated by AI and the insights it produces, are able to prioritise their efforts, sending rescue squads where they’re needed most.

This is no longer the realm of science fiction. In a recent paper published in the journal Patterns, we argue that it’s an emerging and inevitable reality.

Collective intelligence is the shared intelligence of a group or groups of people working together. Different groups of people with diverse skills, such as firefighters and drone operators, work together to generate better ideas and solutions. AI can enhance this human collective intelligence and transform how we approach large-scale crises. It’s a form of what’s called hybrid collective intelligence.

Instead of simply relying on human intuition or traditional tools, experts can use AI to process vast amounts of data, identify patterns and make predictions. By enhancing human decision-making, AI systems offer faster and more accurate insights – whether in medical research, disaster response, or environmental protection.

AI can do this by, for example, processing large datasets and uncovering insights that would take much longer for humans to identify. AI can also get involved in physical tasks. In manufacturing, AI-powered robots can automate assembly lines, helping improve efficiency and reduce downtime.
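
As a toy illustration of the kind of pattern-finding and prioritisation described here, consider a minimal sketch that ranks neighbourhoods for response teams. The signals and weights are invented for illustration, not drawn from any deployed system:

```python
# Toy triage: rank neighbourhoods by a weighted vulnerability score
# built from (invented) sensor and census signals.

neighbourhoods = [
    # name, flood depth (m), population density, clinics reachable
    ("Riverside", 1.8, 9500, 0),
    ("Hilltop",   0.2, 4100, 2),
    ("Old Port",  1.1, 7200, 1),
]

def vulnerability(flood_m: float, density: int, clinics: int) -> float:
    # Weights are arbitrary placeholders; a real system would calibrate
    # them against historical response data.
    return 2.0 * flood_m + density / 5000 - 0.5 * clinics

ranked = sorted(neighbourhoods,
                key=lambda n: vulnerability(*n[1:]),
                reverse=True)
for name, *signals in ranked:
    print(f"{name:10s} score={vulnerability(*signals):.2f}")
```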

Equally crucial is information exchange, where AI enhances the flow of information, helping human teams coordinate more effectively and make data-driven decisions faster. Finally, AI can act as a social catalyst to facilitate more effective collaboration within human teams or even help build hybrid teams of humans and machines working alongside one another…(More)”.

China wants tech companies to monetize data, but few are buying in


Article by Lizzi C. Lee: “Chinese firms generate staggering amounts of data daily, from ride-hailing trips to online shopping transactions. A recent policy allowed Chinese companies to record data as assets on their balance sheets, the first such regulation in the world, paving the way for data to be traded in a marketplace and to boost company valuations.

But uptake has been slow. When China Unicom, one of the world’s largest mobile operators, reported its earnings recently, eagle-eyed accountants spotted that the company had listed 204 million yuan ($28 million) in data assets on its balance sheet. The state-owned operator was the first Chinese tech giant to take advantage of the Ministry of Finance’s new corporate data policy, which permits companies to classify data as inventory or intangible assets. 

“No other country is trying to do this on a national level. It could drive global standards of data management and accounting,” Ran Guo, an affiliated researcher at the Asia Society Policy Institute specializing in data governance in China, told Rest of World. 

In 2023 alone, China generated 32.85 zettabytes — more than 27% of the global total, according to a government survey. To put that in perspective, storing this volume on standard 1-terabyte hard drives would require more than 32 billion units…. Tech companies that are data-rich are well-positioned to benefit from logging data as assets to turn the formalized assets into tradable commodities, said Guo. But companies must first invest in secure storage and show that the data is legally obtained in order to meet strict government rules on data security.
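
The drive-count comparison is easy to verify with back-of-the-envelope arithmetic in decimal units (1 ZB = 10^9 TB):

```python
# Decimal units: 1 zettabyte = 10**21 bytes, 1 terabyte = 10**12 bytes,
# so one zettabyte fills 10**9 one-terabyte drives.
zettabytes = 32.85
drives = zettabytes * 10**9

print(f"{drives:,.0f} one-terabyte drives")  # 32,850,000,000 -> more than 32 billion
```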

“This can be costly and complex,” he said. “Not all data qualifies as an asset, and companies must meet stringent requirements.” 

Even China Unicom, a state-owned enterprise, is likely complying with the new policy due to political pressure rather than economic incentive, said Guo, who conducted field research in China last year on the government push for data resource development. The telecom operator did not respond to a request for comment. 

Private technology companies in China, meanwhile, tend to be protective of their data. A Chinese government statement in 2022 pushed private enterprises to “open up their data.” But smaller firms could lack the resources to meet the stringent data storage and consumer protection standards, experts and Chinese tech company employees told Rest of World...(More)”.

Redesigning Public Organizations: From “what” to “how”


Essay by the Transition Collective: “Government organizations and their leaders are in a pinch. They are caught between pressures from politicians, citizens and increasingly complex external environments on the one hand — and from civil servants calling for new ways of working, thriving and belonging on the other hand. They have to enable meaningful, joined-up and efficient services for people, leveraging digital and physical resources, while building an attractive organizational culture. Indeed, the challenge is to build systems as human as the people they are intended to serve.

While this creates massive challenges for public sector organizations, this is also an opportunity to reimagine our institutions to meet the challenges of today and the future. To succeed, we must not only think about other models of organization — we also have to think of other ways of changing them.

Traditionally, we think of the organization as something static, a goal we arrive at or a fixed model we decide upon. If asked to describe their organization, most civil servants will point to an organigram — and more often than not it will consist of a number of boxes and lines, ordered in a hierarchy.

But in today’s world of complex challenges, accelerated frequency of change and dynamic interplay between the public sector and its surroundings, such a fixed model is less and less fit for the purposes it must fulfill. Not only does it not allow the collective intelligence and creativity of the organization’s members to be fully unleashed, it also does not allow for the speed and adaptability required by today’s turbulent environment. It does not allow for truly joined up, meaningful human services.

Unfreezing the organization

Rather than thinking mainly about models and forms, we should think of organizational design as an act or a series of actions. In other words, we should think about the organization not just as a what but also as a how: Less as a set of boxes describing a power hierarchy, and more as a set of living, organic roles and relationships. We need to thaw our organizations out of their frozen state — and keep them warmer and more fluid.

In this piece, we suggest that many efforts to reimagine public sector organizations have failed because the challenge of transforming an organization has been underestimated. We draw on concrete experiences from working with international and Danish public sector institutions, in particular in health and welfare services.

We propose a set of four approaches which, taken together, can support the work of redesigning organizations to be more ambitious, free, human, creative and self-managing — and thus better suited to meet the ever more complex challenges they are faced with…(More)”.

Bayes is not a phase


Blog by dynomight: “Because everyone uses Bayesian reasoning all the time, even if they don’t think of it that way. Arguably, we’re born Bayesian and do it instinctively. It’s normal and natural and—I daresay—almost boring. “Bayesian reasoning” is just a slight formalization of everyday thought.

It’s not a trend. It’s forever. But it’s forever like arithmetic is forever: Strange to be obsessed with it, but really strange to make fun of someone for using it.

Here, I’ll explain what Bayesian reasoning is, why it’s so fundamental, why people argue about it, and why much of that controversy is ultimately a boring semantic debate of no interest to an enlightened person like yourself. Then, for the haters, I’ll give some actually good reasons to be skeptical about how useful it is in practice.

I won’t use any equations. That’s not because I don’t think you can take it, but Bayesian reasoning isn’t math. It’s a concept. The typical explanations use lots of math and kind of gesture around the concept, but never seem to get to the core of it, which I think leads people to miss the forest for the trees…(More)”.
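
For readers who do want the one equation the post deliberately leaves out, Bayes’ rule in standard notation is simply:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Read left to right: your belief in hypothesis H after seeing evidence E is your prior belief in H, reweighted by how strongly H predicts E.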

Open Data Under Attack: How to Find Data and Why It Is More Important Than Ever


Article by Jessica Hilburn: “This land was made for you and me, and so was the data collected with our taxpayer dollars. Open data is data that is accessible, shareable, and able to be used by anyone. While any person, company, or organization can create and publish open data, the federal and state governments are by far the largest providers of open data.

President Barack Obama codified the importance of government-created open data in his May 9, 2013, executive order as a part of the Open Government Initiative. This initiative was meant to “ensure the public trust and establish a system of transparency, public participation, and collaboration” in furtherance of strengthening democracy and increasing efficiency. The initiative also launched Project Open Data (since replaced by the Resources.data.gov platform), which documented best practices and offered tools so government agencies in every sector could open their data and contribute to the collective public good. As has been made readily apparent, the era of public good through open data is now under attack.

Immediately after his inauguration, President Donald Trump signed a slew of executive orders, many of which targeted diversity, equity, and inclusion (DEI) for removal in federal government operations. Unsurprisingly, a large number of federal datasets include information dealing with diverse populations, equitable services, and inclusion of marginalized groups. Other datasets deal with information on topics targeted by those with nefarious agendas—vaccination rates, HIV/AIDS, and global warming, just to name a few. In the wake of these executive orders, datasets and website pages with blacklisted topics, tags, or keywords suddenly disappeared—more than 8,000 of them. In addition, President Trump fired the National Archivist, and top National Archives and Records Administration officials are being ousted, putting the future of our collective history at enormous risk.

While it is common practice to archive websites and information in the transition between administrations, it is unprecedented for the incoming administration to cull data altogether. In response, unaffiliated organizations are ramping up efforts to separately archive data and information for future preservation and access. Web scrapers are being used to grab as much data as possible, but since this method is automated, data requiring a login or a bot challenge (like a captcha) is left behind. The future information gap that researchers will be left to grapple with could be catastrophic for progress in crucial areas, including weather, natural disasters, and public health. Though there are efforts to put out the fire, such as the federal order to restore certain resources, the people’s library is burning. The losses will be permanently felt…

Data is a weapon, whether we like it or not. Free and open access to information—about democracy, history, our communities, and even ourselves—is the foundation of library service. It is time for anyone who continues to claim that libraries are not political to wake up before it is too late. Are libraries still not political when the Pentagon barred library access for tens of thousands of American children attending Pentagon schools on military bases while they examined and removed supposed “radical indoctrination” books? Are libraries still not political when more than 1,000 unique titles are being targeted for censorship annually, and soft censorship through preemptive restriction to avoid controversy is surely occurring and impossible to track? It is time for librarians and library workers to embrace being political.

In a country where the federal government now denies that certain people even exist, claims that children are being indoctrinated because they are being taught the good and bad of our nation’s history, and rescinds support for the arts, humanities, museums, and libraries, there is no such thing as neutrality. When compassion and inclusion are labeled the enemy and the diversity created by our great American experiment is lambasted as a social ill, claiming that libraries are neutral or apolitical is not only incorrect, it’s complicit. To update the quote, information is the weapon in the war of ideas. Librarians are the stewards of information. We don’t want to be the Americans who protested in 1933 at the first Nazi book burnings and then, despite seeing the early warning signs of catastrophe, retreated into the isolation of their own concerns. The people’s library is on fire. We must react before all that is left of our profession is ash…(More)”.
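
The snapshot scraping the article describes can be sketched in a few lines. This is a minimal illustration with a hypothetical URL list; real archiving efforts also handle robots.txt, rate limits, retries, and far larger crawl frontiers:

```python
import hashlib
import time
from pathlib import Path

import requests

# Hypothetical list of public dataset pages to snapshot; a real effort
# would crawl catalog indexes rather than hand-pick URLs.
URLS = [
    "https://example.gov/dataset/air-quality",
    "https://example.gov/dataset/vaccination-rates",
]

out = Path("snapshots")
out.mkdir(exist_ok=True)

for url in URLS:
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
    except requests.RequestException as err:
        # Pages behind logins or CAPTCHAs fail here and are left behind,
        # which is exactly the gap the article describes.
        print(f"skipped {url}: {err}")
        continue
    name = hashlib.sha256(url.encode()).hexdigest()[:16] + ".html"
    (out / name).write_bytes(resp.content)
    print(f"saved {url} -> {name}")
    time.sleep(1)  # be polite to the server
```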

Emerging Practices in Participatory AI Design in Public Sector Innovation


Paper by Devansh Saxena et al.: “Local and federal agencies are rapidly adopting AI systems to augment or automate critical decisions, efficiently use resources, and improve public service delivery. AI systems are being used to support tasks associated with urban planning, security, surveillance, energy and critical infrastructure, and support decisions that directly affect citizens and their ability to access essential services. Local governments act as the governance tier closest to citizens and must play a critical role in upholding democratic values and building community trust especially as it relates to smart city initiatives that seek to transform public services through the adoption of AI. Community-centered and participatory approaches have been central for ensuring the appropriate adoption of technology; however, AI innovation introduces new challenges in this context because participatory AI design methods require more robust formulation and face higher standards for implementation in the public sector compared to the private sector. This requires us to reassess traditional methods used in this space as well as develop new resources and methods. This workshop will explore emerging practices in participatory algorithm design – or the use of public participation and community engagement – in the scoping, design, adoption, and implementation of public sector algorithms…(More)”.