Future of AI Research


Report by the Association for the Advancement of Artificial Intelligence:  “As AI capabilities evolve rapidly, AI research is also undergoing a fast and significant transformation along many dimensions, including its topics, its methods, the research community, and the working environment. Topics such as AI reasoning and agentic AI have been studied for decades but now have an expanded scope in light of current AI capabilities and limitations. AI ethics and safety, AI for social good, and sustainable AI have become central themes in all major AI conferences. Moreover, research on AI algorithms and software systems is becoming increasingly tied to substantial amounts of dedicated AI hardware, notably GPUs, which leads to AI architecture co-creation, in a way that is more prominent now than over the last 3 decades. Related to this shift, more and more AI researchers work in corporate environments, where the necessary hardware and other resources are more easily available, compared to academia, questioning the roles of academic AI research, student retention, and faculty recruiting. The pervasive use of AI in our daily lives and its impact on people, society, and the environment makes AI a socio-technical field of study, thus highlighting the need for AI researchers to work with experts from other disciplines, such as psychologists, sociologists, philosophers, and economists. The growing focus on emergent AI behaviors rather than on designed and validated properties of AI systems renders principled empirical evaluation more important than ever. Hence the need arises for well-designed benchmarks, test methodologies, and sound processes to infer conclusions from the results of computational experiments. 
The exponentially increasing quantity of AI research publications and the speed of AI innovation are testing the resilience of the peer-review system, with the immediate release of papers without peer-review evaluation having become widely accepted across many areas of AI research. Legacy and social media increasingly cover AI research advancements, often with contradictory statements that confuse the readers and blur the line between reality and perception of AI capabilities. All this is happening in a geo-political environment, in which companies and countries compete fiercely and globally to lead the AI race. This rivalry may impact access to research results and infrastructure as well as global governance efforts, underscoring the need for international cooperation in AI research and innovation.

In this overwhelming, multi-dimensional and very dynamic scenario, it is important to be able to clearly identify the trajectory of AI research in a structured way. Such an effort can define the current trends and the research challenges still ahead of us to make AI more capable and reliable, so we can safely use it in mundane but also, most importantly, in high-stakes scenarios.

This study aims to do this by including 17 topics related to AI research, covering most of the transformations mentioned above. Each chapter of the study is devoted to one of these topics, sketching its history, current trends and open challenges…(More)”.

AI could supercharge human collective intelligence in everything from disaster relief to medical research


Article by Hao Cui and Taha Yasseri: “Imagine a large city recovering from a devastating hurricane. Roads are flooded, the power is down, and local authorities are overwhelmed. Emergency responders are doing their best, but the chaos is massive.

AI-controlled drones survey the damage from above, while intelligent systems process satellite images and data from sensors on the ground and in the air to identify which neighbourhoods are most vulnerable.

Meanwhile, AI-equipped robots are deployed to deliver food, water and medical supplies into areas that human responders can’t reach. Emergency teams, guided and coordinated by AI and the insights it produces, are able to prioritise their efforts, sending rescue squads where they’re needed most.

This is no longer the realm of science fiction. In a recent paper published in the journal Patterns, we argue that it’s an emerging and inevitable reality.

Collective intelligence is the shared intelligence of a group or groups of people working together. Different groups of people with diverse skills, such as firefighters and drone operators, work together to generate better ideas and solutions. AI can enhance this human collective intelligence and transform how we approach large-scale crises. It’s a form of what’s called hybrid collective intelligence.

Instead of simply relying on human intuition or traditional tools, experts can use AI to process vast amounts of data, identify patterns and make predictions. By enhancing human decision-making, AI systems offer faster and more accurate insights – whether in medical research, disaster response, or environmental protection.

AI can do this by, for example, processing large datasets and uncovering insights that would take humans much longer to identify. AI can also get involved in physical tasks. In manufacturing, AI-powered robots can automate assembly lines, helping improve efficiency and reduce downtime.

Equally crucial is information exchange, where AI enhances the flow of information, helping human teams coordinate more effectively and make data-driven decisions faster. Finally, AI can act as a social catalyst to facilitate more effective collaboration within human teams, or even help build hybrid teams of humans and machines working alongside one another…(More)”.

China wants tech companies to monetize data, but few are buying in


Article by Lizzi C. Lee: “Chinese firms generate staggering amounts of data daily, from ride-hailing trips to online shopping transactions. A recent policy allowed Chinese companies to record data as assets on their balance sheets, the first such regulation in the world, paving the way for data to be traded in a marketplace and to boost company valuations. 

But uptake has been slow. When China Unicom, one of the world’s largest mobile operators, reported its earnings recently, eagle-eyed accountants spotted that the company had listed 204 million yuan ($28 million) in data assets on its balance sheet. The state-owned operator was the first Chinese tech giant to take advantage of the Ministry of Finance’s new corporate data policy, which permits companies to classify data as inventory or intangible assets. 

“No other country is trying to do this on a national level. It could drive global standards of data management and accounting,” Ran Guo, an affiliated researcher at the Asia Society Policy Institute specializing in data governance in China, told Rest of World. 

In 2023 alone, China generated 32.85 zettabytes — more than 27% of the global total, according to a government survey. To put that in perspective, storing this volume on standard 1-terabyte hard drives would require more than 32 billion units… Tech companies that are data-rich are well-positioned to benefit from logging data as assets and turning the formalized assets into tradable commodities, said Guo. But companies must first invest in secure storage and show that the data is legally obtained in order to meet strict government rules on data security. 

“This can be costly and complex,” he said. “Not all data qualifies as an asset, and companies must meet stringent requirements.” 
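The hard-drive comparison above is plain unit arithmetic and can be checked in a few lines (assuming decimal SI units: 1 ZB = 10**21 bytes, 1 TB = 10**12 bytes):

```python
# Sanity check of the figure above: 32.85 zettabytes expressed as a count of
# standard 1-terabyte hard drives, using decimal SI units.
BYTES_PER_ZB = 10**21
BYTES_PER_TB = 10**12

data_zb = 32.85  # China's 2023 data output, per the cited government survey
drives_needed = data_zb * BYTES_PER_ZB / BYTES_PER_TB

print(f"{drives_needed:,.0f} one-terabyte drives")  # ≈ 32.85 billion
```

which confirms the article's "more than 32 billion units".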

Even China Unicom, a state-owned enterprise, is likely complying with the new policy due to political pressure rather than economic incentive, said Guo, who conducted field research in China last year on the government push for data resource development. The telecom operator did not respond to a request for comment. 

Private technology companies in China, meanwhile, tend to be protective of their data. A Chinese government statement in 2022 pushed private enterprises to “open up their data.” But smaller firms could lack the resources to meet the stringent data storage and consumer protection standards, experts and Chinese tech company employees told Rest of World...(More)”.

Open Data Under Attack: How to Find Data and Why It Is More Important Than Ever


Article by Jessica Hilburn: “This land was made for you and me, and so was the data collected with our taxpayer dollars. Open data is data that is accessible, shareable, and able to be used by anyone. While any person, company, or organization can create and publish open data, the federal and state governments are by far the largest providers of open data.

President Barack Obama codified the importance of government-created open data in his May 9, 2013, executive order as a part of the Open Government Initiative. This initiative was meant to “ensure the public trust and establish a system of transparency, public participation, and collaboration” in furtherance of strengthening democracy and increasing efficiency. The initiative also launched Project Open Data (since replaced by the Resources.data.gov platform), which documented best practices and offered tools so government agencies in every sector could open their data and contribute to the collective public good. As has been made readily apparent, the era of public good through open data is now under attack.

Immediately after his inauguration, President Donald Trump signed a slew of executive orders, many of which targeted diversity, equity, and inclusion (DEI) for removal in federal government operations. Unsurprisingly, a large number of federal datasets include information dealing with diverse populations, equitable services, and inclusion of marginalized groups. Other datasets deal with information on topics targeted by those with nefarious agendas—vaccination rates, HIV/AIDS, and global warming, just to name a few. In the wake of these executive orders, datasets and website pages with blacklisted topics, tags, or keywords suddenly disappeared—more than 8,000 of them. In addition, President Trump fired the National Archivist, and top National Archives and Records Administration officials are being ousted, putting the future of our collective history at enormous risk.

While it is common practice to archive websites and information in the transition between administrations, it is unprecedented for the incoming administration to cull data altogether. In response, unaffiliated organizations are ramping up efforts to separately archive data and information for future preservation and access. Web scrapers are being used to grab as much data as possible, but since this method is automated, data requiring a login or a bot challenge (like a CAPTCHA) is left behind. The future information gap that researchers will be left to grapple with could be catastrophic for progress in crucial areas, including weather, natural disasters, and public health. Though there are efforts to put out the fire, such as the federal order to restore certain resources, the people’s library is burning. The losses will be permanently felt…Data is a weapon, whether we like it or not. Free and open access to information—about democracy, history, our communities, and even ourselves—is the foundation of library service. It is time for anyone who continues to claim that libraries are not political to wake up before it is too late. Are libraries still not political when the Pentagon barred library access for tens of thousands of American children attending Pentagon schools on military bases while they examined and removed supposed “radical indoctrination” books? Are libraries still not political when more than 1,000 unique titles are being targeted for censorship annually, and soft censorship through preemptive restriction to avoid controversy is surely occurring and impossible to track? It is time for librarians and library workers to embrace being political.

In a country where the federal government now denies that certain people even exist, claims that children are being indoctrinated because they are being taught the good and bad of our nation’s history, and rescinds support for the arts, humanities, museums, and libraries, there is no such thing as neutrality. When compassion and inclusion are labeled the enemy and the diversity created by our great American experiment is lambasted as a social ill, claiming that libraries are neutral or apolitical is not only incorrect, it’s complicit. To update the quote, information is the weapon in the war of ideas. Librarians are the stewards of information. We don’t want to be the Americans who protested in 1933 at the first Nazi book burnings and then, despite seeing the early warning signs of catastrophe, retreated into the isolation of their own concerns. The people’s library is on fire. We must react before all that is left of our profession is ash…(More)”.

Data equity and official statistics in the age of private sector data proliferation


Paper by Pietro Gennari: “Over the last few years, the private sector has become a primary generator of data due to widespread digitisation of the economy and society, the use of social media platforms, and advancements of technologies like the Internet of Things and AI. Unlike traditional sources, these new data streams often offer real-time information and unique insights into people’s behaviour, social dynamics, and economic trends. However, the proprietary nature of most private sector data presents challenges for public access, transparency, and governance that have led to fragmented, often conflicting, data governance arrangements worldwide. This lack of coherence can exacerbate inequalities, limit data access, and restrict data’s utility as a global asset.

Within this context, data equity has emerged as one of the key principles at the basis of any proposal of new data governance framework. The term “data equity” refers to the fair and inclusive access, use, and distribution of data so that it benefits all sections of society, regardless of socioeconomic status, race, or geographic location. It involves making sure that the collection, processing, and use of data does not disproportionately benefit or harm any particular group and seeks to address disparities in data access and quality that can perpetuate social and economic inequalities. This is important because data systems significantly influence access to resources and opportunities in society. In this sense, data equity aims to correct imbalances that have historically affected various groups and to ensure that decision-making based on data does not perpetuate these inequities…(More)”.

The Data Innovation Toolkit


Toolkit by Maria Claudia Bodino, Nathan da Silva Carvalho, Marcelo Cogo, Arianna Dafne Fini Storchi, and Stefaan Verhulst: “Despite the abundance of data, the excitement around AI, and the potential for transformative insights, many public administrations struggle to translate data into actionable strategies and innovations. 

Public servants working with data-related initiatives need practical, easy-to-use resources designed to enhance the management of data innovation initiatives. 

In order to address these needs, the iLab of DG DIGIT from the European Commission is developing an initial set of practical tools designed to facilitate and enhance the implementation of data-driven initiatives. The main building blocks of the first version of the Data Innovation Toolkit include: 

  1. Repository of educational materials and resources on the latest data innovation approaches from the public sector, academia, NGOs and think tanks 
  2. An initial set of practical resources, for example: 
     - Workshop Templates to offer structured formats for conducting productive workshops that foster collaboration, ideation, and problem-solving. 
     - Checklists to ensure that all data journey aspects and steps are properly assessed. 
     - Interactive Exercises to engage team members in hands-on activities that build skills and facilitate understanding of key concepts and methodologies. 
     - Canvas Models to provide visual frameworks for planning and brainstorming….(More)”.

How tax data unlocks new insights for industrial policy


OECD article: “Value-added tax (VAT) is a consumption tax applied at each stage of the supply chain whenever value is added to goods or services. Businesses collect and remit VAT. The VAT data that are collected represent a breakthrough in studying production networks because they capture actual transactions between firms at an unprecedented level of detail. Unlike traditional business surveys or administrative data that might tell us about a firm’s size or industry, VAT records show us who does business with whom and for how much.

This data is particularly valuable because of its comprehensive coverage. In Estonia, for example, all VAT-registered businesses must report transactions above €1,000 per month, creating an almost complete picture of significant business relationships in the economy.
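The firm-to-firm network such filings enable can be sketched with toy data (the firm names, amounts, and the per-pair monthly aggregation below are all invented for illustration; real VAT schemas are far richer):

```python
from collections import defaultdict

# Toy VAT transaction records for one month: (seller, buyer, amount in euros).
records = [
    ("firm_A", "firm_B", 12_500),
    ("firm_A", "firm_C", 600),
    ("firm_A", "firm_C", 250),   # pair total 850: stays under the threshold
    ("firm_B", "firm_C", 4_200),
    ("firm_C", "firm_A", 1_050),
]

REPORTING_THRESHOLD_EUR = 1_000  # per counterparty per month, as in Estonia

# Aggregate by (seller, buyer) pair, then keep only pairs above the threshold,
# yielding the directed supplier -> buyer network the article describes.
totals = defaultdict(int)
for seller, buyer, amount in records:
    totals[(seller, buyer)] += amount

network = {pair: total for pair, total in totals.items()
           if total > REPORTING_THRESHOLD_EUR}

for (seller, buyer), total in sorted(network.items()):
    print(f"{seller} -> {buyer}: EUR {total:,}")
```

The "who does business with whom and for how much" detail is exactly what the resulting edge list captures, and why a reporting threshold hides only the smallest links.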

At least 15 countries now have such data available, including Belgium, Chile, Costa Rica, Estonia, and Italy. This growing availability creates opportunities for cross-country comparison and broader economic insights…(More)”.

Farmers Sue Over Deletion of Climate Data From Government Websites


Article by Karen Zraick: “Organic farmers and environmental groups sued the Agriculture Department on Monday over its scrubbing of references to climate change from its website.

The department had ordered staff to take down pages focused on climate change on Jan. 30, according to the suit, which was filed in the United States District Court for the Southern District of New York. Within hours, it said, information started disappearing.

That included websites containing data sets, interactive tools and funding information that farmers and researchers relied on for planning and adaptation projects, according to the lawsuit.

At the same time, the department also froze funding that had been promised to businesses and nonprofits through conservation and climate programs. The purge then “removed critical information about these programs from the public record, denying farmers access to resources they need to advocate for funds they are owed,” it said.

The Agriculture Department referred questions about the lawsuit to the Justice Department, which did not immediately respond to a request for comment.

The suit was filed by lawyers from Earthjustice, based in San Francisco, and the Knight First Amendment Institute at Columbia University, on behalf of the Northeast Organic Farming Association of New York, based in Binghamton; the Natural Resources Defense Council, based in New York; and the Environmental Working Group, based in Washington. The latter two groups relied on the department website for their research and advocacy, the lawsuit said.

Peter Lehner, a lawyer for Earthjustice, said the pages being purged were crucial for farmers facing risks linked to climate change, including heat waves, droughts, floods, extreme weather and wildfires. The websites had contained information about how to mitigate dangers and adopt new agricultural techniques and strategies. Long-term weather data and trends are valuable in the agriculture industry for planning, research and business strategy.

“You can purge a website of the words climate change, but that doesn’t mean climate change goes away,” Mr. Lehner said…(More)”.

Governing in the Age of AI: Building Britain’s National Data Library


Report by the Tony Blair Institute for Global Change: “The United Kingdom should lead the world in artificial-intelligence-driven innovation, research and data-enabled public services. It has the data, the institutions and the expertise to set the global standard. But without the right infrastructure, these advantages are being wasted.

The UK’s data infrastructure, like that of every nation, is built around outdated assumptions about how data create value. It is fragmented and unfit for purpose. Public-sector data are locked in silos, access is slow and inconsistent, and there is no system to connect and use these data effectively, or any framework for deciding what additional data would be most valuable to collect given AI’s capabilities.

As a result, research is stalled, AI adoption is held back, and the government struggles to plan services, target support and respond to emerging challenges. This affects everything from developing new treatments to improving transport, tackling crime and ensuring economic policies help those who need them. While some countries are making progress in treating existing data as strategic assets, none have truly reimagined data infrastructure for an AI-enabled future…(More)”

On the Shoulders of Others: The Importance of Regulatory Learning in the Age of AI


Paper by Urs Gasser and Viktor Mayer-Schonberger: “…International harmonization of regulation is the right strategy when the appropriate regulatory ends and means are sufficiently clear to reap efficiencies of scale and scope. When this is not the case, a push for efficiency through uniformity is premature and may lead to a suboptimal regulatory lock-in: the establishment of a rule framework that is either inefficient in the use of its means to reach the intended goal, or furthers the wrong goal, or both.


A century ago, economist Joseph Schumpeter suggested that companies have two distinct strategies to achieve success. The first is to employ economies of scale and scope to lower their cost. It’s essentially a push for improved efficiency. The other strategy is to invent a new product (or production process) that may not, at least initially, be hugely efficient, but is nevertheless advantageous because demand for the new product is price inelastic. For Schumpeter this was the essence of innovation. But, as Schumpeter also argued, innovation is not a simple, linear, and predictable process. Often, it happens in fits and starts, and can’t be easily commandeered or engineered.


As innovation is hard to foresee and plan, the best way to facilitate it is to enable a wide variety of different approaches and solutions. Public policies in many countries to foster startups and entrepreneurship stem from this view. Take, for instance, the policy of regulatory sandboxing, i.e. the idea that for a limited time certain sectors should be regulated only lightly or not at all…(More)”.