Article by Stefaan Verhulst: "Despite decades of investment in statistical systems and open data initiatives, official data remains difficult to discover, interpret, and apply in practice. The challenge is no longer one of availability, but of (re)usability. This persistent gap underscores a broader paradox at the heart of contemporary data governance: data may be open, yet it remains functionally inaccessible for many intended users.
In this context, the International Monetary Fund has been a pioneer in exploring how artificial intelligence and open data can intersect to address this usability challenge. Its StatGPT: AI for Official Statistics report, by James Tebrake, Bachir Boukherouaa, Jeff Danforth, and Niva Harikrishnan, offers a timely and important contribution to this evolving conversation – pointing toward a future where AI can make official data more navigable, interpretable, and actionable.
The report provides a detailed account of the friction users face across the data lifecycle. Even highly motivated users must navigate fragmented portals, inconsistent terminology, and siloed datasets, often spending significant time assembling information that should be readily accessible.
The result is a fragmented ecosystem in which metadata and data are distributed across institutions and platforms, forcing users to navigate multiple systems and standards, and to reconstruct context, before they can assess whether the data is (re)usable.
This resonates strongly with broader observations across the open data ecosystem: access alone does not guarantee impact. Without the ability to meaningfully engage with data, openness risks becoming performative rather than transformative…(More)”.