The Open Data Maturity Ranking is shoddy – it badly needs to be re-thought


Article by Olesya Grabova: “Digitalising government is essential for Europe’s future innovation and economic growth, and one of the keys to achieving this is open data – information that public entities gather, create, or fund, and that is accessible to all to use freely.

This includes everything from public budget details to transport schedules. Open data’s benefits are vast — it fuels research, boosts innovation, and can even save lives in wartime through the creation of chatbots with information about bomb shelter locations. It’s estimated that its economic value will reach a total of EUR 194 billion for EU countries and the UK by 2030.

This is why correctly measuring European countries’ progress in open data is so important, and why the European Commission developed the Open Data Maturity (ODM) ranking, which annually assesses open data quality, policies, online portals, and impact across 35 European countries.

Alas, it doesn’t work as well as it should, and this needs to be addressed.

A closer look at the report’s overall approach reveals that the ranking hardly reflects countries’ real progress on open data. Rather than guiding countries towards genuine improvement, this flawed system risks misrepresenting their actual progress and misleading citizens about their country’s advancement, which further stalls opportunities for innovation.

Take Slovakia. It’s apparently the biggest climber, leaping from 29th to 10th place in just over a year. One would expect that the country has made significant progress in making public sector information available and stimulating its reuse – one of the ODM assessment’s key elements.

A deeper examination reveals that this isn’t the case. Looking at the ODM’s methodology highlights where it falls short… and how it can be fixed…(More)”.
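
To see how a composite ranking of this kind can shift sharply without a matching change on the ground, consider a minimal sketch of the aggregation step. The four dimensions mirror the ODM’s (policy, portal, quality, impact), but the weights, the maximum points per dimension, and the country scores below are invented for illustration; this is not the Commission’s actual methodology.

```python
# Hypothetical illustration only: a weighted composite score built from four
# dimension scores, in the spirit of the ODM's structure. The weights, the
# assumed 700-point maximum per dimension and the country scores are invented.
import pandas as pd

weights = {"policy": 0.25, "portal": 0.25, "quality": 0.25, "impact": 0.25}
max_points = 700  # assumed maximum attainable per dimension

scores = pd.DataFrame(
    {
        "policy":  [620, 540, 480],
        "portal":  [580, 610, 530],
        "quality": [500, 450, 560],
        "impact":  [640, 700, 470],
    },
    index=["Country A", "Country B", "Country C"],
)

# Composite maturity score as a percentage of the attainable total, then ranked.
composite = sum(weights[d] * scores[d] / max_points for d in weights) * 100
ranking = composite.rank(ascending=False).astype(int)

print(pd.DataFrame({"composite_%": composite.round(1), "rank": ranking}))
```

Under a structure like this, a handful of extra points on one dimension, or a change in how a dimension is scored between editions, can move a country many places in the rank order without any comparable change on the ground; this is the kind of distortion the article goes on to describe.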

The economic research policymakers actually need


Blog by Jed Kolko: “…The structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.

The most useful research often came instead from regional Federal Reserve banks, non-partisan think tanks, the corporate sector, and academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:

  1. New measures of the economy
  2. Broad literature reviews
  3. Analyses that directly quantify or simulate policy decisions.

If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.

The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about the ways the economy suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals. Here are some examples…(More)”.

Millions of gamers advance biomedical research


Article by McGill: “…4.5 million gamers around the world have advanced medical science by helping to reconstruct microbial evolutionary histories using a minigame included inside the critically and commercially successful video game, Borderlands 3. Their playing has led to a significantly refined estimate of the relationships of microbes in the human gut. The results of this collaboration will both substantially advance our knowledge of the microbiome and improve on the AI programs that will be used to carry out this work in future.

By playing Borderlands Science, a mini-game within the looter-shooter video game Borderlands 3, these players have helped trace the evolutionary relationships of more than a million different kinds of bacteria that live in the human gut, some of which play a crucial role in our health. This information represents an exponential increase in what we have discovered about the microbiome to date. By aligning rows of tiles that represent the genetic building blocks of different microbes, humans have been able to take on tasks that even the best existing computer algorithms have so far been unable to solve…(More) (and More)”.
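
The puzzle the players solve maps onto the classic sequence-alignment problem from bioinformatics. As a rough illustration of the underlying computational task (not the game’s code or the researchers’ pipeline), here is a minimal Needleman-Wunsch global alignment scorer; the example sequences are made up.

```python
# Minimal illustration: Needleman-Wunsch global alignment, the dynamic-programming
# analogue of lining up rows of tiles so that matching "building blocks" share a column.
def needleman_wunsch(a: str, b: str, match: int = 1, mismatch: int = -1, gap: int = -1) -> int:
    """Return the optimal global alignment score between sequences a and b."""
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Example with two short, made-up sequence fragments.
print(needleman_wunsch("GATTACA", "GCATGCA"))
```

Exact dynamic programming of this kind becomes impractical when vast numbers of sequences must be aligned together, which is one reason the task was handed to human puzzle-solvers and to the AI systems trained on their answers.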

Evidence Ecosystems and the Challenge of Humanising and Normalising Evidence


Article by Geoff Mulgan: “It is reasonable to assume that the work of governments, businesses and civil society goes better if the people making decisions are well-informed, using reliable facts and strong evidence rather than only hunch and anecdote. The term ‘evidence ecosystem’ is a useful shorthand for the results of systematic attempts to make this easier, enabling decision makers, particularly in governments, to access the best available evidence, in easily digestible forms and when it’s needed.

…This sounds simple.  But these ecosystems are as varied as ecosystems in nature.  How they work depends on many factors, including how political or technical the issues are; the presence or absence of confident, well-organised professions; the availability of good quality evidence; whether there is a political culture that values research; and much more.

In particular, the paper argues that the next generation of evidence ecosystems need a sharper understanding of how the supply of evidence meets demand, and the human dimension of evidence.  That means cultivating lasting relationships rather than relying too much on a linear flow of evidence from researchers to decision-makers; it means using conversation as much as prose reports to ensure evidence is understood and acted on; and it means making use of stories as well as dry analysis.  It depends, in other words, on recognising that the users of evidence are humans.

In terms of prescription, the paper emphasises:

  • Sustainability/normalisation: the best approaches are embedded, part of the daily life of decision-making rather than depending on one-off projects and programmes.  This applies both to evidence and to data.  Yet embeddedness is the exception rather than the rule.
  • Multiplicity: multiple types of knowledge, and logics, are relevant to decisions, which is why people and institutions that understand these different logics are so vital.  
  • Credibility and relationships: the intermediaries who connect the supply and demand of knowledge need to be credible, combining depth of knowledge with an ability to interpret it for diverse audiences. They also need to create and maintain relationships, usually based on a place or a topic, which take time to develop; the communication of evidence is often done best in conversation.
  • Stories: influencing decision-makers depends on indirect as well as direct communication, since the media in all their forms play a crucial role in validating evidence and evidence travels best with stories, vignettes and anecdotes.

In short, while evidence is founded on rigorous analysis, good data and robust methods, it also needs to be humanised – embedded in relationships, brought alive in conversations and vivid, human stories – and normalised, becoming part of everyday work…(More)”.

How Copyright May Destroy Our Access To The World’s Academic Knowledge


Article by Glyn Moody: “The shift from analogue to digital has had a massive impact on most aspects of life. One area where that shift has the potential for huge benefits is in the world of academic publishing. Academic papers are costly to publish and distribute on paper, but in a digital format they can be shared globally for almost no cost. That’s one of the driving forces behind the open access movement. But as Walled Culture has reported, resistance from the traditional publishing world has slowed the shift to open access, and undercut the benefits that could flow from it.

That in itself is bad news, but new research from Martin Paul Eve (available as open access) shows that the way the shift to digital has been managed by publishers brings with it a new problem. For all their flaws, analogue publications have the great virtue that they are durable: once a library has a copy, it is likely to be available for decades, if not centuries. Digital scholarly articles come with no such guarantee. The Internet is constantly in flux, with many publishers and sites closing down each year, often without notice. That’s a problem when sites holding archival copies of scholarly articles vanish, making it harder, perhaps impossible, to access important papers. Eve explored whether publishers were placing copies of the articles they published in key archives. Ideally, digital papers would be available in multiple archives to ensure resilience, but the reality is that very few publishers did this. Ars Technica has a good summary of Eve’s results:

When Eve broke down the results by publisher, less than 1 percent of the 204 publishers had put the majority of their content into multiple archives. (The cutoff was 75 percent of their content in three or more archives.) Fewer than 10 percent had put more than half their content in at least two archives. And a full third seemed to be doing no organized archiving at all.

At the individual publication level, under 60 percent were present in at least one archive, and over a quarter didn’t appear to be in any of the archives at all. (Another 14 percent were published too recently to have been archived or had incomplete records.)…(More)”.
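
As a rough sketch of how such a breakdown could be computed from per-article archive records (this is not Eve’s actual analysis code; the DataFrame layout and column names are assumptions), something like the following reproduces the three thresholds quoted above:

```python
# Illustrative sketch only: classify publishers by archive coverage in the spirit
# of Eve's analysis. Assumes one row per (publisher, article) and an "archives"
# column holding the list of archives where that article was found; names are made up.
import pandas as pd

def classify_publishers(df: pd.DataFrame) -> pd.DataFrame:
    """Per publisher, compute the share of articles held in >=3, >=2 and 0 archives,
    then apply the cutoffs quoted in the Ars Technica summary."""
    per_article = df.assign(n_archives=df["archives"].apply(len))
    summary = per_article.groupby("publisher").agg(
        share_3plus=("n_archives", lambda n: (n >= 3).mean()),
        share_2plus=("n_archives", lambda n: (n >= 2).mean()),
        share_zero=("n_archives", lambda n: (n == 0).mean()),
    )
    summary["majority_in_multiple"] = summary["share_3plus"] >= 0.75  # "75% in three or more"
    summary["half_in_two"] = summary["share_2plus"] > 0.5
    summary["no_organized_archiving"] = summary["share_zero"] == 1.0  # approximation
    return summary
```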

Monitoring global trade using data on vessel traffic


Article by Graham Pilgrim, Emmanuelle Guidetti and Annabelle Mourougane: “Rising uncertainties and geopolitical tensions, together with more complex trade relations, have increased the demand for data and tools to monitor global trade in a timely manner. At the same time, advances in Big Data Analytics and access to a huge quantity of alternative data – outside the realm of official statistics – have opened new avenues to monitor trade. These data can help identify bottlenecks and disruptions in real time but need to be cleaned and validated.

One such alternative data source is the Automatic Identification System (AIS), developed by the International Maritime Organisation, which facilitates the tracking of vessels across the globe. The system comprises messages transmitted by ships to land-based or satellite receivers, available in near real time. While it was primarily designed to ensure vessel safety, the data is particularly well suited to providing insights on trade developments, as over 80% of international merchandise trade by volume is carried by sea (UNCTAD, 2022). Furthermore, AIS data holds granular vessel information and detailed location data, which, combined with other data sources, can enable the identification of activity at a port (or even berth) level, by vessel type or by the jurisdiction of vessel ownership.

For a number of years, the UN Global Platform has made AIS data available to those compiling official statistics, such as National Statistics Offices (NSOs) or International Organisations. This has facilitated the development of new methodologies, for instance the automated identification of port locations (Irish Central Statistics Office, 2022). The data has also been exploited by data scientists and research centres to monitor trade in specific commodities such as Liquefied Natural Gas (QuantCube Technology, 2022) or to analyse port and shipping operations in a specific country (Tsalamanis et al., 2018). Beyond trade, the dataset has been used to track CO2 emissions from the maritime sector (Clarke et al., 2023).

New work from the OECD Statistics and Data Directorate contributes to existing research in this field in two major ways. First, it proposes a new methodology to identify ports, at a higher level of precision than in past research. Second, it builds indicators to monitor port congestion and trends in maritime trade flows and provides a tool to get detailed information and better understand those flows…(More)”.
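
The excerpt does not spell out the OECD’s port-identification method, but a common starting point with AIS data is to cluster position reports from near-stationary vessels. The sketch below is only illustrative: the column names (mmsi, lat, lon, sog for speed over ground) and the thresholds are assumptions, not the paper’s parameters.

```python
# Illustrative sketch (not the OECD methodology): approximate port locations by
# clustering AIS position reports from near-stationary vessels with DBSCAN.
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

def identify_candidate_ports(ais: pd.DataFrame,
                             max_speed_knots: float = 0.5,
                             eps_km: float = 2.0,
                             min_messages: int = 500) -> pd.DataFrame:
    """Cluster low-speed AIS messages into candidate port areas."""
    # Keep messages from vessels that are effectively stationary (moored or anchored).
    stationary = ais[ais["sog"] <= max_speed_knots]

    # DBSCAN with the haversine metric expects [lat, lon] in radians;
    # eps is expressed as a fraction of the Earth's radius (~6371 km).
    coords = np.radians(stationary[["lat", "lon"]].to_numpy())
    labels = DBSCAN(eps=eps_km / 6371.0,
                    min_samples=min_messages,
                    metric="haversine").fit_predict(coords)

    clustered = stationary.assign(cluster=labels)
    clustered = clustered[clustered["cluster"] >= 0]  # drop noise points (-1)

    # Summarise each cluster: centroid plus how many distinct vessels called there.
    return (clustered.groupby("cluster")
                     .agg(lat=("lat", "mean"),
                          lon=("lon", "mean"),
                          vessels=("mmsi", "nunique"),
                          messages=("mmsi", "size"))
                     .reset_index())
```

Clusters containing many distinct slow-moving vessels are good candidates for ports or individual berths; in practice they would be cross-checked against known port locations before being used to build congestion or trade-flow indicators.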

What Does Information Integrity Mean for Democracies?


Article by Kamya Yadav and Samantha Lai: “Democracies around the world are encountering unique challenges with the rise of new technologies. Experts continue to debate how social media has impacted democratic discourse, pointing to how algorithmic recommendations, influence operations, and cultural changes in norms of communication alter the way people consume information. Meanwhile, developments in artificial intelligence (AI) surface new concerns over how the technology might affect voters’ decision-making process. Already, we have seen its increased use in relation to political campaigning. 

In the run-up to Pakistan’s 2024 general elections, former Prime Minister Imran Khan used an artificially generated speech to campaign while imprisoned. Meanwhile, in the United States, a private company used an AI-generated imitation of President Biden’s voice to discourage people from voting. In response, the Federal Communications Commission outlawed the use of AI-generated robocalls.

Evolving technologies present new threats. Disinformation, misinformation, and propaganda are all different faces of the same problem: Our information environment—the ecosystem in which we disseminate, create, receive, and process information—is not secure, and we lack coherent goals to direct policy actions. Formulating short-term, reactive policy to counter or mitigate the effects of disinformation or propaganda can only bring us so far. Beyond defending democracies from unending threats, we should also be looking at what it will take to strengthen them. This raises the question: How do we work toward building secure and resilient information ecosystems? How can policymakers and democratic governments identify policy areas that require further improvement and shape their actions accordingly?…(More)”.

Power and Governance in the Age of AI


Reflections by several experts: “The best way to think about ChatGPT is as the functional equivalent of expensive private education and tutoring. Yes, there is a free version, but there is also a paid subscription that gets you access to the latest breakthroughs and a more powerful version of the model. More money gets you more power and privileged access. As a result, in my courses at Middlebury College this spring, I was obliged to include the following statement in my syllabus:

“Policy on the use of ChatGPT: You may all use the free version however you like and are encouraged to do so. For purposes of equity, use of the subscription version is forbidden and will be considered a violation of the Honor Code. Your professor has both versions and knows the difference. To ensure you are learning as much as possible from the course readings, careful citation will be mandatory in both your informal and formal writing.”

The United States fails to live up to its founding values when it supports a luxury brand-driven approach to educating its future leaders that is accessible to the privileged and a few select lottery winners. One such “winning ticket” student in my class this spring argued that the quality-education-for-all issue was of such importance for the future of freedom that he would trade his individual good fortune at winning an education at Middlebury College for the elimination of ALL elite education in the United States so that quality education could be a right rather than a privilege.

A democracy cannot function if the entire game seems to be rigged and bought by elites. This is true for the United States and for democracies in the making or under challenge around the world. Consequently, in partnership with other liberal democracies, the U.S. government must do whatever it can to render both public and private governance more transparent and accountable. We should not expect authoritarian states to help us uphold liberal democratic values, nor should we expect corporations to do so voluntarily…(More)”.

Limiting Data Broker Sales in the Name of U.S. National Security: Questions on Substance and Messaging


Article by Peter Swire and Samm Sacks: “A new executive order issued today contains multiple provisions, most notably limiting bulk sales of personal data to “countries of concern.” The order has admirable national security goals but quite possibly would be ineffective and may be counterproductive. There are serious questions about both the substance and the messaging of the order. 

The new order combines two attractive targets for policy action. First, in this era of bipartisan concern about China, the new order would regulate transactions specifically with “countries of concern,” notably China, but also others such as Iran and North Korea. A key rationale for the order is to prevent China from amassing sensitive information about Americans, for use in tracking and potentially manipulating military personnel, government officials, or anyone else of interest to the Chinese regime. 

Second, the order targets bulk sales, to countries of concern, of sensitive personal information by data brokers, such as genomic, biometric, and precise geolocation data. The large and growing data broker industry has come under well-deserved bipartisan scrutiny for privacy risks. Congress has held hearings and considered bills to regulate such brokers. California has created a data broker registry and last fall passed the Delete Act to enable individuals to require deletion of their personal data. In January, the Federal Trade Commission issued an order prohibiting data broker Outlogic from sharing or selling sensitive geolocation data, finding that the company had acted without customer consent, in an unfair and deceptive manner. In light of these bipartisan concerns, a new order targeting both China and data brokers has a nearly irresistible political logic.

Accurate assessment of the new order, however, requires an understanding of this order as part of a much bigger departure from the traditional U.S. support for free and open flows of data across borders. Recently, in part for national security reasons, the U.S. has withdrawn its traditional support in the World Trade Organization (WTO) for free and open data flows, and the Department of Commerce has announced a proposed rule, in the name of national security, that would regulate U.S.-based cloud providers when selling to foreign countries, including for purposes of training artificial intelligence (AI) models. We are concerned that these initiatives may not sufficiently account for the national security advantages of the long-standing U.S. position and may have negative effects on the U.S. economy.

Despite the attractiveness of the regulatory targets—data brokers and countries of concern—U.S. policymakers should be cautious as they implement this order and the other current policy changes. As discussed below, there are some possible privacy advances as data brokers have to become more careful in their sales of data, but a better path would be to ensure broader privacy and cybersecurity safeguards to better protect data and critical infrastructure systems from sophisticated cyberattacks from China and elsewhere…(More)”.

Once upon a bureaucrat: Exploring the role of stories in government


Article by Thea Snow: “When you think of a profession associated with stories, what comes to mind? Journalist, perhaps? Or author? Maybe, at a stretch, you might think of a filmmaker. But I would hazard a guess that “public servant” is unlikely to be among the first professions that come to mind. However, recent research suggests that we should be thinking more deeply about the connections between stories and government.

Since 2021, the Centre for Public Impact, in partnership with Dusseldorp Forum and Hands Up Mallee, has been exploring the role of storytelling in the context of place-based systems change work. Our first report, Storytelling for Systems Change: Insights from the Field, focused on the way communities use stories to support place-based change. Our second report, Storytelling for Systems Change: Listening to Understand, focused more on how stories are perceived and used by those in government who are funding and supporting community-led systems change initiatives.

To shape these reports, we have spent the past few years speaking to community members, collective impact backbone teams, storytelling experts, academics, public servants, data analysts, and more. Here’s some of what we’ve heard…(More)”.