The Future Data Economy


Report by IE University’s Center for the Governance of Change: “…summarizes the ideas and recommendations of a year of research into the possibilities of creating a data economy that is fair, competitive and secure, carried out together with experts in the field such as Andrea Renda and Stefaan Verhulst.”

According to the report, the data economy represents “a fundamental reconfiguration of how value is generated, exchanged, and understood in our world today” but it remains deeply misunderstood:

  • The authors argue that data’s particular characteristics make it different from other commodities and therefore more difficult to regulate.
  • Optimizing data flows defies the sort of one-size-fits-all solutions that policymakers tend to search for in other domains, requiring instead a more nuanced, case-by-case approach. 
  • Policymakers need to strike a delicate balance between making data sufficiently accessible to foster innovation, competition, and economic growth, while regulating its access and use to protect privacy, security, and consumer rights.

The report identifies additional overarching principles that lay the groundwork for a more coherent regulatory framework and a more robust social contract in the future data economy:

  • A paradigm shift towards greater collaboration on all fronts to address the challenges and harness the opportunities of the data economy.
  • Greater data literacy at all levels of society to make better decisions, manage risks more effectively, and harness the potential of data responsibly.
  • Regaining social trust, not only a moral imperative but also a prerequisite for the long-term sustainability and viability of data governance models.

To realize this vision, the report advances 15 specific recommendations for policymakers, including:

  • Enshrining people’s digital rights through robust regulatory measures that empower them with genuine control over their digital experiences.
  • Investing in data stewards to increase companies’ ability to recognize opportunities for collaboration and respond to external data requests. 
  • Designing liability frameworks to properly identify responsibility in cases of data misuse…(More)”

Technological Progress and Rent Seeking


Paper by Vincent Glode & Guillermo Ordoñez: “We model firms’ allocation of resources across surplus-creating (i.e., productive) and surplus-appropriating (i.e., rent-seeking) activities. Our model predicts that industry-wide technological advancements, such as recent progress in data collection and processing, induce a disproportionate and socially inefficient reallocation of resources toward surplus-appropriating activities. As technology improves, firms rely more on appropriation to obtain their profits, endogenously reducing the impact of technological progress on economic progress and inflating the price of the resources used for both types of activities. We apply our theoretical insights to shed light on the rise of high-frequency trading…(More)”.

The impact of generative artificial intelligence on socioeconomic inequalities and policy making


Paper by Valerio Capraro et al.: “Generative artificial intelligence, including chatbots like ChatGPT, has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the probable impacts of generative AI on four critical domains: work, education, health, and information. Our goal is to warn about how generative AI could worsen existing inequalities while illuminating directions for using AI to resolve pervasive social problems. Generative AI in the workplace can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning but may widen the digital divide. In healthcare, it improves diagnostics and accessibility but could deepen pre-existing inequalities. For information, it democratizes content creation and access but also dramatically expands the production and proliferation of misinformation. Each section covers a specific topic, evaluates existing research, identifies critical gaps, and recommends research directions. We conclude with a section highlighting the role of policymaking to maximize generative AI’s potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We contend that these policies should promote shared prosperity through the advancement of generative AI. We suggest several concrete policies to encourage further research and debate. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI…(More)”.

Data-Driven Innovation in the Creative Industries


Open Access Book edited by Melissa Terras, Vikki Jones, Nicola Osborne, and Chris Speed: “The creative industries – the place where art, business, and technology meet in economic activity – have been hugely affected by the relatively recent digitalisation (and often monetisation) of work, home, relationships, and leisure. Such trends were accelerated by the global COVID-19 pandemic. This edited collection examines how the creative industries can be supported to make best use of opportunities in digital technology and data-driven innovation.

Since digital markets and platforms are now essential for revenue generation and audience engagement, there is a vital need for improved data and digital skills in the creative and cultural sectors. Taking a necessarily global perspective, this book explores the challenges and opportunities of data-driven approaches to creativity in different contexts across the arts, cultural, and heritage sectors. Chapters reach beyond the platforms and approaches provided by the technology sector to delve into the collaborative work that supports innovation around the interdisciplinary and cross-sectoral issues that emerge where data infrastructures and approaches meet creativity.

A novel intervention that uniquely centres the role of data in the theory and practice of creative industries’ innovation, this book is valuable reading for those researching and studying the creative economy as well as for those who drive investment for the creative industries in a digitalised society…(More)”.

Creating an Integrated System of Data and Statistics on Household Income, Consumption, and Wealth: Time to Build


Report by the National Academies: “Many federal agencies provide data and statistics on inequality and related aspects of household income, consumption, and wealth (ICW). However, because the information provided by these agencies is often produced using different concepts, underlying data, and methods, the resulting estimates of poverty, inequality, mean and median household income, consumption, and wealth, as well as other statistics, do not always tell a consistent or easily interpretable story. Measures also differ in their accuracy, timeliness, and relevance, so that it is difficult to address such questions as the effects of the Great Recession on household finances or of the Covid-19 pandemic and the ensuing relief efforts on household income and consumption. The presence of multiple, sometimes conflicting statistics at best muddies the waters of policy debates and, at worst, enables advocates with different policy perspectives to cherry-pick their preferred set of estimates. Achieving an integrated system of relevant, high-quality, and transparent household ICW data and statistics should go far to reduce disagreement about who has how much, and from what sources. Further, such data are essential to advance research on economic wellbeing and to ensure that policies are well targeted to achieve societal goals…(More)”.

Open Government Products (OGP)


About: “We are an experimental development team that builds technology for the public good. This includes everything from building better apps for citizens to automating the internal operations of public agencies. Our role is to accelerate the digital transformation of the Singapore Government by being a space where it can experiment with new tech practices, including new technologies, management techniques, corporate systems, and even cultural norms. Our end goal is that through our work, Singapore becomes a model of how governments can use technology to improve the public good…(More)”.

Predicting IMF-Supported Programs: A Machine Learning Approach


Paper by Tsendsuren Batsuuri, Shan He, Ruofei Hu, Jonathan Leslie and Flora Lutz: “This study applies state-of-the-art machine learning (ML) techniques to forecast IMF-supported programs, analyzes the ML prediction results relative to traditional econometric approaches, explores non-linear relationships among predictors indicative of IMF-supported programs, and evaluates model robustness with regard to different feature sets and time periods. ML models consistently outperform traditional methods in out-of-sample prediction of new IMF-supported arrangements with key predictors that align well with the literature and show consensus across different algorithms. The analysis underscores the importance of incorporating a variety of external, fiscal, real, and financial features as well as institutional factors like membership in regional financing arrangements. The findings also highlight the varying influence of data processing choices such as feature selection, sampling techniques, and missing data imputation on the performance of different ML models and therefore indicate the usefulness of a flexible, algorithm-tailored approach. Additionally, the results reveal that models that are most effective in near and medium-term predictions may tend to underperform over the long term, thus illustrating the need for regular updates or more stable – albeit potentially near-term suboptimal – models when frequent updates are impractical…(More)”.
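
The excerpt does not include the authors’ code, but the workflow it describes (pooling external, fiscal, real, financial, and institutional features, imputing missing data, and evaluating out-of-sample prediction of new arrangements) can be sketched in a few lines. The snippet below is a minimal illustration rather than the paper’s implementation; the file name, column names, and cutoff year are hypothetical.

```python
# Minimal illustrative sketch (not the paper's code): out-of-sample prediction
# of new IMF-supported arrangements from a country-year panel.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import Pipeline

# Hypothetical input: one row per country-year with macro-financial features
# and a binary flag for the start of a new IMF-supported arrangement.
df = pd.read_csv("imf_panel.csv")
features = ["reserves_to_gdp", "current_account", "fiscal_balance",
            "gdp_growth", "inflation", "external_debt", "rfa_member"]
target = "new_arrangement"

# Time-based split to mimic out-of-sample forecasting.
train, test = df[df["year"] <= 2015], df[df["year"] > 2015]

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),        # missing-data imputation
    ("clf", GradientBoostingClassifier(random_state=0)),  # one of several possible ML algorithms
])
model.fit(train[features], train[target])

probs = model.predict_proba(test[features])[:, 1]
print("Out-of-sample AUC:", round(roc_auc_score(test[target], probs), 3))
```

A time-based split mirrors the out-of-sample setting described above; in practice one would compare several algorithms, feature sets, and imputation strategies, which is exactly the sensitivity the paper highlights.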

Monitoring global trade using data on vessel traffic


Article by Graham Pilgrim, Emmanuelle Guidetti and Annabelle Mourougane: “Rising uncertainties and geo-political tensions, together with more complex trade relations, have increased the demand for data and tools to monitor global trade in a timely manner. At the same time, advances in Big Data Analytics and access to a huge quantity of alternative data – outside the realm of official statistics – have opened new avenues to monitor trade. These data can help identify bottlenecks and disruptions in real time but need to be cleaned and validated.

One such alternative data source is the Automatic Identification System (AIS), developed by the International Maritime Organisation, which facilitates the tracking of vessels across the globe. The system includes messages transmitted by ships to land or satellite receivers, available in near real time. While it was primarily designed to ensure vessel safety, this data is particularly well suited for providing insights on trade developments, as over 80% of international merchandise trade by volume is carried by sea (UNCTAD, 2022). Furthermore, AIS data holds granular vessel information and detailed location data, which, combined with other data sources, can enable the identification of activity at a port (or even berth) level, by vessel type or by the jurisdiction of vessel ownership.

For a number of years, the UN Global Platform has made AIS data available to those compiling official statistics, such as National Statistics Offices (NSOs) or International Organisations. This has facilitated the development of new methodologies, for instance the automated identification of port locations (Irish Central Statistics Office, 2022). The data has also been exploited by data scientists and research centres to monitor trade in specific commodities such as Liquefied Natural Gas (QuantCube Technology, 2022) or to analyse port and shipping operations in a specific country (Tsalamanis et al., 2018). Beyond trade, the dataset has been used to track CO2 emissions from the maritime sector (Clarke et al., 2023).

New work from the OECD Statistics and Data Directorate contributes to existing research in this field in two major ways. First, it proposes a new methodology to identify ports, at a higher level of precision than in past research. Second, it builds indicators to monitor port congestion and trends in maritime trade flows and provides a tool to get detailed information and better understand those flows…(More)”.
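
The article does not spell out the port-identification algorithm, but the underlying idea of inferring port activity from granular AIS position and speed data can be illustrated with a toy example: flag messages where a vessel is nearly stationary inside a small box around a known port, then count distinct vessels per port and day. The file, column names, port coordinates, and thresholds below are hypothetical, and real implementations rely on far more careful geometry and clustering of stops.

```python
# Toy illustration (not the OECD methodology): infer rough port activity from
# AIS position reports by flagging near-stationary vessels close to known ports.
import pandas as pd

# Hypothetical input: one row per AIS message.
ais = pd.read_csv("ais_messages.csv")  # columns: mmsi, timestamp, lat, lon, speed_knots

ports = {"Rotterdam": (51.95, 4.05), "Singapore": (1.26, 103.75)}  # toy coordinates
BOX_DEG = 0.15     # crude bounding box around each port (~15 km in latitude)
MAX_SPEED = 1.0    # knots; near-zero speed suggests a vessel at berth or anchor

def nearest_port(lat, lon):
    """Return the port whose bounding box contains the position, if any."""
    for name, (plat, plon) in ports.items():
        if abs(lat - plat) <= BOX_DEG and abs(lon - plon) <= BOX_DEG:
            return name
    return None

ais["port"] = [nearest_port(la, lo) for la, lo in zip(ais["lat"], ais["lon"])]
calls = ais[(ais["port"].notna()) & (ais["speed_knots"] <= MAX_SPEED)].copy()

# Distinct vessels stopped at each port per day: a crude activity/congestion proxy.
calls["date"] = pd.to_datetime(calls["timestamp"]).dt.date
activity = calls.groupby(["port", "date"])["mmsi"].nunique()
print(activity.head())
```

Aggregating such port calls over time yields the kind of congestion and trade-flow indicators the OECD work describes, although its actual methodology identifies ports with much greater precision.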

Data, Privacy Laws and Firm Production: Evidence from the GDPR


Paper by Mert Demirer, Diego J. Jiménez Hernández, Dean Li & Sida Peng: “By regulating how firms collect, store, and use data, privacy laws may change the role of data in production and alter firm demand for information technology inputs. We study how firms respond to privacy laws in the context of the EU’s General Data Protection Regulation (GDPR) by using seven years of data from a large global cloud-computing provider. Our difference-in-differences estimates indicate that, in response to the GDPR, EU firms decreased data storage by 26% and data processing by 15% relative to comparable US firms, becoming less “data-intensive.” To estimate the costs of the GDPR for firms, we propose and estimate a production function where data and computation serve as inputs to the production of “information.” We find that data and computation are strong complements in production and that firm responses are consistent with the GDPR representing a 20% increase in the cost of data on average. Variation in the firm-level effects of the GDPR and industry-level exposure to data, however, drives significant heterogeneity in our estimates of the impact of the GDPR on production costs…(More)”
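
The excerpt does not state the paper’s functional form, so the following is only a generic way of writing down “data and computation as complementary inputs to information”, using a CES aggregator; the notation is illustrative rather than the authors’ specification.

```latex
% Illustrative CES formulation (not necessarily the paper's specification):
% information I is produced from data D and computation C; strong
% complementarity corresponds to an elasticity of substitution \sigma < 1.
I = A\left(\alpha D^{\rho} + (1-\alpha)\,C^{\rho}\right)^{1/\rho},
\qquad \sigma = \frac{1}{1-\rho} < 1,
\qquad p_D^{\text{GDPR}} = (1+\tau)\,p_D,\quad \tau \approx 0.2 .
```

Under a specification like this, a GDPR-style wedge on the effective price of data raises the cost of producing information, and because the inputs are complements, demand for both data and computation falls, which is consistent with the observed declines in storage and processing.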

Applying AI to Rebuild Middle Class Jobs


Paper by David Autor: “While the utopian vision of the current Information Age was that computerization would flatten economic hierarchies by democratizing information, the opposite has occurred. Information, it turns out, is merely an input into a more consequential economic function, decision-making, which is the province of elite experts. The unique opportunity that AI offers to the labor market is to extend the relevance, reach, and value of human expertise. Because of AI’s capacity to weave information and rules with acquired experience to support decision-making, it can be applied to enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks that are currently arrogated to elite experts, e.g., medical care to doctors, document production to lawyers, software coding to computer engineers, and undergraduate education to professors. My thesis is not a forecast but an argument about what is possible: AI, if used well, can assist with restoring the middle-skill, middle-class heart of the US labor market that has been hollowed out by automation and globalization…(More)”.