Unlocking the value of supply chain data across industries


MIT Technology Review Insights: “The product shortages and supply-chain delays of the global covid-19 pandemic are still fresh memories. Consumers and industry are concerned that the next geopolitical or climate event may have a similar impact. Against a backdrop of evolving regulations, these conditions mean manufacturers want to be prepared for short supplies, concerned customers, and weakened margins.

For supply chain professionals, achieving a “phygital” information flow—the blending of physical and digital data—is key to unlocking resilience and efficiency. As physical objects travel through supply chains, they generate a rich flow of data about each item and its journey—from its raw materials, its manufacturing conditions, even its expiration date—bringing new visibility and pinpointing bottlenecks.

This phygital information flow offers significant advantages, from enhancing the ability to create rich customer experiences to satisfying environmental, social, and corporate governance (ESG) goals. In a 2022 EY global survey of executives, 70% of respondents agreed that a sustainable supply chain will increase their company’s revenue.

For disparate parties to exchange product information effectively, they require a common framework and universally understood language. Among supply chain players, data standards create a shared foundation. Standards help uniquely identify, accurately capture, and automatically share critical information about products, locations, and assets across trading communities…(More)”.
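
To make the idea of a shared identification standard concrete, here is a minimal sketch of the GS1 mod-10 check-digit rule that underpins GTIN product identifiers (an illustrative fragment only; real systems rely on the full GS1 specifications for identifying products, locations, and assets):

```python
# Illustrative sketch: the GS1 mod-10 check digit shared by GTIN (products),
# GLN (locations), and SSCC (logistics units) identifiers. This is only the
# check-digit rule, not the full GS1 standard.

def gs1_check_digit(digits: str) -> int:
    """Compute the check digit for the body of a GS1 key (all digits except the last)."""
    total = 0
    for position, digit in enumerate(reversed(digits)):
        weight = 3 if position % 2 == 0 else 1  # rightmost data digit gets weight 3
        total += int(digit) * weight
    return (10 - total % 10) % 10

body = "400638133393"         # first 12 digits of a GTIN-13
print(gs1_check_digit(body))  # -> 1, so the full identifier is 4006381333931
```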

AI and new standards promise to make scientific data more useful by making it reusable and accessible


Article by Bradley Wade Bishop: “…AI makes it highly desirable for any data to be machine-actionable – that is, usable by machines without human intervention. Now, scholars can consider machines not only as tools but also as potential autonomous data reusers and collaborators.

The key to machine-actionable data is metadata. Metadata are the descriptions scientists set for their data and may include elements such as creator, date, coverage and subject. Minimal metadata is minimally useful, but correct and complete standardized metadata makes data more useful for both people and machines.
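
As a minimal, hypothetical sketch of what such standardized metadata can look like (field names loosely follow Dublin Core-style elements; no particular repository schema is implied, and all values are invented):

```python
# Hypothetical, minimal metadata record for a shared dataset.
# Complete, standardized fields like these are what let both people and machines
# find, interpret, and reuse the data without going back to its creator.
dataset_metadata = {
    "title": "Stream temperature observations, 2018-2022",
    "creator": "Example Hydrology Lab",
    "date": "2023-05-01",
    "coverage": "Tennessee River basin, USA",
    "subject": ["hydrology", "water temperature"],
    "format": "CSV",
    "license": "CC-BY-4.0",
    "identifier": "doi:10.0000/example",  # placeholder identifier, not a real DOI
}
```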

It takes a cadre of research data managers and librarians to make machine-actionable data a reality. These information professionals work to facilitate communication between scientists and systems by ensuring the quality, completeness and consistency of shared data.

The FAIR data principles, created by a group of researchers called FORCE11 in 2016 and used across the world, provide guidance on how to enable data reuse by machines and humans. FAIR data is findable, accessible, interoperable and reusable – meaning it has robust and complete metadata.

In the past, I’ve studied how scientists discover and reuse data. I found that scientists tend to use mental shortcuts when they’re looking for data – for example, they may go back to familiar and trusted sources or search for certain key terms they’ve used before. Ideally, my team could build this expert decision-making process into AI and remove as many biases as possible. The automation of these mental shortcuts should reduce the time-consuming chore of locating the right data…(More)”.

Toward a 21st Century National Data Infrastructure: Enhancing Survey Programs by Using Multiple Data Sources


Report by National Academies of Sciences, Engineering, and Medicine: “Much of the statistical information currently produced by federal statistical agencies – information about economic, social, and physical well-being that is essential for the functioning of modern society – comes from sample surveys. In recent years, there has been a proliferation of data from other sources, including data collected by government agencies while administering programs, satellite and sensor data, private-sector data such as electronic health records and credit card transaction data, and massive amounts of data available on the internet. How can these data sources be used to enhance the information currently collected on surveys, and to provide new frontiers for producing information and statistics to benefit American society?…(More)”.

How Will the State Think With the Assistance of ChatGPT? The Case of Customs as an Example of Generative Artificial Intelligence in Public Administrations


Paper by Thomas Cantens: “…discusses the implications of Generative Artificial Intelligence (GAI) in public administrations and the specific questions it raises compared to specialized and “numerical” AI, based on the example of Customs and the experience of the World Customs Organization in the field of AI and data strategy implementation in Member countries.

At the organizational level, the advantages of GAI include cost reduction through internalization of tasks, uniformity and correctness of administrative language, access to broad knowledge, and potential paradigm shifts in fraud detection. At this level, the paper highlights three facts that distinguish GAI from specialized AI: i) GAI has so far been less associated with decision-making processes in public administrations than specialized AI; ii) the risks usually associated with GAI are often similar to those previously associated with specialized AI, but, while certain risks remain pertinent, others lose significance due to the constraints imposed by the inherent limitations of GAI technology itself when implemented in public administrations; iii) the training data corpus for GAI becomes a strategic asset for public administrations, perhaps more than the algorithms themselves, which was not the case for specialized AI.

At the individual level, the paper emphasizes the “language-centric” nature of GAI in contrast to the “number-centric” AI systems implemented within public administrations up until now. It discusses the risks of civil servants being replaced by, or enslaved to, machines by exploring the transformative impact of GAI on the intellectual production of the State. The paper pleads for the development of critical vigilance and critical thinking as specific skills for civil servants, who are highly specialized and will have to think with the assistance of a machine that is eclectic by nature…(More)”.

Valuing Data: The Role of Satellite Data in Halting the Transmission of Polio in Nigeria


Article by Mariel Borowitz, Janet Zhou, Krystal Azelton & Isabelle-Yara Nassar: “There are more than 1,000 satellites in orbit right now collecting data about what’s happening on the Earth. These include government and commercial satellites that can improve our understanding of climate change; monitor droughts, floods, and forest fires; examine global agricultural output; identify productive locations for fishing or mining; and many other purposes. We know the data provided by these satellites is important, yet it is very difficult to determine the exact value that each of these systems provides. However, with only a vague sense of “value,” it is hard for policymakers to ensure they are making the right investments in Earth observing satellites.

NASA’s Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES), carried out in collaboration with Resources for the Future, aimed to address this by analyzing specific use cases of satellite data to determine their monetary value. VALUABLES proposed a “value of information” approach focusing on cases in which satellite data informed a specific decision. Researchers could then compare the outcome of that decision with what would have occurred if no satellite data had been available. Our project, which was funded under the VALUABLES program, examined how satellite data contributed to efforts to halt the transmission of polio in Nigeria…(More)”.
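
A minimal sketch of this value-of-information logic (all numbers and function names here are hypothetical illustrations, not the VALUABLES methodology or the polio case itself): compare the expected outcome of a decision informed by satellite data with the counterfactual decision made without it, and attribute the difference to the data.

```python
# Illustrative value-of-information sketch (hypothetical numbers).
# Value of the data = expected cost without the data - expected cost with it.

def expected_cost(p_outbreak: float, cost_if_missed: float, campaign_cost: float,
                  detection_rate: float) -> float:
    """Expected cost of a response decision given how reliably outbreaks are detected."""
    missed = p_outbreak * (1 - detection_rate) * cost_if_missed
    response = p_outbreak * detection_rate * campaign_cost
    return missed + response

# Assumed inputs: outbreak probability, cost of a missed outbreak, cost of a
# targeted campaign, and detection rates with and without satellite imagery.
without_satellite = expected_cost(p_outbreak=0.3, cost_if_missed=10_000_000,
                                  campaign_cost=1_000_000, detection_rate=0.5)
with_satellite = expected_cost(p_outbreak=0.3, cost_if_missed=10_000_000,
                               campaign_cost=1_000_000, detection_rate=0.9)

value_of_information = without_satellite - with_satellite
print(f"Estimated value of the satellite data: ${value_of_information:,.0f}")
```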

Promoting Sustainable Data Use in State Programs


Toolkit by Chapin Hall: “…helps public sector agencies build the culture and infrastructure to apply data analysis routinely, effectively, and accurately—what we call “sustainable data use.” It is meant to serve as a hands-on resource, containing strategies and tools for agencies seeking to grow their analytic capacity.

Administrative data can be a rich source of information for human services agencies seeking to improve programs. But too often, data use in state agencies is temporary, dependent on funds and training from short-term resources such as pilot projects and grants. How can agencies instead move from data to knowledge to action routinely, creating a reinforcing cycle of evidence-building and program improvement?

Chapin Hall experts and experts at partner organizations set out to determine who achieves sustainable data use and how they go about doing so. Building on previous work and the results of a literature review, we identified domains that can significantly influence an agency’s ability to establish sustainable data practices. We then focused on eight state TANF agencies and three partner organizations with demonstrated successes in one or more of these domains, and we interviewed staff who work directly with data to learn more about what strategies they used to achieve success. We focused on what worked rather than what didn’t. From those interviews, we identified common themes, developed case studies, and generated tools to help agencies develop sustainable data practices…(More)”.

Unleashing possibilities, ignoring risks: Why we need tools to manage AI’s impact on jobs


Article by Katya Klinova and Anton Korinek: “…Predicting the effects of a new technology on labor demand is difficult and involves significant uncertainty. Some would argue that, given the uncertainty, we should let the “invisible hand” of the market decide our technological destiny. But we believe that the difficulty of answering the question “Who is going to benefit and who is going to lose out?” should not serve as an excuse for never posing the question in the first place. As we emphasized, the incentives for cutting labor costs are artificially inflated. Moreover, the invisible hand theorem does not hold for technological change. Therefore, a failure to investigate the distribution of benefits and costs of AI risks inviting a future with too many “so-so” uses of AI—uses that concentrate gains while distributing the costs. Although predictions about the downstream impacts of AI systems will always involve some uncertainty, they are nonetheless useful to spot applications of AI that pose the greatest risks to labor early on and to channel the potential of AI where society needs it the most.

In today’s society, the labor market serves as a primary mechanism for distributing income as well as for providing people with a sense of meaning, community, and purpose. It has been documented that job loss can lead to regional decline, a rise in “deaths of despair,” addiction, and mental health problems. The path that we lay out aims to prevent abrupt job losses or declines in job quality on the national and global scale, providing an additional tool for managing the pace and shape of AI-driven labor market transformation.

Nonetheless, we do not want to rule out the possibility that humanity may eventually be much happier in a world where machines do a lot more economically valuable work. Even with our best efforts to manage the pace and shape of AI labor market disruption through regulation and worker-centric practices, we may still face a future with significantly reduced human labor demand. Should the demand for human labor decrease permanently with the advancement of AI, timely policy responses will be needed to address both the lost incomes and the lost sense of meaning and purpose. In the absence of significant efforts to distribute the gains from advanced AI more broadly, the possible devaluation of human labor would deeply impact income distribution and democratic institutions’ sustainability. While a jobless future is not guaranteed, its mere possibility and the resulting potential societal repercussions demand serious consideration. One promising proposal to consider is to create an insurance policy against a dramatic decrease in the demand for human labor that automatically kicks in if the share of income received by workers declines, for example a “seed” Universal Basic Income that starts at a very small level and remains unchanged if workers continue to prosper but automatically rises if there is large-scale worker displacement…(More)”.
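
A minimal sketch of how such an indexed “seed” payment could be parameterized (the baseline, threshold, and scaling values below are invented assumptions for illustration, not a proposal from the article):

```python
# Illustrative sketch of a "seed" UBI indexed to the labor share of income.
# All parameter values are hypothetical assumptions for illustration only.

SEED_UBI = 50.0              # small baseline monthly payment (assumed)
BASELINE_LABOR_SHARE = 0.60  # reference labor share while workers prosper (assumed)
SCALE = 20_000.0             # payment increase per unit of labor-share decline (assumed)

def indexed_ubi(labor_share: float) -> float:
    """Return the monthly payment given the current labor share of income."""
    if labor_share >= BASELINE_LABOR_SHARE:
        return SEED_UBI  # remains unchanged while workers continue to prosper
    shortfall = BASELINE_LABOR_SHARE - labor_share
    return round(SEED_UBI + SCALE * shortfall, 2)  # rises with large-scale displacement

print(indexed_ubi(0.62))  # 50.0  (labor share at or above baseline)
print(indexed_ubi(0.50))  # 2050.0 (labor share has fallen sharply)
```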

Informing the Global Data Future: Benchmarking Data Governance Frameworks


Paper by Sara Marcucci, Natalia González Alarcón, Stefaan G. Verhulst and Elena Wüllhorst: “Data has become a critical trans-national and cross-border resource. Yet, the lack of a well-defined approach to using it poses challenges to harnessing its value. This article explores the increasing importance of global data governance due to the rapid growth of data, and the need for responsible data practices. The purpose of this paper is to compare approaches and identify patterns in the emergent data governance ecosystem within sectors close to the international development field, ultimately presenting key takeaways and reflections on when and why a global data governance framework may be needed. Overall, the paper provides information about the conditions when a more holistic, coordinated transnational approach to data governance may be needed to responsibly manage the global flow of data. The report does this by (a) considering conditions specified by the literature that may be conducive to global data governance, and (b) analyzing and comparing existing frameworks, specifically investigating six key elements: purpose, principles, anchoring documents, data description and lifecycle, processes, and practices. The article closes with a series of final recommendations, which include adopting a broader concept of data stewardship to reconcile data protection and promotion, focusing on responsible reuse of data to unlock socioeconomic value, harmonizing meanings to operationalize principles, incorporating global human rights frameworks to provide common North Stars, unifying key definitions of data, adopting a data lifecycle approach, incorporating participatory processes and collective agency, investing in new professions with specific roles, improving accountability through oversight and compliance mechanisms, and translating recommendations into practical tools…(More)”

The Age of Prediction: Algorithms, AI, and the Shifting Shadows of Risk


Book by Igor Tulchinsky and Christopher E. Mason: “… about two powerful, and symbiotic, trends: the rapid development and use of artificial intelligence and big data to enhance prediction, as well as the often paradoxical effects of these better predictions on our understanding of risk and the ways we live. Beginning with dramatic advances in quantitative investing and precision medicine, this book explores how predictive technology is quietly reshaping our world in fundamental ways, from crime fighting and warfare to monitoring individual health and elections.

As prediction grows more robust, it also alters the nature of the accompanying risk, setting up unintended and unexpected consequences. The Age of Prediction details how predictive certainties can bring about complacency or even an increase in risks—genomic analysis might lead to unhealthier lifestyles or a GPS might encourage less attentive driving. With greater predictability also comes a degree of mystery, and the authors ask how narrower risks might affect markets, insurance, or risk tolerance generally. Can we ever reduce risk to zero? Should we even try? This book lays an intriguing groundwork for answering these fundamental questions and maps out the latest tools and technologies that power these projections into the future, sometimes using novel, cross-disciplinary tools to map out cancer growth, people’s medical risks, and stock dynamics…(More)”.

Do People Like Algorithms? A Research Strategy


Paper by Cass R. Sunstein and Lucia Reisch: “Do people like algorithms? In this study, intended as a promissory note and a description of a research strategy, we offer the following highly preliminary findings. (1) In a simple choice between a human being and an algorithm, across diverse settings and without information about the human being or the algorithm, people in our tested groups are about equally divided in their preference. (2) When people are given a very brief account of the data on which an algorithm relies, there is a large shift in favor of the algorithm over the human being. (3) When people are given a very brief account of the experience of the relevant human being, without an account of the data on which the relevant algorithm relies, there is a moderate shift in favor of the human being. (4) When people are given both (a) a very brief account of the experience of the relevant human being and (b) a very brief account of the data on which the relevant algorithm relies, there is a large shift in favor of the algorithm over the human being. One lesson is that in the tested groups, at least one-third of people seem to have a clear preference for either a human being or an algorithm – a preference that is unaffected by brief information that seems to favor one or the other. Another lesson is that a brief account of the data on which an algorithm relies does have a significant effect on a large percentage of the tested groups, whether or not people are also given positive information about the human alternative. Across the various surveys, we do not find persistent demographic differences, with one exception: men appear to like algorithms more than women do. These initial findings are meant as proof of concept, or more accurately as a suggestion of concept, intended to inform a series of larger and more systematic studies of whether and when people prefer to rely on algorithms or human beings, and also of international and demographic differences…(More)”.