Experts: 90% of Online Content Will Be AI-Generated by 2026


Article by Maggie Harrison: “Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.

“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”

“In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.”…

The report focused pretty heavily on disinformation, notably that driven by deepfake technology. But that 90 percent figure raises other questions, too — what do AI systems like DALL-E and GPT-3 mean for artists, writers, and other content-generating creators? And circling back to disinformation once more, what will the dissemination of information, not to mention the consumption of it, actually look like in an era driven by that degree of AI-generated digital stuff?…(More)”.

Unlocking the value of supply chain data across industries


MIT Technology Review Insights: “The product shortages and supply-chain delays of the global covid-19 pandemic are still fresh memories. Consumers and industry are concerned that the next geopolitical or climate event may have a similar impact. Against a backdrop of evolving regulations, these conditions mean manufacturers want to be prepared for short supplies, concerned customers, and weakened margins.

For supply chain professionals, achieving a “phygital” information flow—the blending of physical and digital data—is key to unlocking resilience and efficiency. As physical objects travel through supply chains, they generate a rich flow of data about the item and its journey—from its raw materials and manufacturing conditions to its expiration date—bringing new visibility and pinpointing bottlenecks.

This phygital information flow offers significant advantages, from enhancing the ability to create rich customer experiences to satisfying environmental, social, and corporate governance (ESG) goals. In a 2022 EY global survey of executives, 70% of respondents agreed that a sustainable supply chain will increase their company’s revenue.

For disparate parties to exchange product information effectively, they require a common framework and universally understood language. Among supply chain players, data standards create a shared foundation. Standards help uniquely identify, accurately capture, and automatically share critical information about products, locations, and assets across trading communities…(More)”.
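To make the “common framework” the excerpt describes concrete, here is a small sketch of a standards-based event record. It is loosely modeled on GS1’s EPCIS-style object events; the field names, identifiers, and values are hypothetical illustrations, not an exact rendering of the standard.

```python
# A hypothetical, EPCIS-style supply chain event record (field names are
# illustrative, not an exact rendering of the GS1 EPCIS schema).
from datetime import datetime, timezone

def make_object_event(epc: str, biz_step: str, location_gln: str) -> dict:
    """Build a minimal event describing one observation of a tagged item."""
    return {
        "eventType": "ObjectEvent",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "epcList": [epc],                   # unique identity of the product
        "bizStep": biz_step,                # e.g. shipping, receiving
        "readPoint": {"gln": location_gln}  # where the event was captured
    }

# Each trading partner emits the same structure, so events can be
# shared and joined across the whole chain.
event = make_object_event(
    epc="urn:epc:id:sgtin:0614141.107346.2018",  # illustrative SGTIN
    biz_step="shipping",
    location_gln="0614141000005",
)
print(event)
```

Because every party uses the same identifiers and structure, a receiver can match this event against its own records without bilateral data-mapping agreements.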

Digital Empires: The Global Battle to Regulate Technology


Book by Anu Bradford: “The global battle among the three dominant digital powers—the United States, China, and the European Union—is intensifying. All three regimes are racing to regulate tech companies, with each advancing a competing vision for the digital economy while attempting to expand its sphere of influence in the digital world. In Digital Empires, her provocative follow-up to The Brussels Effect, Anu Bradford explores a rivalry that will shape the world in the decades to come.

Across the globe, people dependent on digital technologies have become increasingly alarmed that their rapid adoption and transformation have ushered in an exceedingly concentrated economy where a few powerful companies control vast economic wealth and political power, undermine data privacy, and widen the gap between economic winners and losers. In response, world leaders are variously embracing the idea of reining in the most dominant tech companies. Bradford examines three competing regulatory approaches—the American market-driven model, the Chinese state-driven model, and the European rights-driven regulatory model—and discusses how governments and tech companies navigate the inevitable conflicts that arise when these regulatory approaches collide in the international domain. Which digital empire will prevail in the contest for global influence remains an open question, yet their contrasting strategies are increasingly clear.

Digital societies are at an inflection point. In the midst of these unfolding regulatory battles, governments, tech companies, and digital citizens are making important choices that will shape the future ethos of the digital society. Digital Empires lays bare the choices we face as societies and individuals, explains the forces that shape those choices, and illuminates the immense stakes involved for everyone who uses digital technologies…(More)”.

AI and new standards promise to make scientific data more useful by making it reusable and accessible


Article by Bradley Wade Bishop: “…AI makes it highly desirable for any data to be machine-actionable – that is, usable by machines without human intervention. Now, scholars can consider machines not only as tools but also as potential autonomous data reusers and collaborators.

The key to machine-actionable data is metadata. Metadata are the descriptions scientists set for their data and may include elements such as creator, date, coverage and subject. Minimal metadata is minimally useful, but correct and complete standardized metadata makes data more useful for both people and machines.
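As a rough illustration of the gap between minimal and complete metadata, consider the sketch below. The field set loosely follows the elements named above (creator, date, coverage, subject), and the required-field rule is an assumption for demonstration, not a published standard.

```python
# A minimal sketch contrasting minimal vs. standardized, complete metadata.
# The required-field list is an illustrative assumption, not a formal standard.
REQUIRED_FIELDS = {"creator", "date", "coverage", "subject", "license", "identifier"}

minimal_record = {
    "title": "Ocean temperature readings",
}

complete_record = {
    "title": "Ocean temperature readings, North Atlantic, 2020-2023",
    "creator": "Example Oceanographic Lab",
    "date": "2023-12-01",
    "coverage": "North Atlantic, 40-60N",
    "subject": "sea surface temperature",
    "license": "CC-BY-4.0",
    "identifier": "doi:10.9999/example",  # hypothetical DOI
}

def machine_actionable(record: dict) -> bool:
    """A record is usable without human help only if the standard fields exist."""
    return REQUIRED_FIELDS.issubset(record)

print(machine_actionable(minimal_record))   # False: a machine can't interpret it
print(machine_actionable(complete_record))  # True: ready for automated reuse
```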

It takes a cadre of research data managers and librarians to make machine-actionable data a reality. These information professionals work to facilitate communication between scientists and systems by ensuring the quality, completeness and consistency of shared data.

The FAIR data principles, created by a group of researchers called FORCE11 in 2016 and used across the world, provide guidance on how to enable data reuse by machines and humans. FAIR data is findable, accessible, interoperable and reusable – meaning it has robust and complete metadata.

In the past, I’ve studied how scientists discover and reuse data. I found that scientists tend to use mental shortcuts when they’re looking for data – for example, they may go back to familiar and trusted sources or search for certain key terms they’ve used before. Ideally, my team could build this expert decision-making process into AI while removing as many biases as possible. The automation of these mental shortcuts should reduce the time-consuming chore of locating the right data…(More)”.
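One way to picture the automation described here is a ranking function that scores candidate datasets on query relevance and metadata quality while deliberately ignoring the searcher’s familiarity with a source. Everything in the sketch below (fields, weights, scores) is a hypothetical illustration, not the author’s method.

```python
# Hypothetical sketch: rank datasets by query relevance and metadata quality,
# deliberately ignoring "familiarity" (the human shortcut described above).
def score(dataset: dict, query_terms: set[str]) -> float:
    overlap = len(query_terms & set(dataset["keywords"]))
    relevance = overlap / max(len(query_terms), 1)
    quality = dataset["metadata_completeness"]  # 0.0-1.0, assumed precomputed
    return 0.7 * relevance + 0.3 * quality      # weights are arbitrary

datasets = [
    {"name": "familiar_source", "keywords": ["temperature"],
     "metadata_completeness": 0.4},
    {"name": "unfamiliar_source", "keywords": ["temperature", "ocean"],
     "metadata_completeness": 0.9},
]

query = {"ocean", "temperature"}
for d in sorted(datasets, key=lambda d: score(d, query), reverse=True):
    print(d["name"], round(score(d, query), 2))
# The unfamiliar but better-described dataset ranks first (0.97 vs. 0.47).
```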

City/Science Intersections: A Scoping Review of Science for Policy in Urban Contexts


Paper by Gabriela Manrique Rueda et al: “Science is essential for cities to understand and act on mounting global risks. However, challenges in effectively using scientific knowledge in decision-making limit cities’ ability to address these risks. This scoping review examines the development of science for urban policy, exploring the contextual factors, organizational structures, and mechanisms that facilitate or hinder the integration of science and policy. It investigates the challenges faced and the outcomes achieved. The findings reveal that science has gained influence in United Nations (UN) policy discourses, leading to the expansion of international, regional, and national networks connecting science and policy. Boundary-spanning organizations and collaborative research initiatives with stakeholders have emerged, creating platforms for dialogue, knowledge sharing, and experimentation. However, cultural differences between the science and policy realms impede the effective use of scientific knowledge in decision-making. While efforts are being made to develop methods and tools for knowledge co-production, translation, and mobilization, more attention is needed to establish science-for-policy organizational structures and to address power imbalances in research processes that give rise to ethical challenges…(More)”.

Toward a 21st Century National Data Infrastructure: Enhancing Survey Programs by Using Multiple Data Sources


Report by National Academies of Sciences, Engineering, and Medicine: “Much of the statistical information currently produced by federal statistical agencies – information about economic, social, and physical well-being that is essential for the functioning of modern society – comes from sample surveys. In recent years, there has been a proliferation of data from other sources, including data collected by government agencies while administering programs, satellite and sensor data, private-sector data such as electronic health records and credit card transaction data, and massive amounts of data available on the internet. How can these data sources be used to enhance the information currently collected on surveys, and to provide new frontiers for producing information and statistics to benefit American society?…(More)”.

How to improve economic forecasting


Article by Nicholas Gruen: “Today’s four-day weather forecasts are as accurate as one-day forecasts were 30 years ago. Economic forecasts, on the other hand, aren’t noticeably better. Former Federal Reserve chair Ben Bernanke should ponder this in his forthcoming review of the Bank of England’s forecasting.

There’s growing evidence that we can improve. But myopia and complacency get in the way. Myopia is an issue because economists think technical expertise is the essence of good forecasting when, actually, two things matter more: forecasters’ understanding of the limits of their expertise and their judgment in handling those limits.

Enter Philip Tetlock, whose 2005 book on geopolitical forecasting showed how little experts added to forecasting done by informed non-experts. To compare forecasts between the two groups, he forced participants to drop their vague weasel words — “probably”, “can’t be ruled out” — and specify exactly what they were forecasting and with what probability. 

That started sorting the sheep from the goats. The simple “point forecasts” provided by economists — such as “growth will be 3.0 per cent” — are doubly unhelpful in this regard. They’re silent about what success looks like. If I have forecast 3.0 per cent growth and actual growth comes in at 3.2 per cent — did I succeed or fail? Such predictions also don’t tell us how confident the forecaster is.

By contrast, “a 70 per cent chance of rain” specifies a clear event with a precise estimation of the weather forecaster’s confidence. Having rigorously specified the rules of the game, Tetlock has since shown how what he calls “superforecasting” is possible and how diverse teams of superforecasters do even better. 
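Scoring of this kind is possible precisely because the forecast pairs a well-defined event with a probability. The standard scoring rule, and the one used in Tetlock’s forecasting tournaments, is the Brier score: the mean squared gap between the stated probability and the actual outcome. The sketch below uses made-up forecasts.

```python
# Brier score: mean squared error between forecast probability and outcome.
# Lower is better; 0.0 is a perfect record, and always saying 50% earns 0.25.
def brier(forecasts: list[tuple[float, int]]) -> float:
    """forecasts: (stated probability, outcome) pairs; outcome is 1 if the event occurred."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

weasel_free = [
    (0.70, 1),  # "70% chance of rain" ... and it rained
    (0.70, 0),  # same forecast, no rain
    (0.90, 1),  # high confidence, correct
]
print(round(brier(weasel_free), 3))  # 0.197

# A point forecast like "growth will be 3.0 per cent" cannot be scored this
# way: it names no event threshold and no probability.
```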

What qualities does Tetlock see in superforecasters? As well as mastering necessary formal techniques, they’re open-minded, careful, curious and self-critical — in other words, they’re not complacent. Aware, like Socrates, of how little they know, they’re constantly seeking to learn — from unfolding events and from colleagues…(More)”.

How Will the State Think With the Assistance of ChatGPT? The Case of Customs as an Example of Generative Artificial Intelligence in Public Administrations


Paper by Thomas Cantens: “…discusses the implications of Generative Artificial Intelligence (GAI) in public administrations and the specific questions it raises compared to specialized and “numerical” AI, based on the example of Customs and the experience of the World Customs Organization in the field of AI and data strategy implementation in Member countries.

At the organizational level, the advantages of GAI include cost reduction through internalization of tasks, uniformity and correctness of administrative language, access to broad knowledge, and potential paradigm shifts in fraud detection. At this level, the paper highlights three facts that distinguish GAI from specialized AI: i) so far, GAI has been less closely tied to decision-making processes in public administrations than specialized AI; ii) the risks usually associated with GAI are often similar to those previously associated with specialized AI, but, while certain risks remain pertinent, others lose significance due to the constraints imposed by the inherent limitations of GAI technology itself when implemented in public administrations; iii) the training data corpus for GAI becomes a strategic asset for public administrations, perhaps more than the algorithms themselves, which was not the case for specialized AI.

At the individual level, the paper emphasizes the “language-centric” nature of GAI in contrast to the “number-centric” AI systems implemented within public administrations until now. By exploring the transformative impact of GAI on the intellectual production of the State, it discusses the risks that civil servants will be replaced by, or become subservient to, machines. The paper argues for critical vigilance and critical thinking as specific skills for highly specialized civil servants who will have to think with the assistance of a machine that is eclectic by nature…(More)”.

Valuing Data: The Role of Satellite Data in Halting the Transmission of Polio in Nigeria


Article by Mariel Borowitz, Janet Zhou, Krystal Azelton & Isabelle-Yara Nassar: “There are more than 1,000 satellites in orbit right now collecting data about what’s happening on the Earth. These include government and commercial satellites that can improve our understanding of climate change; monitor droughts, floods, and forest fires; examine global agricultural output; identify productive locations for fishing or mining; and many other purposes. We know the data provided by these satellites is important, yet it is very difficult to determine the exact value that each of these systems provides. However, with only a vague sense of “value,” it is hard for policymakers to ensure they are making the right investments in Earth observing satellites.

NASA’s Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES), carried out in collaboration with Resources for the Future, aimed to address this by analyzing specific use cases of satellite data to determine their monetary value. VALUABLES proposed a “value of information” approach focusing on cases in which satellite data informed a specific decision. Researchers could then compare the outcome of that decision with what would have occurred if no satellite data had been available. Our project, which was funded under the VALUABLES program, examined how satellite data contributed to efforts to halt the transmission of polio in Nigeria…(More)”.
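The “value of information” approach reduces to counterfactual arithmetic: compare the expected cost of the decision actually made with satellite data against the decision that would have been made without it. The numbers in the sketch below are invented solely to show the calculation; they are not the study’s figures.

```python
# Value-of-information sketch with invented numbers: the value of the
# satellite data is the cost avoided relative to the no-data counterfactual.
def expected_cost(p_outbreak: float, cost_if_outbreak: float,
                  intervention_cost: float, intervene: bool) -> float:
    if intervene:
        return intervention_cost          # targeted campaign stops spread
    return p_outbreak * cost_if_outbreak  # gamble on no outbreak

P_OUTBREAK = 0.3            # hypothetical risk in an under-surveyed district
OUTBREAK_COST = 50_000_000  # hypothetical cost of an outbreak
CAMPAIGN_COST = 5_000_000   # hypothetical cost of a vaccination campaign

# With satellite data, the settlement is detected and the campaign targeted.
cost_with_data = expected_cost(P_OUTBREAK, OUTBREAK_COST, CAMPAIGN_COST, intervene=True)
# Without it, the settlement is missed and no intervention happens there.
cost_without_data = expected_cost(P_OUTBREAK, OUTBREAK_COST, CAMPAIGN_COST, intervene=False)

value_of_information = cost_without_data - cost_with_data
print(f"${value_of_information:,.0f}")  # $10,000,000 under these assumptions
```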

Promoting Sustainable Data Use in State Programs


Toolkit by Chapin Hall: “…helps public sector agencies build the culture and infrastructure to apply data analysis routinely, effectively, and accurately—what we call “sustainable data use.” It is meant to serve as a hands-on resource, containing strategies and tools for agencies seeking to grow their analytic capacity.

Administrative data can be a rich source of information for human services agencies seeking to improve programs. But too often, data use in state agencies is temporary, dependent on funds and training from short-term resources such as pilot projects and grants. How can agencies instead move from data to knowledge to action routinely, creating a reinforcing cycle of evidence-building and program improvement?

Chapin Hall experts and experts at partner organizations set out to determine who achieves sustainable data use and how they go about doing so. Building on previous work and the results of a literature review, we identified domains that can significantly influence an agency’s ability to establish sustainable data practices. We then focused on eight state TANF agencies and three partner organizations with demonstrated successes in one or more of these domains, and we interviewed staff who work directly with data to learn more about what strategies they used to achieve success. We focused on what worked rather than what didn’t. From those interviews, we identified common themes, developed case studies, and generated tools to help agencies develop sustainable data practices…(More)”.