How a new platform is future-proofing governance for the intelligent age


Article by Kelly Ommundsen: “We are living through one of the most transformative moments in human history. Technologies like artificial intelligence (AI), quantum computing and synthetic biology are accelerating change at a pace few institutions are prepared to manage. Yet while innovation leaps forward, regulation often stands still – constrained by outdated models, fragmented approaches and a reactive mindset…

To address this growing challenge, the World Economic Forum, in collaboration with the UAE’s General Secretariat of the Cabinet, has launched the Global Regulatory Innovation Platform (GRIP).

GRIP is a new initiative designed to foster human-centred, forward-looking and globally coordinated approaches to regulation. Its goal: to build trust, reduce uncertainty and accelerate innovation that serves the public good.

This platform builds on the World Economic Forum’s broader body of work on agile governance. As outlined in the Forum’s 2020 report, Agile Governance: Reimagining Policy-making in the Fourth Industrial Revolution, traditional regulatory approaches – characterized by top-down control and infrequent updates – are increasingly unfit for the pace, scale and complexity of modern technological change…(More)”.

Sudden loss of key US satellite data could send hurricane forecasting back ‘decades’


Article by Eric Holthaus: “A critical US atmospheric data collection program will be halted by Monday, giving weather forecasters just days to prepare, according to a public notice sent this week. Scientists that the Guardian spoke with say the change could set hurricane forecasting back “decades”, just as this year’s season ramps up.

In a National Oceanic and Atmospheric Administration (Noaa) message sent on Wednesday to its scientists, the agency said that “due to recent service changes” the Defense Meteorological Satellite Program (DMSP) will “discontinue ingest, processing and distribution of all DMSP data no later than June 30, 2025”.

Due to their unique characteristics and ability to map the entire world twice a day with extremely high resolution, the three DMSP satellites are a primary source of information for scientists to monitor Arctic sea ice and hurricane development. The DMSP partners with Noaa to make weather data collected from the satellites publicly available.

The reasons for the changes, and which agency was driving them, were not immediately clear. Noaa said they would not affect the quality of forecasting.

However, the Guardian spoke with several scientists inside and outside of the US government whose work depends on the DMSP, and all said there are no other US programs that can form an adequate replacement for its data.

“We’re a bit blind now,” said Allison Wing, a hurricane researcher at Florida State University. Wing said the DMSP satellites are the only ones that let scientists see inside the clouds of developing hurricanes, giving them a critical edge in forecasting that now may be jeopardized.

“Before these types of satellites were present, there would often be situations where you’d wake up in the morning and have a big surprise about what the hurricane looked like,” said Wing. “Given increases in hurricane intensity and increasing prevalence towards rapid intensification in recent years, it’s not a good time to have less information.”…(More)”.

Understanding Technology and Society


Book by Todd L. Pittinsky: “From the early days of navigating the world with bare hands to harnessing tools that transformed stones and sticks, human ingenuity has birthed science and technology. As societies expanded, the complexity of our tools grew, raising a crucial question: do we control them, or do they dictate our fate? The trajectory of science and technology isn’t predetermined; debates and choices shape it. It’s our responsibility to navigate wisely, ensuring technology betters, not worsens, our world. This book explores the complex nature of this relationship, with 18 chapters each posing and discussing a compelling ‘big question.’ Topics discussed include technology’s influence on child development; big data; algorithms; democracy; happiness; the interplay of sex, gender, and science in its development; international development efforts; robot consciousness; and the future of human labor in an automated world. Think critically. Take a stand. With societal acceleration mirroring technological pace, the challenge is: can we keep up?…(More)”.

The Smart City as a Field of Innovation: Effects of Public-Private Data Collaboration on the Innovation Performance of Small and Medium-Sized Enterprises in China


Paper by Xiaohui Jiang and Masaru Yarime: “The Chinese government has been playing an important role in stimulating innovation among Chinese enterprises. Small and medium-sized enterprises (SMEs), with their limited internal resources, face a particularly severe challenge in implementing innovation activities that depend upon data, funding sources, and talent. However, the rapidly developing smart city projects in China, where significant amounts of data are available from various sophisticated devices and generous funding opportunities, are providing rich opportunities for SMEs to explore data-driven innovation. Chinese governments are actively trying to engage SMEs in the process of smart city construction. When cooperating with the government, the availability of and access to data involved in government contracts, and the abilities required by the projects, help SMEs train and improve their innovation capacity. In this article, we address how obtaining different types of government contracts (equipment supply, platform building, data analysis) can influence firms’ innovation performance. Obtaining a given type of government contract is regarded as receiving a distinct type of treatment. The hypothesis is that data analysis contracts have a larger positive influence on innovation ability than platform building contracts, while platform building contracts in turn have a larger influence than equipment supply contracts. Focusing on the case of SMEs in China, this research aims to shed light on how the government and enterprises collaborate in smart city projects to facilitate innovation. Data on companies’ registered capital, industry, and software products from 1990–2020 is compiled from the Tianyancha website. A panel dataset is established with the key characteristics of the SMEs, their software products, and their records of government contracts.
Based on the companies’ basic characteristics, we divided the firms into six pairs of treatment and control groups using propensity score matching (PSM), then ran a validity test to confirm that the division was reliable. Based on the established control and treatment pairs, we ran a difference-in-differences (DID) model. The statistics show mixed results. Hypothesis 1, which indicates that companies obtaining data analysis contracts experience greater innovation improvements than those with platform-building contracts, is partially confirmed when using software copyrights as the outcome variable; when using patent data as the indicator, however, the result is insignificant. Hypothesis 2, which posits that companies with platform-building contracts show greater innovation improvements than those with equipment supply contracts, is not supported. Hypothesis 3, which suggests that companies receiving government contracts have higher innovation outputs than those without, is confirmed. Case studies then reveal the complex mechanisms behind these results…(More)”.
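The matching-plus-DID design the paper describes can be sketched with synthetic data (all numbers, variable names, and the single matching covariate below are invented for illustration; this is not the paper's dataset, code, or full PSM specification):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical firm covariate: log registered capital.
capital = rng.normal(5.0, 1.0, n)
# Treatment (e.g. winning a data-analysis contract) is more likely for larger firms,
# which is why naive treated-vs-untreated comparisons would be biased.
p_treat = 1.0 / (1.0 + np.exp(-(capital - 5.0)))
treated = rng.random(n) < p_treat

# Pre/post innovation output (e.g. software copyrights); true treatment effect = 2.0.
pre = 1.0 * capital + rng.normal(0, 0.5, n)
post = pre + 0.5 + 2.0 * treated + rng.normal(0, 0.5, n)

# Step 1 (matching): pair each treated firm with the nearest untreated firm
# on the covariate, mimicking propensity-score-style matching.
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(capital[c_idx][None, :] - capital[t_idx][:, None]).argmin(axis=1)]

# Step 2 (DID): difference the pre/post change of treated firms against
# the pre/post change of their matched controls.
did = (post[t_idx] - pre[t_idx]).mean() - (post[matches] - pre[matches]).mean()
print(round(did, 2))  # should land near the true effect of 2.0
```

Because the common time trend (+0.5) cancels in the double difference, the estimate recovers the treatment effect even though treatment assignment depends on firm size.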

Unpacking OpenAI’s Amazonian Archaeology Initiative


Article by Lori Regattieri: “What if I told you that one of the most well-capitalized AI companies on the planet is asking volunteers to help them uncover “lost cities” in the Amazonia—by feeding machine learning models with open satellite data, lidar, “colonial” text and map records, and indigenous oral histories? This is the premise of the OpenAI to Z Challenge, a Kaggle-hosted hackathon framed as a platform to “push the limits” of AI through global knowledge cooperation. In practice, this is a product development experiment cloaked as public participation. The contributions of users, the mapping of biocultural data, and the modeling of ancestral landscapes all feed into the refinement of OpenAI’s proprietary systems. The task itself may appear novel. The logic is not. This is the familiar playbook of Big Tech firms—capture public knowledge, reframe it as open input, and channel it into infrastructure that serves commercial, rather than communal goals.

The “challenge” is marketed as a “digital archaeology” experiment: it invites participants from around the world to search for “hidden” archaeological sites in the Amazonia biome (Brazil, Bolivia, Colombia, Ecuador, Guyana, Peru, Suriname, Venezuela, and French Guiana) using a curated stack of open-source data. The competition requires participants to use OpenAI’s latest GPT-4.1 and the o3/o4-mini models to parse multispectral satellite imagery, LiDAR-derived elevation maps (Light Detection and Ranging is a remote sensing technology that uses laser pulses to generate high-resolution 3D models of terrain, including areas covered by dense vegetation), historical maps, and digitized ethnographic archives. Competing teams or individuals need to geolocate “potential” archaeological sites, argue their significance using verifiable public sources, and present reproducible methodologies. Prize incentives total $400,000 USD, with a first-place award of $250,000 split between cash and OpenAI API credits.
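To make the LiDAR step less abstract, here is a minimal sketch of the kind of terrain analysis such pipelines rely on: a local relief model, which subtracts a neighborhood mean from a digital elevation model so that broad terrain drops out and subtle constructed features (ditches, ring villages, causeways) stand out. The grid below is synthetic; it is not the challenge's actual data, models, or methods.

```python
import numpy as np

# Synthetic digital elevation model (DEM): gentle background terrain plus a
# faint raised ring of the kind LiDAR can reveal under forest canopy.
size = 128
y, x = np.mgrid[0:size, 0:size]
dem = 0.002 * x + 0.05 * np.sin(y / 9.0)        # background slope and undulation
r = np.hypot(x - 64, y - 64)
dem += 0.6 * np.exp(-((r - 20) ** 2) / 8.0)     # 0.6 m earthwork ring at radius ~20 cells

# Local relief model: subtract a 15x15-cell neighborhood mean.
k = 15
pad = np.pad(dem, k // 2, mode="edge")
smooth = np.zeros_like(dem)
for i in range(size):
    for j in range(size):
        smooth[i, j] = pad[i:i + k, j:j + k].mean()
relief = dem - smooth

# Cells well above their local mean are candidate anomalies to inspect.
candidates = np.argwhere(relief > 0.3)
ring_dist = np.hypot(candidates[:, 1] - 64, candidates[:, 0] - 64)
print(len(candidates))
```

In this toy grid the background slope and undulation cancel out of `relief`, and the surviving high-relief cells cluster on the buried ring, which is exactly the signal an analyst (or a model) would then try to interpret.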

While framed as a novel invitation to “anyone” to do archaeological research, the competition focuses mainly on the Brazilian territory, transforming the Amazonia and its peoples into an open laboratory for model testing. What is presented as scientific crowdsourcing is in fact a carefully designed mechanism for refining geospatial AI at scale. Participants supply not just labor and insight, but novel training and evaluation strategies that extend far beyond heritage science and into the commercial logics of spatial computing…(More)”.

Will AI speed up literature reviews or derail them entirely?


Article by Sam A. Reynolds: “Over the past few decades, evidence synthesis has greatly increased the effectiveness of medicine and other fields. The process of systematically combining findings from multiple studies into comprehensive reviews helps researchers and policymakers to draw insights from the global literature. AI promises to speed up parts of the process, including searching and filtering. It could also help researchers to detect problematic papers. But in our view, other potential uses of AI mean that many of the approaches being developed won’t be sufficient to ensure that evidence syntheses remain reliable and responsive. In fact, we are concerned that the deployment of AI to generate fake papers presents an existential crisis for the field.

What’s needed is a radically different approach — one that can respond to the updating and retracting of papers over time.

We propose a network of continually updated evidence databases, hosted by diverse institutions as ‘living’ collections. AI could be used to help build the databases. And each database would hold findings relevant to a broad theme or subject, providing a resource for an unlimited number of ultra-rapid and robust individual reviews…

Currently, the gold standard for evidence synthesis is the systematic review. These are comprehensive, rigorous, transparent and objective, and aim to include as much relevant high-quality evidence as possible. They also use the best methods available for reducing bias. In part, this is achieved by getting multiple reviewers to screen the studies; declaring whatever criteria, databases, search terms and so on are used; and detailing any conflicts of interest or potential cognitive biases…(More)”.
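The "living collection" idea above can be illustrated with a toy sketch (a hypothetical design for illustration, not the authors' specification): each study keeps a version history, and retracted papers are automatically excluded from any synthesis drawn from the collection.

```python
from dataclasses import dataclass, field

@dataclass
class Study:
    study_id: str
    finding: str
    retracted: bool = False
    versions: list = field(default_factory=list)  # superseded findings, kept for audit

class LivingEvidenceDB:
    """Toy 'living' evidence collection that responds to updates and retractions."""

    def __init__(self):
        self.studies = {}

    def add(self, study_id, finding):
        self.studies[study_id] = Study(study_id, finding)

    def update(self, study_id, new_finding):
        s = self.studies[study_id]
        s.versions.append(s.finding)   # preserve history for reproducibility
        s.finding = new_finding

    def retract(self, study_id):
        self.studies[study_id].retracted = True

    def synthesize(self):
        # An ultra-rapid review sees only current, non-retracted findings.
        return [s.finding for s in self.studies.values() if not s.retracted]

db = LivingEvidenceDB()
db.add("s1", "drug A reduces risk")
db.add("s2", "drug A has no effect")
db.retract("s2")                                   # paper later retracted
db.update("s1", "drug A reduces risk (updated estimate)")
print(db.synthesize())
```

The key property is that any review built on `synthesize()` automatically reflects the current state of the literature, rather than freezing the evidence at the moment a systematic review happened to be conducted.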

Mapping the Unmapped


Article by Maddy Crowell: “…Most of St. Lucia, which sits at the southern end of an archipelago stretching from Trinidad and Tobago to the Bahamas, is poorly mapped. Aside from strips of sandy white beaches that hug the coastline, the island is draped with dense rainforest. A few green signs hang limp and faded from utility poles like an afterthought, identifying streets named during more than a century of dueling British and French colonial rule. One major road, Micoud Highway, runs like a vein from north to south, carting tourists from the airport to beachfront resorts. Little of this is accurately represented on Google Maps. Almost nobody uses, or has, a conventional address. Locals orient one another with landmarks: the red house on the hill, the cottage next to the church, the park across from Care Growell School.

Our van wound off Micoud Highway into an empty lot beneath the shade of a banana tree. A dog panted, belly up, under the hot November sun. The group had been recruited by the Humanitarian OpenStreetMap Team, or HOT, a nonprofit that uses an open-source data platform called OpenStreetMap to create a map of the world that resembles Google’s with one key exception: Anyone can edit it, making it a sort of Wikipedia for cartographers.

The organization has an ambitious goal: map the world’s unmapped places to help relief workers reach people when the next hurricane, fire, or other crisis strikes. Since its founding in 2010, some 340,000 volunteers around the world have been remotely editing OpenStreetMap to better represent the Caribbean, Southeast Asia, parts of Africa, and other regions prone to natural disasters or humanitarian emergencies. In that time, they have mapped more than 2.1 million miles of roads and 156 million buildings. They use aerial imagery captured by drones, aircraft, or satellites to help trace unmarked roads, waterways, buildings, and critical infrastructure. Once this digital chart is more clearly defined, field-mapping expeditions like the one we were taking add the names of every road, house, church, or business represented by gray silhouettes on their paper maps. The effort fine-tunes the places that bigger players like Google Maps get wrong — or don’t get at all…(More)”.
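For a sense of how a figure like "miles of roads mapped" is derived, here is a minimal sketch (the coordinates below are invented for illustration): in OpenStreetMap a road is a "way", a sequence of latitude/longitude nodes, and its length is the sum of great-circle (haversine) distances between consecutive nodes.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def way_length_miles(nodes):
    """Length of an OSM-style way: sum of segment distances between consecutive nodes."""
    return sum(haversine_miles(*a, *b) for a, b in zip(nodes, nodes[1:]))

# A short hypothetical stretch of road in St. Lucia (coordinates invented).
way = [(13.90, -60.97), (13.91, -60.96), (13.92, -60.96)]
print(round(way_length_miles(way), 2))
```

Summing `way_length_miles` over every road-tagged way in a region is, in essence, how aggregate mileage statistics for volunteer mapping efforts are computed.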

Why are “missions” proving so difficult?


Article by James Plunkett: “…Unlike many political soundbites, however, missions have a strong academic heritage, drawing on years of work from Mariana Mazzucato and others. They gained support as a way for governments to be less agnostic about the direction of economic growth and its social implications, most obviously on issues like climate change, while still avoiding old-school statism. The idea is to pursue big goals not with top-down planning but with what Mazzucato calls ‘orchestration’, using the power of the state to drive innovation and shape markets to an outcome.

For these reasons, missions have proven increasingly popular with governments. They have been used by administrations from the EU to South Korea and Finland, and even in Britain under Theresa May, although she didn’t have time to make them stick.

Despite these good intentions and heritage, however, missions are proving difficult. Some say the UK government is “mission-washing” – using the word, but not really adopting the ways of working. And although missions were mentioned in the spending review, their role was notably muted when compared with the central position they had in Labour’s manifesto.

Still, it would seem a shame to let missions falter without interrogating the reasons. So why are missions so difficult? And what, if anything, could be done to strengthen them as Labour moves into year two? I’ll touch on four characteristics of missions that jar with Whitehall’s natural instincts, and in each case I’ll ask how it’s going, and how Labour could be bolder…(More)”.

This new cruise-ship activity is surprisingly popular


Article by Brian Johnston: “Scientists are always short of research funds, but the boom in the popularity of expedition cruising has given them an unexpected opportunity to access remote places.

Instead of making single, expensive visits to Antarctica, for example, scientists hitch rides on cruise ships that make repeat visits and provide the opportunity for data collection over an entire season.

Meanwhile, cruise passengers’ willingness to get involved in a “citizen science” capacity is proving invaluable for crowdsourcing data on everything from whale migration and microplastics to seabird populations. And it isn’t only the scientists who benefit. Guests get a better insight into the environments in which they sail, and feel that they’re doing their bit to understand and preserve the wildlife and landscapes around them.

Citizen-science projects produce tangible results, among them that ships in Antarctica now sail under 10 knots after a study showed that, at that speed, whales have a far greater chance of avoiding or surviving ship strikes. In 2023 Viking Cruises encountered rare giant phantom jellyfish in Antarctica, and in 2024 discovered a new chinstrap penguin colony near Antarctica’s Astrolabe Island.

Viking’s expedition ships have a Science Lab and the company works with prestigious partners such as the Cornell Lab of Ornithology and Norwegian Polar Institute. Expedition lines with visiting scientist programs include Chimu Adventures, Lindblad Expeditions and Quark Expeditions, which works with Penguin Watch to study the impact of avian flu…(More)”.

Red Teaming Artificial Intelligence for Social Good


UNESCO Report: “Generative Artificial Intelligence (Gen AI) has become an integral part of our digital landscape and daily life. Understanding its risks and participating in solutions is crucial to ensuring that it works for the overall social good. This PLAYBOOK introduces Red Teaming as an accessible tool for testing and evaluating AI systems for social good, exposing stereotypes, bias and potential harms. As a way of illustrating harms, practical examples of Red Teaming for social good are provided, building on the collaborative work carried out by UNESCO and Humane Intelligence. The results demonstrate forms of technology-facilitated gender-based violence (TFGBV) enabled by Gen AI and provide practical actions and recommendations on how to address these growing concerns.

Red Teaming — the practice of intentionally testing Gen AI models to expose vulnerabilities — has traditionally been used by major tech companies and AI labs. One tech company surveyed 1,000 machine learning engineers and found that 89% reported vulnerabilities (Aporia, 2024). This PLAYBOOK provides access to these critical testing methods, enabling organizations and communities to actively participate. Through the structured exercises and real-world scenarios provided, participants can systematically evaluate how Gen AI models may perpetuate, either intentionally or unintentionally, stereotypes or enable gender-based violence.By providing organizations with this easy-to-use tool to conduct their own Red Teaming exercises, participants can select their own thematic area of concern, enabling evidence-based advocacy for more equitable AI for social good…(More)”.