Stefaan Verhulst
Pew Research: “As the use of artificial intelligence (AI) increases rapidly, most people across 25 countries surveyed say they have heard or read at least a little about the technology.
And on balance, people are more concerned than excited about its growing presence in daily life.
A median of 34% of adults across these countries have heard or read a lot about AI, while 47% have heard a little and 14% say they’ve heard nothing at all, according to a spring 2025 Pew Research Center survey.
But many are worried about AI’s effects on daily life. A median of 34% of adults say they are more concerned than excited about the increased use of AI, while 42% are equally concerned and excited. A median of 16% are more excited than concerned…
Concerns about AI are especially common in the United States, Italy, Australia, Brazil and Greece, where about half of adults say they are more concerned than excited. But as few as 16% in South Korea are mainly concerned about the prospect of AI in their lives.

In fact, in many countries surveyed, the largest share of people are equally excited and concerned about the growing use of AI. In no country surveyed do more than three-in-ten adults say they are mainly excited.
The survey also finds a strong correlation between a country’s income – as measured by gross domestic product per capita – and awareness of AI. People in higher-income nations tend to have heard more about AI than those in less wealthy economies. For example, around half of adults in the comparatively wealthy countries of Japan, Germany, France and the U.S. have heard a lot about AI, but only 14% in India and 12% in Kenya say the same.
Trust in government to regulate AI
The survey also asked whether people trust their own country, the European Union, the U.S. and China to regulate the use of AI effectively.
Most people trust their own country to regulate AI. This includes 89% of adults in India, 74% in Indonesia and 72% in Israel. At the other end of the spectrum, only 22% of Greeks trust their country to regulate AI effectively.
Americans are almost evenly divided between trust in their country to regulate AI (44%) and distrust (47%)…(More)”.
Paper by Luisa Kruse and Max von Grafenstein: “The European data strategy aims to make the EU a leader in a data-driven world. To this aim, the EU is creating a single market for data where 1) data can flow across sectors for the benefit of all; 2) European laws like data protection and competition law are fully respected; and 3) the rules for access and use of data are fair, practical and clear. In order to structure the corresponding initiatives of legislators and public authorities, it is important to clarify the data ownership models on which the initiatives are based: proprietary data models, open data models, or so-called data commons models. Based on a literature analysis, this article first provides an overview of the economic and social advantages and disadvantages discussed for proprietary and open data models and, against this background, clarifies the concept of the data commons. The article understands the data commons concept to mean that everyone has, in principle, an equal right to exploit the value of data and to control its associated risks. On this understanding, the purely technical power of a data holder to exclude others from “her” data does not mean that she has a superior or even exclusive right to generate value from it. By means of legal mechanisms, the competent legislator or public authorities may therefore counteract such purely de facto powers of data holders by opening their technical access control over data to other parties and defining the conditions of its use. In doing so, the data holder’s interest in keeping the data to herself must be weighed against the interests of data users in using the data, as well as the interests of all parties affected by this use in controlling the related risks. While this balancing exercise may be established, in a more or less general manner, by the European or national legislator or even by municipalities, data intermediaries will have to play a central role in ensuring that this balancing of interests is resolved in specific cases. Data intermediaries may do this not only by specifying the general data usage rules provided by legislators and municipalities in the form of context-specific access and use conditions, but above all by monitoring compliance with these conditions…(More)”.
Essay by Stefaan G. Verhulst: “Generative Artificial Intelligence (AI) has given millions of people something extraordinary: a way to move forward from a tabula rasa. The once-intimidating blank page — whether in writing, coding, designing, or imagining — no longer induces paralysis. With a few keystrokes, ideas flow, drafts appear, and tasks that once demanded hours of toil now unfold in seconds. Never before has human creativity been so efficiently scaffolded.

Yet this newfound fluency comes at a cost. In outsourcing the burden of beginnings, we risk dulling the more subtle and essential capacity of inquiry — the art of asking good questions. As generative systems like ChatGPT, Gemini, and Claude proliferate, they seem to provide answers for everything. But these systems, for all their prowess, only ever respond to prompts; they bring no intention or understanding to their replies. They do not think; they predict. What appears as insight is, in fact, the statistical sequencing of tokens — the next likely word rather than a considered idea. In short, generative AI simulates reasoning without ever engaging in it.
All of this means that the value of the work generated by these remarkable systems continues to depend critically on the quality of the questions we ask: the prompts we pose and the nudges we offer to coax alternative statistical pathways. We are, in short, increasingly living in an era that is answer-rich yet question-poor. The risk is that we overlook this, in the process losing our critical — and deeply human — capacity to ask the questions that truly matter…(More)”.
Article by Dane Gambrell: “On a sweltering August afternoon in Williamsburg, Brooklyn, technologist Chris Whong led a small group of researchers, students, and local community members on an unusual walking tour. We weren’t visiting the neighborhood’s trendy restaurants or thrift shops. Instead, we were hunting for overlooked public spaces: pocket parks, street plazas, and other spaces that many New Yorkers walk past without even realizing they’re open to the public.
Our map for this expedition was a new app called NYC Public Space. Whong, a former public servant in NYC’s Department of City Planning, built the platform using generative AI tools to write code he didn’t know how to write himself – a practice often called “vibe coding.” The result is a searchable dataset and map of roughly 2,800 public spaces across New York City, from massive green spaces like Flushing Meadows–Corona Park to tiny triangular plazas you’ve probably never noticed.
New York City has no shortage of places to sit, relax, or eat lunch outside. The city’s public realm includes more than 2,000 parks, hundreds of street plazas, playgrounds, and waterfront areas, as well as roughly 600 privately owned public spaces (POPS) created by developers in exchange for zoning benefits.
What it lacks is an easy way for people to discover these spaces. Some public spaces appear on Google Maps or Apple Maps, but many don’t. Even when they do, it’s often unclear what amenities they offer and whether they’re actually publicly accessible. You might walk by a building in your neighborhood every day but have no idea that it contains a courtyard or indoor plaza open to the public…(More)”
Article by Maxim Samson: “Mountains, meridians, rivers, and borders—these are some of the features that divide the world on our maps and in our minds. But geography is far less set in stone than we might believe, and, as Maxim Samson’s Earth Shapers contends, in our relatively short time on this planet, humans have become experts at fundamentally reshaping our surroundings.
From the Qhapaq Ñan, the Inca’s “great road,” and Mozambique’s colonial railways to a Saudi Arabian smart city, and from Korea’s sacred Baekdu-daegan mountain range and the Great Green Wall in Africa to the streets of Chicago, Samson explores how we mold the world around us. And how, as we etch our needs onto the natural landscape, we alter the course of history. These fascinating stories of connectivity show that in our desire to make geographical connections, humans have broken through boundaries of all kinds, conquered treacherous terrain, and carved up landscapes. We crave linkages, and though we do not always pay attention to the in-between, these pathways—these ways of “earth shaping,” in Samson’s words—are key to understanding our relationship with the planet we call home.
An immense work of cultural geography touching on ecology, sociology, history, and politics, Earth Shapers argues that, far from being constrained by geography, we are instead its creators…(More)”.
Paper by Mattia Mazzoli et al: “The pandemic served as an important test case of complementing traditional public health data with non-traditional data (NTD) such as mobility traces, social media activity, and wearables data to inform decision-making. Drawing on an expert workshop and a targeted survey of European modelers, we assess the promise and persistent limitations of such data in pandemic preparedness and response. We distinguish between “first-mile” challenges (accessing and harmonizing data) and “last-mile” challenges (translating insights into actionable interventions). The expert workshop, held in 2024, brought together participants from public health, academia, policymaking, and industry to reflect on lessons learned and define strategies for translating NTD insights into policymaking. The survey offers evidence of the barriers faced during COVID-19 and highlights key instances of data unavailability and underuse. Our findings reveal ongoing issues with data access, quality, and interoperability, as well as institutional and cognitive barriers to evidence-based decision-making. Around 66% of datasets suffered access problems, with reluctance to share NTD double that for traditional data (30% vs. 15%). Only 10% of respondents reported they could use all the data they needed. We propose a set of recommendations: for first-mile challenges, solutions focus on technical and legal frameworks for data access; for last-mile challenges, we recommend fusion centers, decision accelerator labs, and networks of scientific ambassadors to bridge the gap between analysis and action. Realizing the full value of NTD requires sustained investment in institutional readiness, cross-sectoral collaboration, and a shift toward a culture of data solidarity. Grounded in the lessons of COVID-19, the article can be used to design a roadmap for using NTD to confront a broader array of public health emergencies, from climate shocks to humanitarian crises…(More)”
Essay by Dominic Burbidge: “…The starting point for AI ethics must be the recognition that AI is a simple and limited instrument. Until we master this point, we cannot hope to work back toward a type of ethics that best fits the industry.
Unfortunately, we are constantly being bombarded with the exact opposite: an image of AI as neither simple nor limited. We are told instead that AI is an all-purpose tool that is now taking over everything. There are two prominent versions of this image and both are misguided.
The first is the appeal of the technology’s exponential improvement. Moore’s Law is a good example of this kind of widespread sentiment, a law that more or less successfully predicted that the number of transistors in an integrated circuit would double approximately every two years. That looks like a lot, but remember: all you have in front of you is more transistors. The curve of exponential change looks impressive on a graph, but really the most important change was when we had no transistors and then William Shockley, John Bardeen, and Walter Brattain invented one. The multiple of change from zero to one is infinite, so any subsequent “exponential” rate of change is a climb-down from that original invention.
When technology becomes faster, smaller, or lighter, it gives us the impression of ever-faster change, but all we are really doing is failing to come up with new inventions, such that we have to rely on reworking and remarketing our existing products. That is not exactly progress of the innovative kind, and it by no means suggests that a given technology is unlimited in future potential.
The second argument we often hear is that AI is taking on more and more tasks, which is why it is unlimited in a way that is different from other, more single-use technologies of the past. We are also told that AI is likely to adopt ever more cognitively demanding activities, which seems to be further proof of its open-ended possibilities.
This is sort of true but actually a rather banal point, in the sense that technologies typically take on more and more uses than their original designers could have expected. But that is not evidence that the technology itself has changed. The commercially available microwave oven, for example, came about when American electrical engineer Percy Spencer developed it from British radar technology used in the Second World War, allegedly discovering the heating effect when the candy in his pocket melted in front of a radar set. So technology shifts and reapplies itself, and in this way naturally takes on all kinds of unexpected uses. But finding new uses for something does not mean its possible uses will be infinite…(More)”.
Paper by Nicolas Steinacker-Olsztyn, Devashish Gosain, and Ha Dao: “Large Language Models (LLMs) increasingly rely on web crawling to stay up to date and accurately answer user queries. These crawlers are expected to honor robots.txt files, which govern automated access. In this study, for the first time, we investigate whether reputable news websites and misinformation sites differ in how they configure these files, particularly in relation to AI crawlers. Analyzing a curated dataset, we find a stark contrast in robots.txt configurations: 60.0% of reputable sites disallow at least one AI crawler, compared to just 9.1% of misinformation sites. Reputable sites forbid an average of 15.5 AI user agents, while misinformation sites prohibit fewer than one. We then measure active blocking behavior, where websites refuse to return content when HTTP requests include AI crawler user agents, and reveal that both categories of websites employ it. Notably, the behavior of reputable news websites in this regard aligns more closely with their declared robots.txt directives than that of misinformation websites. Finally, our longitudinal analysis reveals that this gap has widened over time, with AI-blocking by reputable sites rising from 23% in September 2023 to nearly 60% by May 2025. Our findings highlight a growing asymmetry in content accessibility that may shape the training data available to LLMs, raising essential questions for web transparency, data ethics, and the future of AI training practices…(More)”
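To see what such a configuration looks like in practice, here is a minimal sketch (my own illustration, not the authors’ measurement pipeline) that checks a hypothetical robots.txt of the kind the paper describes against a few widely documented AI crawler user agents, using only Python’s standard library:

```python
# A minimal sketch, assuming a hypothetical robots.txt in the style many
# reputable news sites now publish. The user-agent list is illustrative,
# not the paper's full set of AI crawlers.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

AI_USER_AGENTS = ["GPTBot", "CCBot", "ClaudeBot", "Google-Extended"]

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A site "disallows an AI crawler" here if that agent may not fetch the root.
blocked = [ua for ua in AI_USER_AGENTS if not rp.can_fetch(ua, "https://example.com/")]
print("AI crawlers disallowed:", blocked)  # -> ['GPTBot', 'CCBot']
```

Note that this only reads the declared policy; the paper’s second measurement, active blocking, would additionally require sending HTTP requests with these user-agent strings and observing whether content is returned.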
Article by Andy Masley: “Suppose I want to run my own tiny AI model. Not one a lab made, just my own model on a personal device in my home. I go out and buy a second very small computer to run it. I use it a lot. Each day, I ask 100 questions to my mini AI model. Each prompt uses about ten times as much energy as a Google search, but a Google search uses so little energy that the prompt, too, uses only a tiny amount. Altogether, my 100 prompts use the same energy as running a microwave for 4 minutes, or playing a video game for about 10 minutes.
Sending 100 prompts to this AI model every single day adds 1/1000th to my daily emissions.
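As a rough consistency check on those numbers, here is a back-of-envelope sketch; the microwave wattage is my own assumption, not a figure from the essay, and everything else follows from the equivalences stated above:

```python
# Back-of-envelope check, working backwards from the essay's claim that
# 100 prompts a day use about as much energy as 4 minutes of microwave use.
MICROWAVE_W = 1100     # assumed typical microwave power draw, in watts
MICROWAVE_MIN = 4      # minutes of microwave use the essay equates to
PROMPTS_PER_DAY = 100  # prompts in the thought experiment

daily_wh = MICROWAVE_W * MICROWAVE_MIN / 60   # ~73 Wh for all 100 prompts
per_prompt_wh = daily_wh / PROMPTS_PER_DAY    # ~0.73 Wh per prompt
per_search_wh = per_prompt_wh / 10            # essay: prompt ~ 10x a search

print(f"Implied energy per prompt: {per_prompt_wh:.2f} Wh")
print(f"Implied energy per search: {per_search_wh:.3f} Wh")
```

The implied per-prompt figure (roughly 0.7 Wh) sits within the range commonly cited for chatbot queries, which is what makes the “tiny addition” framing plausible.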

Is what I’m doing wrong?
I think the average person would say no. This is such a tiny addition that I personally should be able to decide whether it’s worthwhile for me. We don’t go around policing whether people have used microwaves a few seconds too long, or played a video game for a few minutes too long. Why try to decide whether it’s evil for me to spend a tiny fraction of my daily energy on a computer program I personally think is valuable? …
All these tiny hyper-optimized computers gathered in a single central location, serving hundreds of thousands of people at once: that is what a data center is. Data centers are concentrations of hyper-efficient computer processes that no one would have any issue with at all if they were in the homes of the individual people using them. If you wouldn’t have a problem with these tiny computers, you shouldn’t have a problem with data centers either. The only difference is the physical location of the computers themselves, and the fact that these tiny computers are actually combined into larger ones to serve multiple people at once. The only reason they stand out is that these processes are concentrated in one building, which makes them look large if you don’t consider how many people are using them. If instead you see data centers as what they really are, building-sized hyper-efficient computers that hundreds of thousands of people are using at any given moment, they stop looking bad for the environment. In fact, they are the most energy-efficient way to do large-scale computing, and computing is already very energy efficient. The larger the data center, the more energy-efficient it is…(More)”.
Google Cloud: “A year and a half ago…we published this list for the first time. It numbered 101 entries.
It felt like a lot at the time, and served as a showcase of how much momentum both Google and the industry were seeing around generative AI adoption. In the brief period then of gen AI being widely available, organizations of all sizes had begun experimenting with it and putting it into production across their work and across the world, doing so at a speed rarely seen with new technology.
What a difference these past months have made. Our list has now grown by 10X. And still, that’s just scratching the surface of what’s becoming possible with AI across the enterprise, or what might be coming in the next year and a half. [Looking for how to build AI use cases just like these? Check out our handy guide with 101 technical blueprints from some of these real-world examples.]
- Arizona Health Care Cost Containment System (AHCCCS), Arizona’s Medicaid agency serving more than 2 million people, built an Opioid Use Disorder Service Provider Locator using Vertex AI and Gemini to connect residents with local treatment options. Since its late 2021 launch, the platform has reached over 20,000 unique individuals across 120 Arizona cities with a 55%+ engaged session rate.
- Arizona State University’s ASU Prep division developed Archie, a Gemini-powered chatbot that provides real-time math tutoring support for middle school and high school students. The AI tutor identifies errors, provides hints and guidance, and increased students’ first-attempt correct answers by 6%.
- CareerVillage is building an app called Coach to empower job seekers, especially underrepresented youth, in their career preparedness; the app already features 35 career development activities, with the aim of offering more than 100 by next year.
- The Minnesota Division of Driver and Vehicle Services helps non-English speakers get licenses and other services with two-way, real-time translation.
- mRelief has built an SMS-accessible AI chatbot to simplify the application process for the SNAP food assistance program in the U.S., featuring easy-to-understand eligibility information and direct assistance within minutes rather than days.
- Nanyang Technological University, Singapore, deployed the Lyon Housing chatbot using Dialogflow CX and Gemini to handle student housing queries. The generative AI solution enhances student experience and saves the customer service workforce more than 100 hours per month.
- The Qatari Ministry of Labour has launched “Ouqoul,” an AI-powered platform designed to connect expatriate university graduates with job opportunities in the private sector. This platform streamlines the hiring process by integrating AI-driven candidate matching with ministry services for contract authentication and work permit issuance.
- Tabiya has built a conversational interface, Compass, that helps young people find employment opportunities; the platform asks questions and requests information, drawing out skills and experiences and matching those to appropriate roles.
- The University of Hawaii System uses Vertex AI and BigQuery to analyze labor market data and build the Hawaii Career Pathways platform, which evaluates student qualifications and interests to create customized profiles. Gemini provides personalized guidance to align students’ academic paths with career opportunities in Hawaii, helping retain graduates in the state.
- The University of Hawaii System also uses Google Translate to communicate with Pacific Islander students in their native languages, including Hawaiian, Māori, Samoan, Tongan, Cook Islands Māori, Cantonese, and Marshallese. The AI makes career guidance and communication more accessible to the diverse student population.
- West Sussex County Council, which serves 890,000 residents in the UK, uses Dialogflow to power an online chatbot that engages residents in real-time conversations to determine eligibility for adult social care services and benefits. The conversational AI helps residents quickly understand their qualification status among the 5% of inquiries that actually qualify, reducing pressure on the Customer Service Center.
- The Fulton Theatre cuts grant-writing time in half by using Gemini in Docs to fill in routine information, helping the team focus on growing the theatre and putting on shows that bring communities together…(More)”.