Stefaan Verhulst
Paper by Nicolas Steinacker-Olsztyn, Devashish Gosain, Ha Dao: “Large Language Models (LLMs) are increasingly relying on web crawling to stay up to date and accurately answer user queries. These crawlers are expected to honor robots.txt files, which govern automated access. In this study, for the first time, we investigate whether reputable news websites and misinformation sites differ in how they configure these files, particularly in relation to AI crawlers. Analyzing a curated dataset, we find a stark contrast: 60.0% of reputable sites disallow at least one AI crawler, compared to just 9.1% of misinformation sites in their robots.txt files. Reputable sites forbid an average of 15.5 AI user agents, while misinformation sites prohibit fewer than one. We then measure active blocking behavior, where websites refuse to return content when HTTP requests include AI crawler user agents, and reveal that both categories of websites utilize it. Notably, the behavior of reputable news websites in this regard aligns more closely with their declared robots.txt directive than that of misinformation websites. Finally, our longitudinal analysis reveals that this gap has widened over time, with AI-blocking by reputable sites rising from 23% in September 2023 to nearly 60% by May 2025. Our findings highlight a growing asymmetry in content accessibility that may shape the training data available to LLMs, raising essential questions for web transparency, data ethics, and the future of AI training practices…(More)”
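For readers curious how such measurements are made, here is a minimal sketch (not the authors' code) of the two checks the abstract describes: parsing a site's robots.txt for declared disallow rules against AI crawlers, and probing for active blocking by requesting pages with an AI-crawler User-Agent. The crawler tokens listed (GPTBot, ClaudeBot, CCBot, Google-Extended) are a small illustrative sample, and example.com is a placeholder domain.

```python
# Illustrative sketch, not the study's actual pipeline: compare what a site's
# robots.txt declares with what the server actually serves to AI crawler user agents.
import urllib.error
import urllib.request
import urllib.robotparser

# A small, non-exhaustive sample of real AI-crawler user-agent tokens.
AI_USER_AGENTS = ["GPTBot", "ClaudeBot", "CCBot", "Google-Extended"]

def robots_disallows(site: str, agent: str, path: str = "/") -> bool:
    """True if the site's robots.txt forbids `agent` from fetching `path`."""
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"https://{site}/robots.txt")
    rp.read()
    return not rp.can_fetch(agent, f"https://{site}{path}")

def status_for_agent(site: str, agent: str) -> int:
    """HTTP status code returned when requesting the homepage with a given User-Agent."""
    req = urllib.request.Request(f"https://{site}/", headers={"User-Agent": agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

if __name__ == "__main__":
    site = "example.com"  # placeholder domain
    for agent in AI_USER_AGENTS:
        declared = robots_disallows(site, agent)
        served = status_for_agent(site, agent)
        print(f"{agent}: robots.txt disallows={declared}, HTTP status={served}")
```

A gap between the two signals (for example, robots.txt permits an agent but requests under that agent return 403) is the kind of mismatch between declared directives and active blocking that the paper examines.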
Article by Andy Masley: “Suppose I want to run my own tiny AI model. Not one a lab made, just my own model on a personal device in my home. I go out and buy a second very small computer to run it. I use it a lot. Each day, I ask 100 questions to my mini AI model. Each prompt uses about ten times as much energy as a Google search, but a Google search is so tiny that the prompt also uses a tiny amount of energy. All together, my 100 prompts use the same energy as running a microwave for 4 minutes, or playing a video game for about 10 minutes.
Sending 100 prompts to this AI model every single day adds 1/1000th to my daily emissions.

Is what I’m doing wrong?
I think the average person would say no. This is such a tiny addition that I personally should be able to decide whether it’s worthwhile for me. We don’t go around policing whether people have used microwaves a few seconds too long, or played a video game for a few minutes too long. Why try to decide whether it’s evil for me to spend a tiny fraction of my daily energy on a computer program I personally think is valuable? …
All these tiny hyper-optimized computers in a single central location, serving hundreds of thousands of people at once, is what a data center is. Data centers are concentrations of hyper-efficient computer processes that no one would have any issue with at all if they were in the homes of the individual people using them. If you wouldn’t have a problem with these tiny computers, you shouldn’t have a problem with data centers either. The only difference is the physical location of the computers themselves, and the fact that these tiny computers are actually combined into larger ones to serve multiple people at once. The only reason they stand out are that these processes are concentrated in one building, which makes them look large if you don’t consider how many people are using them. If instead you see data centers as what they really are, building-sized hyper-efficient computers that hundreds of thousands of people are using at any given moment, they stop looking bad for the environment. In fact, they are the most energy-efficient way to do large scale computing, and computing is already very energy efficient. The larger the data center, the more energy-efficient it is…(More)”.
Google Cloud: “A year and a half ago…we published this list for the first time. It numbered 101 entries.
It felt like a lot at the time, and served as a showcase of how much momentum both Google and the industry were seeing around generative AI adoption. In the brief period that gen AI had then been widely available, organizations of all sizes had begun experimenting with it and putting it into production across their work and around the world, doing so at a speed rarely seen with new technology.
What a difference these past months have made. Our list has now grown by 10X. And still, that’s just scratching the surface of what’s becoming possible with AI across the enterprise, or what might be coming in the next year and a half. [Looking for how to build AI use cases just like these? Check out our handy guide with 101 technical blueprints from some of these real-world examples.]
- Arizona Health Care Cost Containment System (AHCCCS), Arizona’s Medicaid agency serving more than 2 million people, built an Opioid Use Disorder Service Provider Locator using Vertex AI and Gemini to connect residents with local treatment options. Since its late 2021 launch, the platform has reached over 20,000 unique individuals across 120 Arizona cities with a 55%+ engaged session rate.
- *Arizona State University’s ASU Prep division developed Archie, a Gemini-powered chatbot that provides real-time math tutoring support for middle school and high school students. The AI tutor identifies errors, provides hints and guidance, and increased students’ first-attempt correct answers by 6%.
- CareerVillage is building an app called Coach to empower job seekers, especially underrepresented youth, in their career preparedness; the app already features 35 career development activities, with the aim of offering more than 100 by next year.
- The Minnesota Division of Driver and Vehicle Services helps non-English speakers get licenses and other services with two-way, real-time translation.
- mRelief has built an SMS-accessible AI chatbot to simplify the application process for the SNAP food assistance program in the U.S., featuring easy-to-understand eligibility information and direct assistance within minutes rather than days.
- *Nanyang Technological University, Singapore, deployed the Lyon Housing chatbot using Dialogflow CX and Gemini to handle student housing queries. The generative AI solution enhances student experience and saves the customer service workforce more than 100 hours per month.
- The Qatari Ministry of Labour has launched “Ouqoul,” an AI-powered platform designed to connect expatriate university graduates with job opportunities in the private sector. This platform streamlines the hiring process by integrating AI-driven candidate matching with ministry services for contract authentication and work permit issuance.
- Tabiya has built a conversational interface, Compass, that helps young people find employment opportunities; the platform asks questions and requests information, drawing out skills and experiences and matching those to appropriate roles.
- *The University of Hawaii System uses Vertex AI and BigQuery to analyze labor market data and build the Hawaii Career Pathways platform, which evaluates student qualifications and interests to create customized profiles. Gemini provides personalized guidance to align students’ academic paths with career opportunities in Hawaii, helping retain graduates in the state.
- *The University of Hawaii System also uses Google Translate to communicate with Pacific Islander students in their native languages, including Hawaiian, Māori, Samoan, Tongan, Cook Islands Māori, Cantonese, and Marshallese. The AI makes career guidance and communication more accessible to the diverse student population.
- *West Sussex County Council, which serves 890,000 residents in the UK, uses Dialogflow to power an online chatbot that engages residents in real-time conversations to determine eligibility for adult social care services and benefits. The conversational AI helps residents quickly understand whether they qualify (only about 5% of inquiries do), reducing pressure on the Customer Service Center.
- The Fulton Theatre cuts grant-writing time in half by using Gemini in Docs to fill in routine information, helping the team focus on growing the theatre and putting on shows that bring communities together…(More)”.
About: “…a website that uses AI to track and map protests and activist movements around the world. The goal is to provide a clean, visual representation of these events for researchers… A non-profit project mapping protests and civic actions globally, we aggregate publicly reported events to help researchers, journalists and communities understand patterns of civic voice…(More)”.

Article by Viktor Mayer-Schönberger: “Much is being said about AI governance these days – by experts and pundits, lobbyists, journalists and policymakers. Convictions run high. Fundamental values are said to be at stake. Increasingly, AI governance statutes are passed. But what do we really mean – or ought to mean – when we speak about AI governance? In the West, at least three distinct categories of meaning can be identified.
The first is by far the most popular and it comes in many different variations and flavours. It’s what has been driving recent legislation, such as the European Union AI Act. And perhaps surprisingly, it has quite little to do with artificial intelligence. Its proponents scrutinise the output of AI processing and find it wanting, for a variety of reasons. …But expecting near perfection from machines while accepting much less from humans does not lead to better outcomes overall. Rather, it keeps us stuck with more flawed, albeit human, outputs… Moreover, terms such as ‘fair’ and ‘responsible’, frequently used in such AI governance debates, offer the advantage of vast interpretative flexibility, facilitating their use by many groups in support of their very diverse agendas. These different AI governance voices mean very different things when they use the same words – and from their vantage point that’s more often a feature than a bug, because it gives them and their cause anchorage in the public debates.
The second flavour of AI governance offers a very different take. By focusing on the current economic landscape of digital and online services, its proponents suggest that AI governance is less novel and rather a continuation of digital and internet governance debates that have been raging for decades (Mueller 2025). They argue that most building blocks of AI have been around for some time – data, processing power and self-learning algorithms – and been utilised quite unevenly in the digital economy, often to the effect that large economic players got larger…
The third flavour of AI governance shifts the focus yet again, away from how technology affects fairness or markets. Instead, the attention is on decision-making. If AI is much about helping humans make better decisions, either by guiding them to the supposedly best choice or by choosing for them, AI governance isn’t so much about technology as about how and to what extent individual decision-making processes are shaped by outside influence. It situates the governance question apart from the specifics of a particular technology and asks: How are others, especially society, shaping and altering individual decision-making processes?…(More)”.
Essay by Andrew Sorota: “…But a quieter danger lies in wait, one that may ultimately prove more corrosive to the human spirit than any killer robot or bioweapon. The risk is that we will come to rely on AI not merely to assist us but to decide for us, surrendering ever larger portions of collective judgment to systems that, by design, cannot acknowledge our dignity.
The tragedy is that we are culturally prepared for such abdication. Our political institutions already depend on what might be called a “paradigm of deference,” in which ordinary citizens are invited to voice preferences episodically — through ballots every few years — while day-to-day decisions are made by elected officials, regulators and technical experts.
Many citizens have even come to defer their civic role entirely by abstaining from voting, whether for symbolic meaning or due to sheer apathy. AI slots neatly into this architecture, promising to supercharge the convenience of deferring while further distancing individuals from the levers of power.
Modern representative democracy itself emerged in the 18th century as a solution to the logistical impossibility of assembling the entire citizenry in one place; it scaled the ancient city-state to the continental republic. That solution carried a price: The experience of direct civic agency was replaced by periodic, symbolic acts of consent. Between elections, citizens mostly observe from the sidelines. Legislative committees craft statutes, administrative agencies draft rules, central banks decide the price of money — all with limited direct public involvement.
This arrangement has normalized an expectation that complex questions belong to specialists. In many domains, that reflex is sensible — neurosurgeons really should make neurosurgical calls. But it also primes us to cede judgment even where the stakes are fundamentally moral or distributive. The democratic story we tell ourselves — that sovereignty rests with the people — persists, but the lived reality is an elaborate hierarchy of custodians. Many citizens have internalized that gap as inevitable…(More)”.
Paper by Edoardo Loru et al: “Large Language Models (LLMs) are increasingly embedded in evaluative processes, from information filtering to assessing and addressing knowledge gaps through explanation and credibility judgments. This raises the need to examine how such evaluations are built, what assumptions they rely on, and how their strategies diverge from those of humans. We benchmark six LLMs against expert ratings—NewsGuard and Media Bias/Fact Check—and against human judgments collected through a controlled experiment. We use news domains purely as a controlled benchmark for evaluative tasks, focusing on the underlying mechanisms rather than on news classification per se. To enable direct comparison, we implement a structured agentic framework in which both models and nonexpert participants follow the same evaluation procedure: selecting criteria, retrieving content, and producing justifications. Despite output alignment, our findings show consistent differences in the observable criteria guiding model evaluations, suggesting that lexical associations and statistical priors could influence evaluations in ways that differ from contextual reasoning. This reliance is associated with systematic effects: political asymmetries and a tendency to confuse linguistic form with epistemic reliability—a dynamic we term epistemia, the illusion of knowledge that emerges when surface plausibility replaces verification. Indeed, delegating judgment to such systems may affect the heuristics underlying evaluative processes, suggesting a shift from normative reasoning toward pattern-based approximation and raising open questions about the role of LLMs in evaluative processes…(More)”
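As an illustration only, the three-step procedure the abstract describes (selecting criteria, retrieving content, and producing justifications) can be sketched as a fixed pipeline applied identically to models and human participants. The criteria list, function names, and the `retrieve` / `ask_llm` callables below are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only: a simplified stand-in for a structured evaluation procedure
# of the kind described in the paper (select criteria, retrieve content, justify, judge).
from dataclasses import dataclass
from typing import Callable

# Example criteria, chosen for illustration; the study's actual criteria may differ.
EXAMPLE_CRITERIA = [
    "sourcing and citations",
    "transparency about ownership",
    "corrections policy",
    "tone and use of loaded language",
]

@dataclass
class Evaluation:
    domain: str
    criteria_used: str
    justification: str
    verdict: str  # e.g. "reliable" or "questionable"

def evaluate_domain(domain: str,
                    retrieve: Callable[[str], str],
                    ask_llm: Callable[[str], str]) -> Evaluation:
    """Run the same three-step procedure an evaluator (model or human) would follow."""
    # Step 1: choose which criteria to apply to this domain.
    criteria = ask_llm(
        f"Which of these criteria matter most for judging {domain}? {EXAMPLE_CRITERIA}"
    )
    # Step 2: retrieve content from the domain so the judgment is grounded in evidence.
    content = retrieve(domain)
    # Step 3: produce a written justification, then a final one-word verdict.
    justification = ask_llm(
        f"Using the criteria [{criteria}], assess this content from {domain}:\n{content[:2000]}"
    )
    verdict = ask_llm(
        f"Based on this assessment, answer 'reliable' or 'questionable' only:\n{justification}"
    )
    return Evaluation(domain, criteria, justification, verdict.strip().lower())
```

Holding the procedure fixed in this way is what allows the comparison the paper draws: when model and human verdicts agree but the selected criteria and justifications diverge, the difference lies in how the evaluation was built rather than in its output.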
Article by Samuel Greengard: “Urban planning has always focused on improving the way people, spaces, and objects interact. Yet, translating these complex dynamics into a livable environment is remarkably difficult. Seemingly small differences in design can unleash profound impacts on the people who live in a city.
To better navigate this complexity, planners increasingly are turning to digital technology, including artificial intelligence (AI). While data-driven planning isn’t new, these tools deliver a more sophisticated framework. This evolution, referred to as algorithmic urbanism, blends traditional planning techniques with advanced analytics to address challenges like congestion, health, safety, and quality of life.
“Buildings, streets, trees, and numerous other factors influence how people move about, how economic activity takes place, and how various events unfold,” said Luis Bettencourt, professor of Ecology and Evolution and director of the Mansueto Institute for Urban Innovation at the University of Chicago. “Tools like AI and digital twins spot opportunities to rethink and reinvent things.”
This might include anything from optimizing a network of bicycle lanes to simulating zoning changes and land-use scenarios. It could also incorporate ways to improve recreation, ease congestion, and reduce energy use. Yet, like other forms of AI, algorithmic urbanism introduces risks, including the potential for perpetuating historical data biases, misusing or abusing data, and concealing how decisions take place.
The idea of using data and algorithms to design better cities extends back to the 1970s. That’s when computing tools like geographic information systems and business intelligence began to extract insights from data—and to provide more precise methods for managing urban growth.
Satellite imagery, vast databases, and environmental sensors followed. “The technology emerged as a valuable tool for strategic planning,” said Rob Kitchin, Professor of Human Geography at Maynooth University in Ireland. “It allowed planners to run detailed simulations and better understand scenarios, such as if you add a shopping mall, how will it impact traffic flow, congestion, and surrounding infrastructure.”…(More)”
Time Magazine: “The Best Inventions of 2025…Rescuing historical data helps researchers better understand and model climate change, especially in under-resourced regions. Decades-old records documenting daily precipitation and temperature were often handwritten, and MeteoSaver’s software, working alongside human scientists, can digitize and transcribe these records into machine-readable formats like spreadsheets, speeding up the process…(More)”.
Paper by Sheikh Kamran Abid et al: “As disasters become more frequent and complex, the integration of artificial intelligence (AI) with crowdsourced data from social media is emerging as a powerful approach to enhance disaster management and community resilience. This study investigates the potential of AI-enhanced crowdsourcing to improve emergency preparedness and response. A systematic review was conducted using both qualitative and quantitative methodologies, guided by the PRISMA framework, to identify and evaluate relevant literature. The findings reveal that AI systems can effectively process real-time social media data to deliver timely alerts, coordinate emergency actions, and engage communities. Key themes explored include the effectiveness of community participation, AI’s capacity to manage large-scale information flows, and the challenges posed by misinformation, data privacy, and infrastructural limitations. The results suggest that when strategically implemented, AI-enhanced crowdsourcing can play a critical role in building adaptive and sustainable disaster management frameworks. The paper concludes with practical and policy-level recommendations for integrating these technologies into Pakistan’s disaster management systems…(More)”.